Precise method of dating Earth materials by measuring the relative abundances of a long-lived radioactive parent element and its daughter products, or by measuring the presence of a short-lived radioactive element. Those measurements are precise because the decay rates of many isotopes have been extensively documented and do not vary with the physical conditions found in the Earth’s outer layers. Consequently, each radioactive isotope used in the dating process has been decaying at a known rate since it was incorporated into the rock that contains it, and its decay products have been accumulating at a corresponding rate. For example, when a uranium-bearing mineral crystallizes from magma, the new crystal contains no lead from any previous decay process, so the radiometric clock starts ticking at that moment. As the uranium in the mineral decays, its daughter products are trapped, and measurable quantities of lead eventually accumulate.
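To make the arithmetic behind that clock explicit, here is a minimal sketch in Python (the function name and the sample numbers are illustrative, not drawn from this entry). It applies the standard decay-law relation t = ln(1 + D/P) / λ, where P and D are the present parent and radiogenic-daughter abundances and λ = ln 2 / half-life, and it assumes a closed system with no daughter isotope present at crystallization:

import math

def radiometric_age(parent_atoms, daughter_atoms, half_life_years):
    """Age from a parent/daughter ratio, assuming a closed system
    and zero daughter isotope at the moment the mineral formed."""
    decay_constant = math.log(2) / half_life_years  # lambda = ln(2) / half-life
    return math.log(1 + daughter_atoms / parent_atoms) / decay_constant

# Illustration with U-238 -> Pb-206 (half-life about 4.468 billion years):
# a mineral whose accumulated lead equals half its remaining uranium.
age = radiometric_age(parent_atoms=1.0, daughter_atoms=0.5,
                      half_life_years=4.468e9)
print(f"Estimated age: {age / 1e9:.2f} billion years")  # about 2.61 billion years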
Historical Background: In 1896 the discovery of the natural radioactive decay of uranium by the French physicist Henri Becquerel opened the door to a cornucopia of scientific discoveries. In 1902 the physicist Ernest Rutherford and the chemist Frederick Soddy, working at Canada’s McGill University, determined that radioactive elements, such as uranium and thorium, broke down at a fixed rate over time into other elements in a predictable sequence or series. Their discovery led to the identification of the half-life and to their disintegration theory of radioactivity, which proposed that the atoms of an unstable element spontaneously disintegrate over time, transmuting into atoms of other elements.
Their
research into radioactive decay, coupled with the work of their colleague,
Kasimir Fajans, resulted in the Radioactive Displacement Law of Fajans and
Soddy that described the products of alpha and beta decay.
That discovery intrigued Bertram B. Boltwood (1870-1927), a radiation chemist working at Yale University. Boltwood was spurred on when, during a 1905 lecture at Yale, Rutherford challenged the scientific community to use radioactive decay to date rocks. Boltwood began studying the radioactive series Rutherford and Soddy had defined, and in 1905 he found that lead was always present in uranium and thorium ores. He
concluded that lead was the final product of the radioactive decay of uranium
and thorium. In 1907, he reasoned that once the rate at which uranium decays is
known (the half decay period or half-life), the proportion of lead in the
uranium ores could be used as a kind of measuring device, or clock, since it
would tell geoscientists when that ore, and therefore the Earth’s crust,
formed. Boltwood’s pioneering research, although somewhat crude when measured
against today’s far more sophisticated techniques (for example, the use of the
mass spectrometer to identify atoms by weight), put the Earth’s age at 1.2
billion years, which, for that time, was a dramatic increase, and in the direction most scientists believed to be correct.
Shortly
after Boltwood’s discovery in 1907 that uranium decayed slowly to stable lead, Arthur
Holmes, an undergraduate physics student at University College London, was
so taken by the geological implications of that discovery that he switched
majors to geology. By 1911, using only analytical chemistry applied to a few
mineral samples, Holmes established a framework for the geologic time scale
that proved to be uncannily accurate, considering the relatively primitive
nature of his approach (since it predated the discovery of isotopes). Building
on Boltwood’s pioneering work, Holmes performed the very first uranium-lead
analysis of a rock undertaken specifically for age-dating purposes. That research
resulted in a date of 370 million years for a Devonian specimen. Although only
21 years old and still an undergraduate, Holmes had embarked on a lifetime’s
quest “to graduate the geological column with an ever-increasingly accurate
time scale.”
In
1913 he wrote The Age of the Earth, a
book that almost immediately became justly famous. In that book Holmes
estimated the Earth’s age at 1.6 billion years. It is quite extraordinary that
at that time he was only 23 and had not completed his doctoral studies. After
publication Holmes became recognized as the world’s authority on geochronology.
But opposition from established geologists who clung to the belief that the
Earth was 100 million years old was formidable. Key advocates of the opposing
position included scientists who supported ideas the famous Scottish physicist
William Thomson, perhaps better known as Lord Kelvin, had advocated shortly
before his death in 1907. Other well-known opponents included German physicist
Hermann von Helmholtz, American astronomer Simon Newcomb, and Charles Darwin’s
astronomer-mathematician son, George H. Darwin.
However,
by the early to mid-1920s Holmes’s work was vindicated when both the British
Association for the Advancement of Science and the National Research Council of
the U.S. National Academy of Sciences came down on the side of the Earth being
between 1.6 and 2.0 billion years old. From the mid-1920s through the early
1940s, a group of physicists, geophysicists, and geochemists succeeded in
devising techniques that continued pushing back the age of the Earth. That
group included Holmes, Alfred Nier, E. K. Gerling, Friedrich Georg Houtermans, and
Clair C. Patterson, who ultimately produced accurate “primeval” lead isotopic
measurements from minerals collected from five meteorite fragments at Canyon
Diablo, Arizona.[1] By
1956, Patterson’s research had determined the age of the Earth at almost 4.6
billion years.
Since
Patterson’s and Houtermans’ pioneering research in the mid-1950s, additional
data have been accumulated, instruments have become more precise, and
analytical techniques have improved. Moon rocks and many more meteorites have
been sampled and dated. Decay constants have been measured with ever-increasing
accuracy. Remarkably, certain technical adjustments to and corrections of
Patterson’s 1956 computation have canceled each other out. Today’s best
estimate of the age of meteorites (4.55 ± 0.02 billion years) is identical to
Patterson’s except for a smaller error range. That value has been confirmed by
dozens of scientists working independently.
Today
it is a nearly universally accepted scientific principle that radioactive decay
occurs at a constant rate that is specific to each radioactive isotope. Since
the 1950s, geologists and geophysicists have used radioactive elements as
natural “clocks” for determining ages of certain types of rocks. Radiometric
clocks are set when each rock forms. “Forms” means the moment an igneous rock
solidifies from magma, a sedimentary rock layer is deposited, or a rock heated
by metamorphism cools. That setting and resetting process allows geoscientists
to date rocks that formed at different times and under different circumstances.
Another commonly used radiometric dating technique is based on the decay of
potassium (K-40) to argon (Ar-40). In igneous rocks, the potassium-argon clock
starts the moment the rock crystallizes from magma. Precise measurement of the ratio of Ar-40 to K-40 in an igneous rock determines the time that has passed since crystallization (the half-life of K-40 is about 1.3 billion years). If an igneous or other rock is metamorphosed, its
radiometric clock is reset. Potassium-argon measurements are then used to
determine the number of years that have passed since metamorphism. See isotopic
dating.
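As an illustration of how such a measurement becomes a date, here is a minimal sketch (the decay constants are the conventional textbook values of Steiger and Jäger, 1977, and are not taken from this entry). Because K-40 decays to both Ar-40 and Ca-40, the age equation carries a branching factor:

import math

# Conventional K-40 decay constants (Steiger and Jaeger, 1977). K-40 decays
# both to Ar-40 (electron capture) and to Ca-40 (beta decay), so only a
# fraction of the decays feed the argon clock.
LAMBDA_TOTAL = 5.543e-10   # total decay constant, per year (~1.25-byr half-life)
LAMBDA_AR    = 0.581e-10   # partial decay constant for the Ar-40 branch

def k_ar_age(ar40_atoms, k40_atoms):
    """Years since crystallization (or since the last metamorphic reset),
    assuming all argon escaped while the rock was hot and the system
    stayed closed afterward."""
    ratio = ar40_atoms / k40_atoms
    return math.log(1 + (LAMBDA_TOTAL / LAMBDA_AR) * ratio) / LAMBDA_TOTAL

# Example: a measured Ar-40/K-40 ratio of 0.05 implies roughly 0.70 billion years.
print(f"{k_ar_age(0.05, 1.0) / 1e9:.2f} billion years")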
Author’s
Note: It is critical for students to
realize that no scientific method is free from ambiguity. In addition, most
scientific techniques in and of themselves are subject to considerable latitude
in terms of the interpretation of results. Consequently, all physical-chemical
methods of dating rocks have uncertainties associated with them. Several basic
assumptions are made when geoscientists determine the age of rock samples. The
most significant assumption is that the sample is from a closed system in which
no parent or daughter isotopes were gained or lost over time. Another
assumption involves the amount of daughter isotope present at the time the
sample rock was formed. For rare isotopes, that amount is generally assumed to
be zero. Because of those and other uncertainties, the strongest evidence for
the age of a rock is obtained when two different radiochemical dating methods
produce similar results.
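To make the second assumption concrete, the short sketch below (illustrative numbers only, not from any real sample) shows how an unrecognized initial daughter component inflates a computed age when it is wrongly counted as radiogenic:

import math

def model_age(parent, radiogenic_daughter, half_life_years):
    """Closed-system age from the decay law: t = ln(1 + D/P) / lambda."""
    lam = math.log(2) / half_life_years
    return math.log(1 + radiogenic_daughter / parent) / lam

HALF_LIFE_U238 = 4.468e9     # years

measured_daughter = 0.10     # total daughter atoms per parent atom (illustrative)
inherited_daughter = 0.02    # daughter already present at crystallization

naive = model_age(1.0, measured_daughter, HALF_LIFE_U238)  # assumes zero initial daughter
corrected = model_age(1.0, measured_daughter - inherited_daughter, HALF_LIFE_U238)
print(f"naive: {naive / 1e9:.2f} byr   corrected: {corrected / 1e9:.2f} byr")
# naive: 0.61 byr   corrected: 0.50 byr -- the untested assumption adds about 24%

Cross-checking against a second, independent decay system, as recommended above, is precisely the kind of test that exposes this bias.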
Since we live in a real world where our convenient, highly intellectualized categories and classifications are seldom found in nature, geoscientists likely apply one or both of those assumptions, unknowingly (or knowingly, according to many creationist critics), when rock samples are dated by radiometric decay techniques. But despite what Creationists like to assert, the failure of a specific dating technique to produce a reliable or verifiable date for one rock sample is no reason to reject all radiometric dating techniques. After all, when your car fails to start one winter morning, surely you don’t conclude that all cars are useless pieces of junk.
Interested students may wish to consult one of the standard works on the topic. To anyone with a high school background in science, I highly recommend G. Brent Dalrymple’s classic, The Age of the Earth (Stanford, California: Stanford University Press, 1991). It is well-written, well-reasoned, and powerful in its explanations. Radiometric dating has been widely attacked by Christian fundamentalists, or Creationists, as unreliable, riddled with inaccuracies, and unscientific. A variety of their views may be found on the internet under the entry “radiometric dating.” Curious students owe it to their intellectual development to examine that alternate universe. However, for the point of view of a Christian geophysicist, see Roger C. Wiens, PhD, Radiometric Dating: A Christian Perspective, written in 1994 and revised in 2002: http://www.asa3.org/ASA/resources/Wiens.html.