These First Americans Vanished Without a Trace — But Hints of Them Linger

There are no surviving members of an ancient and mysterious group of people who lived in North America for millennia. Until now, scientists thought they had vanished without a trace.

But new research shows that this paleo group’s genes live on today in several indigenous cultures.

The finding is surprising, as other studies had found that the people — one of the first groups of humans to arrive in North America — made little genetic contribution to later North American people. [10 Things We Learned About the First Americans in 2018]

Using cutting-edge techniques, however, the new research shows that’s not the case. “They have never really gone extinct in that way,” study senior author Stephan Schiffels, group leader of population genetics at the Max Planck Institute for the Science of Human History in Germany, told Live Science. “They have actually contributed to living people.” 

The first wave of migrants arrived in North America at least 14,500 years ago, likely by crossing the Bering Strait land bridge during the last ice age. But as that ice age ended and glaciers melted, sea levels rose, flooding the land bridge. After that, archaeological evidence suggests that the next major wave of people arrived about 5,000 years ago, likely by boat, Schiffels said. This is the group of people studied in the new research.

People continued arriving in the Americas after that. About 800 years ago, the ancestors of the modern-day Inuit and Yup’ik showed up, and within 100 years, the paleo group from 5,000 years ago had vanished, according to archaeological evidence.

So, what happened to this paleo group? To learn more, Schiffels and his colleagues, including study first author Pavel Flegontov, a faculty member in the Department of Biology and Ecology at the University of Ostrava in the Czech Republic, dug deep into the genetics of this enigmatic people.

The excavation of three ancient Athabaskan people. Researchers studied the DNA of these ancient people in the new study. Credit: Tanana Chiefs Conference

The team received permission from modern indigenous groups to take very small bone samples from the remains of 48 ancient individuals found in the American Arctic and in Siberia. The scientists then ground these bone samples into powder so they could extract and study DNA.

Then, the researchers analyzed the genomes of 93 modern individuals of indigenous heritage from Siberia, Alaska, the Aleutian Islands and Canada. For good measure, the researchers looked at previously published genomes from these regions too.

Using a novel method that looks for rare genetic mutations the paleo group passed down, as well as other family-tree-modeling methods, the researchers found that the paleo group left a hefty genetic footprint: Their genes are found in modern speakers of the Eskimo-Aleut and Na-Dene languages, including Athabaskan and Tlingit communities from Alaska, northern Canada, and the U.S. West Coast and Southwest.

The scientists generated so much data that they could build a comprehensive model explaining ancient gene exchange between Siberia and the Americas. This model shows that Na-Dene-speaking peoples, people of the Aleutian Islands, and Yup’ik and Inuit in the Arctic all share ancestry from a single population in Siberia related to the paleo group, the researchers said.

“It is the first study to comprehensively describe all of these populations in one single, coherent model,” Schiffels said in a statement.

A facial reconstruction of a woman from the Uelen burial site in Chukotka, Siberia. The woman, who lived about 1,500 years ago, is an ancestor to present-day Inuit and Yup’ik. Credit: Elizaveta Veselovskaya

According to the model, after the paleo group arrived in Alaska between 5,000 and 4,000 years ago, they mixed with people who had a similar ancestry to more-southern Native American peoples. The descendants of these couplings became the ancestors of the Aleutian Islanders and Athabaskans. [25 Grisly Archaeological Discoveries]

Moreover, the ancestors of the Inuit and Yup’ik people didn’t just venture from Siberia to North America once; they went back and forth like pingpong balls, crossing the Bering Strait at least three times, the researchers found. First, these ancient people crossed as that original paleo group to Alaska; then, they returned to Chukotka, Siberia; third, they traveled to Alaska again, as bearers of the Thule culture, the predecessor to the modern Inuit and Yup’ik cultures of Alaska, the Arctic, and High Arctic. During their stay in Chukotka — a long stint that lasted more than 1,000 years — the ancestors of the Inuit and Yup’ik mixed with local groups there. The genes from these offspring remain in modern-day people living in Chukotka and Kamchatka, Siberia.

“There’s a reason why this was hard [to do] before,” Schiffels told Live Science. “These populations are very closely related with each other, and it’s very hard to disentangle the different ancestry components.”

The study was published online yesterday (June 5) in the journal Nature. In another Nature study published online yesterday, researchers found human teeth dating to 31,000 years ago, remains that are now the oldest direct evidence of humans in Siberia.

Where are the aliens?

One night about 60 years ago, physicist Enrico Fermi looked up into the sky and asked, “Where is everybody?” 

He was talking about aliens.

Today, scientists know that there are millions, perhaps billions of planets in the universe that could sustain life. So, in the long history of everything, why hasn’t any of this life made it far enough into space to shake hands (or claws … or tentacles) with humans? It could be that the universe is just too big to traverse. 

It could be that the aliens are deliberately ignoring us. It could even be that every growing civilization is irrevocably doomed to destroy itself (something to look forward to, fellow Earthlings).

Or, it could be something much, much weirder. Like what, you ask? Here are nine strange answers that scientists have proposed for Fermi’s paradox.

Mysterious ‘Bridge’ of Radio Waves Between Galaxies Seems to Be Smashing the Laws of Physics (But It’s Not)

On the big roadmap of the universe, bustling clusters of galaxies are connected by long highways of plasma weaving around the wilderness of empty space. These interspace roadways are known as filaments, and they can stretch for hundreds of millions of light-years, populated only by dust, gas and busy electrons driving very close to the universal speed limit.

Even when moving at near-lightspeed, particles should only be able to make it a fraction of the way down one of these filaments before running out of juice and breaking down. However, a team of astronomers patrolling a filament between two slowly colliding galaxy clusters has discovered a stream of electrons that isn’t abiding by these traffic rules. In the gassy filament between the galaxy clusters Abell 0399 and Abell 0401, the researchers have detected a vast bridge of radio-wave emissions, created by charged particles whizzing down a 10-million-light-year-long road for far longer than should be physically possible.

The source of this cosmic traffic violation, according to a new study published June 7 in the journal Science, may be a faint but turbulent magnetic field stretching from one galaxy cluster to the next, providing a mysterious particle accelerator that’s kicking electrons 10 times farther than they are ordinarily able to travel. [The 12 Strangest Objects in the Universe]

According to lead study author Federica Govoni, a researcher at the Italian National Institute for Astrophysics, this is the first time a magnetic field has been observed coursing through a galactic filament, and the finding could call for some rethinking about how particles are accelerated over incredibly long distances.

“It is a very faint magnetic field, about 1 million times [weaker] than the Earth’s,” Govoni said in a video accompanying the study. However, she and her colleagues wrote in the paper, that may still be strong enough to generate shock waves capable of re-accelerating fast-moving particles across incredible lengths as they slow down — effectively creating an electron superhighway.

A bridge between giants

Located about 1 billion light-years from Earth, Abell 0399 and Abell 0401 are neighboring galaxy clusters — groups of hundreds or thousands of galaxies all gravitationally bundled together, representing some of the most massive objects in the universe. In a few billion years, the two large clusters will probably collide; for now, they’re about 10 million light-years apart and linked by the aforementioned highway of plasma.

In a previous study, Govoni and her colleagues discovered that the two clusters were each creating a magnetic field bristling with radio waves. In their new work, the researchers wanted to find out whether that field was extending into space beyond the bounds of the two massive objects — and, in particular, whether it could be riding down the vast plasma filament between them.

If galaxy clusters are the cities of the universe, filaments are the long, dusty highways connecting them. This map shows all the known galaxy clusters and filaments within 500 million light-years of Earth (Abell 0399 and 0401 are not among them). Credit: Richard Powell/ CC BY-SA 2.5

Using a network of telescopes called the Low-Frequency Array (LOFAR), the researchers saw a long “ridge” of radio emissions clearly connecting one cluster to the next.

“This emission requires a population of relativistic [near light-speed] electrons and a magnetic field located in a filament between the two galaxy clusters,” the authors wrote in the study. Because there were no other obvious radio sources between the clusters, the team concluded that the ridge was most likely an extension of the magnetic fields and high-speed particle interactions occurring inside the clusters.

After running some computer simulations, the team found that even a relatively weak magnetic field (like this one) could create shockwaves strong enough to re-accelerate high-speed electrons that have slowed down and keep them whizzing down the length of the filament. However, that is only one possible explanation for a phenomenon that is, according to the researchers, still a pretty big mystery. Luckily, scientists still have a few billion years to solve it.

Why This Image of a Woodpecker Is Creeping People Out

When a seemingly innocuous image of a woodpecker stashing away its acorn supply made the internet rounds, Twitter users expressed revulsion. They weren’t reacting to the bird or the actual acorns, but to the set of holes in which the bird was storing its treasure. Clustered in an irregular pattern, the holes were triggering a condition called trypophobia.

To someone with this phobia, an otherwise benign — and even downright gorgeous — image can spark fear and disgust. These individuals aren’t just afraid of any hole they see. Trypophobia is characterized by an aversion to clustered patterns of irregular holes or bumps. The term seems to have been coined by someone in an online forum in 2005, though scientists say the condition has likely been around for much longer.

“We know that this condition pre-existed the internet — although the internet may have exacerbated it,” Arnold Wilkins, a psychologist at the University of Essex, told Live Science. 

The phobia isn’t an official disorder, meaning it’s not listed in the “Diagnostic and Statistical Manual of Mental Disorders,” but up to 10% of people report experiencing symptoms, including anxiety, nausea and a “skin-crawling” sensation, after viewing certain images, Wilkins said. “It can be quite debilitating,” he added.

So why is this phobia so common? Scientists are still trying to answer this question, but many believe the aversion is evolutionarily adaptive.

“You avoid things that are likely to harm you,” Wilkins explained.

In the first-ever scientific documentation of trypophobia, published in Psychological Science, Wilkins compared trypophobia-triggering images with pictures of poisonous animals, like the blue-ringed octopus. He and his co-authors found a similar distribution of spots, bumps or holes, as well as a similar level of contrast, in the images. The researchers concluded that the phobia could stem from an evolutionarily adaptive aversion to poisonous creatures.

However, in a study published in 2018 in the journal Cognition and Emotion, scientists argued that the phobia evolved in response to disease. After all, the clusters of holes look like the lesions, bumps and pustules caused by ancient infectious diseases such as smallpox. That disease alone killed up to 10% of the population in the last millennium — an aversion to infected skin could have given individuals with trypophobia an evolutionary advantage by helping them avoid this deadly illness and others.

Plus, the authors of that study argue, the most common response to a picture of an acorn-dotted tree is not fear, but disgust, which psychologists have called “the disease avoidance emotion.” While poisonous predators and disease are both threatening, they trigger two very different reactions. A snake causes fear by activating a person’s sympathetic nervous system — the system that sends the body into fight-or-flight mode. Disease and rotting food cause disgust by activating our parasympathetic nervous system, which causes the body to relax in order to conserve energy.

Research published in 2018 in the journal PeerJ found that participants’ pupils dilated in response to pictures of snakes, but they constricted in response to pictures of holes — a sign of parasympathetic nervous system activation.

Wilkins is uncertain about the disease-avoidance model — he thinks it’s likely a part of the puzzle, if not the whole picture. But it could be a while before scientists agree on why exactly people react so strongly to a photo of a harmless woodpecker. Until then, Wilkins said, “the jury’s out.”

These Deep-Sea Weirdos Hold Their Breath for Minutes at a Time

No wonder this fish looks like a grumpy, inflated balloon — it’s been holding onto a mouthful of water for ages.

This odd little creature is known as the coffinfish (Chaunax endeavouri), and it lives in the deepest parts of the Pacific Ocean. Researchers observed this “breath-holding” behavior for the first time while combing through publicly available videos captured by the National Oceanic and Atmospheric Administration’s (NOAA) remotely operated vehicles, Science reported.

The scientists found footage of eight different individual coffinfish holding in the water they had taken in. [In Photos: Spooky Deep-Sea Creatures]

To get the oxygen they need to survive, fish gulp down water, extract the dissolved oxygen and then “exhale” the oxygen-depleted water by releasing it from their gills, Science reported. But these fish held onto that water in their large gill chambers for quite a long time, from 26 seconds up to 4 minutes, rather than releasing it immediately.

The scientists also took computed tomography (CT) scans of museum specimens of coffinfish to examine the massive gill chambers the animals use to hold water.

As to why the fish do this, the researchers have some guesses. They said breath-holding may help the fish conserve energy. It could even protect them by making them look bigger to predators, similar to what pufferfish accomplish by pushing out their stomachs. When a coffinfish holds in water, its body volume increases by 30%, according to the study.

The researchers reported their findings May 10 in the Journal of Fish Biology.

Can we detect dark matter?

If dark matter is made from WIMPs, they should be all around us, invisible and barely detectable. So why haven’t we found any yet? While they wouldn’t interact with ordinary matter very much, there is always some slight chance that a dark-matter particle could hit a normal particle like a proton or electron as it travels through space. So, researchers have built experiment after experiment to study huge numbers of ordinary particles deep underground, where they are shielded from interfering radiation that could mimic a dark-matter-particle collision. The problem? After decades of searching, not one of these detectors has made a credible discovery. Earlier this year, the Chinese PandaX experiment reported the latest WIMP nondetection. It seems likely that dark-matter particles are much lighter than WIMPs, or lack the properties that would make them easy to study, physicist Hai-Bo Yu of the University of California, Riverside, told Live Science at the time.

Dark Matter Web

In the 1930s, a Swiss astronomer named Fritz Zwicky noticed that galaxies in a distant cluster were orbiting one another much faster than they should have been, given the amount of visible mass they had. He proposed that an unseen substance, which he called dark matter, might be tugging gravitationally on these galaxies.

Since then, researchers have confirmed that this mysterious material can be found throughout the cosmos, and that it is six times more abundant than the normal matter that makes up ordinary things like stars and people. Yet despite seeing dark matter throughout the universe, scientists are mostly still scratching their heads over it. Here are the 11 biggest unanswered questions about dark matter.

It Could Be Thousands of Years Before Physicists Devise a Theory of Everything

In 1925, Einstein went on a walk with a young student named Esther Salaman. As they wandered, he shared his core guiding intellectual principle: “I want to know how God created this world. I’m not interested in this or that phenomenon, in the spectrum of this or that element. I want to know His thoughts; the rest are just details.”

The phrase “God’s thoughts” is a delightfully apt metaphor for the ultimate goal of modern physics, which is to develop a perfect understanding of the laws of nature — what physicists call “a theory of everything,” or TOE. Ideally, a TOE would answer all questions, leaving nothing unanswered. Why is the sky blue? Covered. Why does gravity exist? That’s covered, too. Stated in a more scientific way, a TOE would ideally explain all phenomena with a single theory, a single building block and a single force. In my opinion, finding a TOE could take hundreds, or even thousands, of years. To understand why, let’s take stock. [The 18 Biggest Unsolved Mysteries in Physics]

We know of two theories that, when taken together, give a good description of the world around us, but both are light-years from being a TOE. The first is Einstein’s theory of general relativity, which describes gravity and the behavior of stars, galaxies and the universe on the largest scales. Einstein described gravity as the literal bending of space and time. This idea has been validated many times, most notably with the first direct detection of gravitational waves, announced in 2016.

The second theory is called the Standard Model, which describes the subatomic world. It is in this domain that scientists have made the most obvious progress toward a theory of everything.

If we look at the world around us — the world of stars and galaxies, poodles and pizza — we can ask why things have the properties they do. We know everything is made up of atoms, and those atoms are made up of protons, neutrons and electrons.

And, in the 1960s, researchers discovered that the protons and neutrons were made of even smaller particles called quarks and the electron was a member of the class of particles called leptons.

Finding the smallest building blocks is only the first step in devising a theory of everything. The next step is understanding the forces that govern how the building blocks interact. Scientists know of four fundamental forces, three of which — electromagnetism, and the strong and weak nuclear forces — are understood at the subatomic level. Electromagnetism holds atoms together and is responsible for chemistry. The strong force holds together the nucleus of atoms and keeps quarks inside protons and neutrons. The weak force is responsible for some types of nuclear decay.

Each of the known subatomic forces has an associated particle or particles that carry that force: The gluon carries the strong force, the photon governs electromagnetism, and the W and Z bosons control the weak force. There is also a ghostly energy field, called the Higgs field, that permeates the universe and gives mass to quarks, leptons and some of the force-carrying particles. Taken together, these building blocks and forces make up the Standard Model. [Strange Quarks and Muons, Oh My! Nature’s Tiniest Particles Dissected]

A theory of everything will explain all known phenomena. We aren’t there yet, but we have unified the behavior of the quantum world in the standard model (yellow) and we understand gravity (pink). In the future, we imagine a series of additional unifications (green). However, the problem is that there are phenomena we don’t understand (blue) that need to fit in somewhere. And we are not certain that we won’t find other phenomena as we go to higher energy (red circles). Credit: Don Lincoln

Using quarks and leptons and the known force-carrying particles, one can build atoms, molecules, people, planets and, indeed, all of the known matter of the universe. This is undoubtedly a tremendous achievement and a good approximation of a theory of everything.

And yet it really isn’t. The goal is to find a single building block and a single force that could explain the matter and motion of the universe. The Standard Model has 12 particles (six quarks and six leptons) and four forces (electromagnetism, gravity, and the strong and weak nuclear forces). Furthermore, there is no known quantum theory of gravity (our current theory describes gravity only for objects far larger than subatomic particles, roughly dust-grain size and up), so gravity isn’t even part of the Standard Model at all. So, physicists continue to look for an even more fundamental and underlying theory. To do that, they need to reduce the number of both building blocks and forces.

Finding a smaller building block will be difficult, because doing so requires a more powerful particle accelerator than humans have ever built. The time horizon for a new accelerator facility coming online is several decades, and that facility will provide only a relatively modest incremental improvement over existing capabilities. So, scientists must instead speculate on what a smaller building block might look like. A popular idea is called superstring theory, which postulates that the smallest building block isn’t a particle, but rather a small and vibrating “string.” In the same way a cello string can play more than one note, the different patterns of vibration are the different quarks and leptons. In this way, a single type of string could be the ultimate building block. [Top 5 Reasons We May Live in a Multiverse]

The problem is that there is no empirical evidence that superstrings actually exist. Further, the energy expected to be required to see them is the Planck energy, which is a quadrillion (10 raised to the 15th power) times higher than we can currently generate. The very large Planck energy is intimately connected to what’s known as the Planck length, an unfathomably tiny length beyond which quantum effects become so large that it is literally impossible to measure anything smaller. Go smaller than the Planck length (or higher than the Planck energy), and the quantum effects of gravity can no longer be ignored and general relativity stops working. That makes it likely that this is the scale at which quantum gravity will be understood. This is, of course, all very speculative, but it reflects our current best prediction. And, if true, superstrings will have to remain speculative for the foreseeable future.
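The quoted "quadrillion" gap can be checked with a line of arithmetic. The figures below are standard reference values (a Planck energy of about 1.22 × 10^19 GeV, and LHC collisions at about 13 TeV), not numbers taken from the article:

```python
# Rough scale comparison between the Planck energy and the most powerful
# accelerator we have. Both values are standard textbook figures.
PLANCK_ENERGY_GEV = 1.22e19  # Planck energy, ~1.22 x 10^19 GeV
LHC_ENERGY_GEV = 1.3e4       # LHC proton-proton collisions, ~13 TeV

ratio = PLANCK_ENERGY_GEV / LHC_ENERGY_GEV
print(f"The Planck energy is ~{ratio:.1e} times the LHC's collision energy")
# On the order of 10^15 -- the "quadrillion" quoted in the text.
```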

The plethora of forces is also a problem. Scientists hope to “unify” the forces, showing that they are just different manifestations of a single force. (Sir Isaac Newton did just that when he showed the force that made things fall on Earth and the force that governed the motion of the heavens were one and the same; James Clerk Maxwell showed that electricity and magnetism were really different behaviors of a unified force called electromagnetism.)

In the 1960s, scientists were able to show that the weak nuclear force and electromagnetism were actually two different facets of a combined force called the electroweak force. Now, researchers hope that the electroweak force and the strong force can be unified into what is called a grand unified force. Then, they hope that the grand unified force can be unified with gravity to make a theory of everything.

Historically, scientists have shown how seemingly unrelated phenomena originate from a single underlying force. We imagine that this process will continue, resulting in a theory of everything. Credit: Don Lincoln

However, physicists suspect this final unification would also take place at the Planck energy, again because this is the energy and size at which quantum effects can no longer be ignored in relativity theory. And, as we’ve seen, this is a much higher energy than we can hope to achieve inside a particle accelerator any time soon. To give a sense of the chasm between current theories and a theory of everything, if we represented the energies of particles we can detect as the width of a cell membrane, the Planck energy is the size of Earth. While it is conceivable that someone with a thorough understanding of cell membranes might predict other structures within a cell — things like DNA and mitochondria — it is inconceivable that they could accurately predict the Earth. How likely is it that they could predict volcanoes, oceans or Earth’s magnetic field?
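The membrane-to-Earth comparison holds up numerically. Using a typical cell-membrane thickness of about 10 nanometers and Earth's diameter of roughly 12,700 kilometers (both common reference values, not figures from the article):

```python
# Ratio of Earth's diameter to a typical cell-membrane thickness.
MEMBRANE_THICKNESS_M = 1e-8  # ~10 nanometers
EARTH_DIAMETER_M = 1.27e7    # ~12,700 kilometers

scale_factor = EARTH_DIAMETER_M / MEMBRANE_THICKNESS_M
print(f"Earth is ~{scale_factor:.0e} membrane-widths across")
# ~10^15: the same quadrillion-fold gap as accelerator energy vs. Planck energy.
```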

The simple fact is that with such a large gap between currently achievable energy in particle accelerators and the Planck energy, correctly devising a theory of everything seems improbable.

That doesn’t mean physicists should all retire and take up landscape painting — there is still meaningful work to be done. We still need to understand unexplained phenomena such as dark matter and dark energy, which together make up about 95% of the universe, and use that understanding to create a newer, more comprehensive theory of physics. This newer theory will not be a TOE, but will be incrementally better than the current theoretical framework. We will have to repeat that process over and over again.

Disappointed? So am I. After all, I’ve devoted my life to trying to uncover some of the secrets of the cosmos, but perhaps some perspective is in order. The first unification of forces was accomplished in the 1670s with Newton’s theory of universal gravity. The second was in the 1870s with Maxwell’s theory of electromagnetism. The electroweak unification was relatively recent, only half a century ago.

Given that 350 years have elapsed since our first big successful step in this journey, perhaps it’s less surprising that the path ahead of us is longer still. The notion that a genius will have an insight that results in a fully developed theory of everything in the next few years is a myth. We’re in for a long slog — and even the grandchildren of today’s scientists won’t see the end of it.

More Than 70 Gray Whales Dead in 6 Months, and Scientists Don’t Understand Why

Since January, more than 70 dead gray whales have washed up on the coasts of California, Oregon, Washington, Alaska and Canada. That’s the most in a single year since 2000, and scientists are concerned.

Last week, the National Oceanic and Atmospheric Administration (NOAA) Fisheries designated these strandings as part of an Unusual Mortality Event (UME). Under the U.S. Marine Mammal Protection Act, the designation of a UME means that more resources and scientific expertise will be dedicated to investigating what’s causing so many whales to die.

Seeing numerous gray whales (Eschrichtius robustus) swimming along the west coast this time of year is expected. From about March to June, these large marine mammals swim north from the coast of Baja California, Mexico, to the cool, food-rich waters of the Bering and Chukchi seas, north of Alaska. They’ll start their return trip south in November. [Whale Photos: Giants of the Deep]

So far this year, 73 dead whales have been spotted on West Coast beaches: 37 in California; three in Oregon; 25 in Washington; three in Alaska; and five in British Columbia, Canada. Most of them were skinny and malnourished, which suggests they probably didn’t get enough to eat during their last feeding season in the Arctic, said Michael Milstein, a NOAA Fisheries public affairs officer.

Gray whales have been stranding on the West Coast at an alarming rate, and scientists don’t really know why. Credit: John Weldon/Northern Oregon Southern Washington Marine Mammal Stranding Program

The condition of the dead whales also suggests there are many that scientists aren’t counting because emaciated whales tend to sink, said John Calambokidis, a research biologist with Cascadia Research Collective. “So, the numbers that actually wash up do represent a fraction of the true number,” he said. “Some estimates suggest it’s as few as 10%.”
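The article's figures can be tallied directly. The sketch below sums the per-region counts reported above and works out the rough total implied by Calambokidis' "as few as 10%" estimate; the implied figure is illustrative arithmetic, not a number reported by NOAA:

```python
# Tally of the reported strandings, and the rough total implied by the
# estimate that as few as 10% of dead whales wash ashore.
strandings = {
    "California": 37,
    "Oregon": 3,
    "Washington": 25,
    "Alaska": 3,
    "British Columbia": 5,
}

total_found = sum(strandings.values())
implied_total = total_found / 0.10  # if beached whales are ~10% of deaths

print(f"Reported strandings: {total_found}")          # 73
print(f"Implied total deaths: ~{implied_total:.0f}")  # ~730
```

Under that assumption, the 73 beached whales would correspond to roughly 730 deaths in total.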

These gentle giants were once severely threatened by whalers. There were only around 2,000 of them left in 1946, when an international agreement to stop gray whale hunting was initiated to help the population recover, according to The Marine Mammal Center, a nonprofit organization that rescues and rehabilitates marine mammals in California. Gray whales were removed from the U.S. Endangered Species List in 1994, when the population was estimated to be about 20,000.

A previous UME from 1999 to 2000 knocked this population, called the Eastern North Pacific population, down to about 16,000 individuals, but the whales have since recovered. In 2016, scientists estimated there were about 27,000.

“We know from past data that this population is capable of rebounding from a loss on the order of at least 6,000, perhaps,” said David Weller, a research wildlife biologist with the NOAA Southwest Fisheries Science Center. But it’s still unclear what’s causing so many whales to die. So for now, the priority is to learn as much as possible from the stranded animals, Weller said. “We’ve got our finger on the pulse and I would say that we want to continue to monitor it closely.”

Earth’s Oldest Meteorite Collection Just Found in the Driest Place on the Planet

Meteorites crash into Earth pretty much constantly, and you can find their ancient remains everywhere from King Tut’s tomb to some guy’s farm in Edmore, Michigan. But to best understand where these space rocks came from and how long they’ve been living as earthly expats, it helps to visit the densest collection of meteorites on the planet — and that’s in Chile’s Atacama Desert.

What’s so special about the Atacama? For starters, it’s old (more than 15 million years old), which means the meteorites that have crash-landed on its 50,000-square-mile (130,000 square kilometers) surface can be really old, too. That gives it an advantage over other meteorite-rich regions, including Antarctica, which hold vast supplies of meteorites but are generally too young to house any space rocks older than about half a million years, according to Alexis Drouard, a researcher at Aix-Marseille Université in France and lead author of a new study in the journal Geology. [In Images: Stunning Flower Fields of the Atacama Desert]

Drouard and his colleagues recently went on a meteorite-hunting trip to the Atacama Desert in hopes of finding an array of rocks that spanned millions of years. “Our purpose in this work was to see how the meteorite flux to Earth changed over large timescales,” Drouard said in a statement. In other words, could the space rocks of Atacama reveal when Earth was bombarded by meteorites more or less frequently?

For the new study (published May 22), the researchers collected nearly 400 meteorites and closely studied 54 of them, analyzing both the ages and chemical compositions of the alien stones. Consistent with the desert’s advanced age, about 30% of the meteorites were more than 1 million years old, while two of them had been gathering dust for more than 2 million years. According to Drouard, this represents the oldest meteorite collection on Earth’s surface.

And as for the meteorite flux? The team extrapolated the results of their small sample to determine that impact activity has remained relatively constant over the past 2 million years, amounting to about 222 meteorite falls per square kilometer of desert every 1 million years.
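To get a feel for the scale of that flux figure, a back-of-the-envelope calculation using the numbers quoted in the article (the study's actual extrapolation method is not described here):

```python
# Back-of-the-envelope use of the reported meteorite flux:
# ~222 falls per square kilometer per million years, assumed constant,
# applied to the desert's full area over the 2-million-year window.
flux_per_km2_per_myr = 222   # meteorite falls / km^2 / Myr (from the study)
area_km2 = 130_000           # Atacama surface area, from the article
window_myr = 2               # time span over which the flux was constant

expected_falls = flux_per_km2_per_myr * area_km2 * window_myr
print(f"{expected_falls:,}")  # → 57,720,000
```

Most of those falls are tiny and long since weathered away, which is why a field expedition recovers hundreds of meteorites rather than millions.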

Surprisingly, the composition of the meteorites changed more drastically. According to the researchers, the meteorites that bombarded Atacama between 1 million and half a million years ago were significantly more iron-rich than the rocks that fell before or after. It’s possible they all came from a single swarm of stones that got knocked loose from the asteroid belt between Mars and Jupiter, the team wrote.

There Are Still 10 Chernobyl-Style Reactors Operating Across Russia. How Do We Know They’re Safe?

In the new HBO miniseries “Chernobyl,” Soviet scientists uncover the reason for an explosion in Reactor 4 at the Chernobyl Nuclear Power Plant, which spewed radioactive material across northern Europe.

That reactor, a design called the RBMK-1000, was discovered to be fundamentally flawed after the Chernobyl accident. And yet there are still 10 of the same type of reactor in operation in Russia. How do we know if they’re safe?

The short answer is, we don’t. These reactors have been modified to lessen the risk of another Chernobyl-style disaster, experts say, but they still aren’t as safe as most Western-style reactors. And there are no international safeguards that would prevent the construction of new plants with similar flaws. [Images: Chernobyl, Frozen in Time]

“There are a whole number of different types of reactors that are being considered now in various countries that are significantly different from the standard light-water reactor, and many of them have safety flaws that the designers are downplaying,” said Edwin Lyman, a senior scientist and the acting director of the Nuclear Safety Project at the Union of Concerned Scientists.

“The more things change,” Lyman told Live Science, “the more they stay the same.”

Reactor 4

At the center of the Chernobyl disaster was the RBMK-1000 reactor, a design used only in the Soviet Union. The reactor was different from most light-water nuclear reactors, the standard design used in most Western nations.

Light-water reactors consist of a large pressure vessel containing nuclear material (the core), which is cooled by a circulating supply of water. In nuclear fission, an atom (uranium, in this case), splits, creating heat and free neutrons, which zing into other atoms, causing them to split and release heat and more neutrons. The heat turns the circulating water to steam, which then turns a turbine, generating electricity.

In light-water reactors, the water also acts as a moderator to help control the ongoing nuclear fission within the core. A moderator slows down free neutrons so that they’re more likely to continue the fission reaction, making the reaction more efficient. When the reactor heats up, more water turns to steam, and less is available to play this moderator role. As a result, the fission reaction slows. That negative feedback loop is a key safety feature that helps keep the reactors from overheating.

The RBMK-1000 is different. It also used water as a coolant, but with graphite blocks as the moderator. The variations in the reactor design allowed it to use less-enriched fuel than usual and to be refueled while running. But with the coolant and moderator roles separated, the negative feedback loop of “more steam, less reactivity,” was broken. Instead, RBMK reactors have what’s called a “positive void coefficient.”

When a reactor has a positive void coefficient, the fission reaction speeds up as the coolant water turns to steam, rather than slowing down. That’s because boiling opens up bubbles, or voids, in the water, making it easier for neutrons to travel right to the fission-enhancing graphite moderator, said Lars-Erik De Geer, a nuclear physicist who is retired from the Swedish Defence Research Agency.

From there, he told Live Science, the problem builds: The fission becomes more efficient, the reactor gets hotter, the water gets steamier, the fission becomes more efficient still, and the process continues.
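The difference between the two designs comes down to the sign of one feedback term. A toy model (purely illustrative, not a reactor simulation; the coefficients are made up) shows how the sign of the void coefficient determines whether a power perturbation damps out or runs away:

```python
# Toy feedback model for the void coefficient described above.
# Power produces steam voids; the void coefficient couples the void
# fraction back into power. A negative coefficient (light-water design)
# damps the reaction; a positive one (RBMK-style) amplifies it.
def run(void_coefficient, steps=10):
    power = 1.0  # normalized reactor power
    for _ in range(steps):
        void_fraction = 0.1 * power                    # hotter -> more steam voids
        power *= 1.0 + void_coefficient * void_fraction
    return power

damped = run(void_coefficient=-0.5)    # negative feedback: power falls back
runaway = run(void_coefficient=+0.5)   # positive feedback: power keeps growing
print(damped < 1.0 < runaway)  # → True
```

In the real RBMK, this runaway was normally masked at full power by the fuel itself absorbing more neutrons when hot, which is why the instability only became dangerous at low power, as described below.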

Run-up to disaster

When the Chernobyl plant was running at full power, this wasn’t a big problem, Lyman said. At high temperatures, the uranium fuel that powers the fission reaction tends to absorb more neutrons, making it less reactive.

At low power, though, RBMK-1000 reactors become very unstable. In the run-up to the Chernobyl accident on April 26, 1986, operators were doing a test to see if the plant’s turbine could run emergency equipment during a power outage. This test required running the plant at reduced power. While the power was lowered, the operators were ordered by Kiev’s power authorities to pause the process. A conventional plant had gone offline, and Chernobyl’s power generation was needed.

“That was very much the main reason why it all happened in the end,” De Geer said.

The plant ran at partial power for 9 hours. When the operators got the go-ahead to power most of the rest of the way down, there had been a buildup of neutron-absorbing xenon in the reactor, and they couldn’t maintain the appropriate level of fission. The power fell to nearly nothing. Trying to boost it, the operators removed all of the control rods, which are made of neutron-absorbing boron carbide and are used to slow the fission reaction. Operators also reduced the flow of water through the reactor. This exacerbated the positive void coefficient problem, according to the Nuclear Energy Agency. Suddenly, the reaction became very intense indeed. Within seconds, the power surged to 100 times what the reactor was designed to withstand. [Chernobyl Nuclear Disaster 25 Years Later (Infographic)]

There were other design flaws that made it difficult to get the situation back under control once it started. For example, the control rods were tipped with graphite, De Geer said. When the operators saw that the reactor was starting to go haywire and tried to lower the control rods, the rods got stuck. The immediate effect was not to slow the fission, but to enhance it locally, because the additional graphite at the tips initially boosted the fission reaction’s efficiency nearby. Two explosions rapidly followed. Scientists still debate exactly what caused each explosion. They both may have been steam explosions from the rapid increase in pressure in the circulation system, or one may have been steam and the second a hydrogen explosion caused by chemical reactions in the failing reactor. Based on the detection of xenon isotopes at Cherepovets, 230 miles (370 kilometers) north of Moscow after the explosion, De Geer believes that the first explosion was actually a jet of nuclear gas that shot several kilometers into the atmosphere.

Changes made

The immediate aftermath of the accident was “a very unnerving time” in the Soviet Union, said Jonathan Coopersmith, a historian of technology at Texas A&M University who was in Moscow in 1986. At first, the Soviet authorities kept information close; the state-run press buried the story, and the rumor mill took over. But far away in Sweden, De Geer and his fellow scientists were already detecting unusual radioactive isotopes. The international community would soon know the truth.

On May 14, Soviet leader Mikhail Gorbachev gave a televised speech in which he opened up about what had happened. It was a turning point in Soviet history, Coopersmith told Live Science.

“It made glasnost real,” Coopersmith said, referring to the nascent policy of transparency in the Soviet Union.

It also opened a new era in cooperation for nuclear safety. In August 1986, the International Atomic Energy Agency held a post-accident summit in Vienna, and Soviet scientists approached it with an unprecedented sense of openness, said De Geer, who attended.

“It was amazing how much they told us,” he said.

Among the changes in response to Chernobyl were modifications to the other RBMK-1000 reactors in operation, 17 at the time. According to the World Nuclear Association, which promotes nuclear power, these changes included the addition of inhibitors to the core to prevent runaway reactions at low power, an increase in the number of control rods used in operation and an increase in fuel enrichment. The control rods were also retrofitted so that the graphite would not move into a position that would increase reactivity.

Chernobyl’s other three reactors continued operating until 2000 but have since closed, as have two more RBMKs in Lithuania, which were shut down as a requirement of that country entering the European Union. There are four RBMK reactors operating in Kursk, three in Smolensk and three in St. Petersburg (a fourth was retired in December 2018).

These reactors “aren’t as good as ours,” De Geer said, “but they are better than they used to be.”

“There were fundamental aspects of the design that couldn’t be fixed no matter what they did,” Lyman said. “I would not say they were able to increase the safety of the RBMK overall to the standard you’d expect from a Western-style light water reactor.”

In addition, De Geer pointed out, the reactors weren’t built with full containment systems as seen in Western-style reactors. Containment systems are shields made of lead or steel meant to prevent radioactive gas or steam from escaping into the atmosphere in the event of an accident.

Oversight overlooked?

Despite the potentially international effects of a nuclear plant accident, there is no binding international agreement on what constitutes a “safe” plant, Lyman said.

The Convention on Nuclear Safety requires countries to be transparent about their safety measures and allows for peer review of plants, he said, but there are no enforcement mechanisms or sanctions. Individual countries have their own regulatory agencies, which are only as independent as local governments enable them to be, Lyman said.

“In countries where there is rampant corruption and lack of good governance, how can you expect that any independent regulatory agency is going to be able to function?” Lyman said.

Though no one besides the Soviet Union made RBMK-1000 reactors, some proposed new reactor designs do involve a positive void coefficient, Lyman said. For example, fast-breeder reactors, which are reactors that generate more fissile material as they generate power, have a positive void coefficient. Russia, China, India and Japan have all built such reactors, though Japan’s is not operational and is slated for decommissioning, and India’s is 10 years behind schedule for opening. (There are also reactors with small positive void coefficients operating in Canada.)

“The designers are arguing that if you take everything into account, overall they’re safe, so that doesn’t matter that much,” Lyman said. But designers shouldn’t be overconfident in their systems, he said.

“That kind of thinking is what got the Soviets into trouble,” he said. “And it’s what can get us into trouble, by not respecting what we don’t know.”

No, That Baby Dinosaur Didn’t Crawl. But It Did Walk on 4 Legs As an Infant.

Just like a human, a Jurassic-period dinosaur used all four limbs to get around as an infant. But later, it switched to two legs.

The quadrupedal-to-bipedal switch made by this sauropodomorph — a type of herbivorous, long-necked and long-tailed dinosaur — appears to be unique in the animal kingdom.

“We cannot find any living animals, besides humans, that do a transition like this at all,” said study co-lead researcher Andrew Cuff, a post-doctoral researcher in biomechanics at the Royal Veterinary College (RVC) in the United Kingdom.

Researchers solved this leggy mystery thanks to six well-preserved specimens of this dinosaur, known as Mussaurus patagonicus, that spanned from infancy to adulthood.

M. patagonicus lived about 200 million years ago in what is now Patagonia, in southern Argentina. Although the dinosaur weighed more than a ton as an adult, the sauropodomorph was teeny as a babe — its skeletal remains can fit in a person’s palms.

Curious about how this critter moved, scientists from Argentina’s Museo de La Plata, the National Scientific and Technical Research Council in Argentina (CONICET), and RVC teamed up to create 3D digital scans of the dinosaur’s anatomy at different life stages.

Then, the researchers figured out the dinosaur’s mass by calculating the likely weight of its muscles and soft tissues. This data helped them to determine the creature’s center of mass at each age — that is, as a newly-hatched dinosaur, a 1-year-old juvenile, and an 8-year-old adult.

M. patagonicus, the researchers found, likely walked on all fours as a baby because its center of mass (also known as its balancing point) was so far forward. If it had only walked on its two hind legs, the dinosaur would have face-planted.

“If you can’t get your foot underneath your center of mass, you’re going to fall over,” Cuff said. “And so, it has to be compensating in a different way. Instead of just relying on its hind legs, it had to be using its forelegs to help support its mass.”
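Cuff's point is a simple static-balance condition: a stance works only if the center of mass sits over the base of support. A minimal sketch of that check, with made-up illustrative positions (not measurements from the study):

```python
# Static-balance condition behind the reasoning above: an animal can
# stand on a given set of feet only if its center of mass (along the
# body axis) lies over the region those feet span.
def can_balance(com_x, support_min_x, support_max_x):
    """Return True if the center of mass falls over the base of support."""
    return support_min_x <= com_x <= support_max_x

hind_feet = (0.0, 0.3)  # hypothetical span of the hind feet along the body

# Hatchling: center of mass far forward of the hind feet -> needs all fours.
print(can_balance(com_x=0.8, support_min_x=hind_feet[0], support_max_x=hind_feet[1]))  # → False
# Older animal: tail growth shifts the center of mass back toward the hips.
print(can_balance(com_x=0.2, support_min_x=hind_feet[0], support_max_x=hind_feet[1]))  # → True
```

The study's actual analysis was far richer, using 3D scans and mass estimates at each life stage, but this is the geometric constraint at its core.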

However, this dinosaur did not crawl as a baby, as some headlines have suggested. “All of this stuff that you might see about it crawling is incorrect,” Cuff said. “It’s definitely walking around on four legs rather than crawling, like a human baby might do.”

Shortly after the dinosaur’s first birthday, its center of mass shifted back toward its hips. So, it likely began walking on its two hind legs at this point, Cuff said. This center-of-mass shift was largely driven by the growth of the creature’s tail as it grew older, said study co-lead researcher Alejandro Otero, a vertebrate paleontologist at the Museo de La Plata and a CONICET researcher.

“It is important to notice that such locomotor switching is rare in nature,” Otero told Live Science in an email. “The fact that we were able to recognize it in extinct forms, like dinosaurs, highlights the importance of our exciting findings.”
