Sciworthy: The Encyclopedia of Science's Frontier (https://sciworthy.com/)

A complete DNA map to better detect cancer-causing changes
https://sciworthy.com/a-complete-dna-map-to-better-detect-cancer-causing-changes/
Thu, 19 Mar 2026 11:00:57 +0000

Researchers used a complete map of the human genome to detect large DNA changes associated with cancer. Their approach takes more time than traditional methods, but produces clearer, more reliable results.

When scientists study complex human diseases like cancer, one of the many steps is to compare the DNA sequences from diseased individuals to a template of genetic information from healthy individuals, called the reference genome. This step identifies changes in their DNA, or variants, which researchers then annotate as thoroughly as possible to determine, for example, what caused the disease and how it might respond to treatments.

Since 2000, the standard human reference genome has been incomplete because the scientific community lacked the technology to sequence some challenging regions. As a result, some variants scientists detected were false alarms, or false positives, which made it difficult to know which variants were truly driving tumor growth.

In 2022, a consortium of scientists published what they called the first truly complete human genome, built with new sequencing technologies whose outputs were far less fragmented than those of earlier methods. Since then, several researchers have explored the benefits of using the new genome over the previous reference genome for studying complex genetic diseases, such as cancers.

Researchers in Canada and the USA recently hypothesized that using the complete human genome would help them detect large variants, or structural variants, in cancer more accurately than the standard reference genome. If our genome were a textbook, these variants would be like missing, added, or flipped paragraphs or pages. Scientists have shown that structural variants can cause cancer by duplicating cancer-promoting genes, causing abnormal gene fusions, and deactivating genes that should naturally suppress cancer growth. 

The researchers tested their hypothesis using a well-established cellular model of cancer, or cancer cell line, called COLO829, paired with a control without cancer. Scientists generally use structural variant data from this cell line as the benchmark to evaluate new methods of detecting cancer variants. The researchers analyzed 4 independent samples of this cell line sequenced at different laboratories. They also examined 3 tumor samples from patients with blood, brain, and ovarian cancer to validate their findings in real clinical scenarios. The team compared the cancer DNA sequences to both reference genomes and used 4 different computational tools to identify structural variants.

The complete human reference genome has approximately 200 million additional base pairs of DNA sequence, closing gaps and completing regions missing from the standard reference genome. The team manually inspected results from the COLO829 sample and observed a decrease from 225 falsely identified structural variants using the standard reference genome to just 83 using the complete reference genome. This drop meant that the new reference genome improved their ability to detect structural variants.

The researchers stated that although the new human reference genome helped them identify DNA changes more accurately, it carried less labeled medical information than the older reference genome, annotations that scientists use to identify DNA changes that may be linked to diseases. To fix this, they used a tool called LevioSAM2, which let them carry over, or lift over, results from the new genome to the older one's coordinates. This strategy allowed them to leverage the greater accuracy of the new genome while benefiting from the detailed medical knowledge associated with the older genome. In other words, they got the best of both worlds.
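
The core idea behind a liftover is simple coordinate bookkeeping. Below is a minimal sketch of that idea in Python, assuming an invented table of aligned blocks; it illustrates the concept only, not LevioSAM2's actual algorithm, which works from whole-genome alignments and handles mismatches, inversions, and unmapped regions.

```python
# Toy liftover: map a position from one reference to another using a
# table of aligned blocks (new_start, new_end, old_start), assuming the
# references are colinear within each block. The block table below is
# hypothetical, purely for illustration.
BLOCKS = [
    (0, 1_000_000, 0),                  # identical region
    (1_000_000, 2_500_000, 1_200_000),  # older reference offset by 200 kb here
]

def lift_over(new_pos):
    """Return the old-reference position for a new-reference position,
    or None if the position has no counterpart in the older reference."""
    for new_start, new_end, old_start in BLOCKS:
        if new_start <= new_pos < new_end:
            return old_start + (new_pos - new_start)
    return None  # e.g., sequence present only in the complete genome

print(lift_over(1_500_000))  # 1700000
print(lift_over(3_000_000))  # None
```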

The researchers applied their combined approach to the 3 patient samples and observed fewer candidate cancer-specific variants requiring manual clinical review compared to using the standard reference genome alone. They explained that having fewer candidates streamlines the laborious process of identifying cancer-causing mutations from an otherwise long list of false alarms. From one patient’s sample, they detected a large variant, 609,000 base pairs in length, affecting a gene previously linked to several cancers; it showed only weak signals against the older reference genome but clear evidence against the new one.

The researchers concluded that their approach optimizes structural variant detection in cancer by reducing false positives, which can help doctors prioritize clinically relevant mutations. They noted that reducing false positives has important implications for analyzing patient samples, where filtering through false variants to find true cancer drivers requires time and expertise. Their liftover strategy increased analysis time by approximately 50% compared to using only the older reference genome, a trade-off the researchers considered acceptable given its substantial improvements in accuracy.

What lies between the stars?
https://sciworthy.com/what-lies-between-the-stars/
Mon, 16 Mar 2026 11:00:57 +0000

Astronomers simulated the clouds of material between stars. They found that regions of space with more heavy elements tend to be cooler and form fewer stars than regions with less heavy elements.

Space is empty. When one looks away from Earth or the Milky Way Galaxy and focuses on the space between galaxies, one finds that, on average, there’s only 1 atom for every cubic meter, or 35 cubic feet, of space. But space isn’t totally empty, and on smaller scales, within and around galaxies, one can find quite a lot.

Within galaxies, there are collections of material between stars that exist in various phases of temperature and density, called the multiphase interstellar medium or ISM. Most of this material is made of the 2 lightest elements, hydrogen and helium, with a small amount of all the other heavier elements, which astronomers refer to generally as metals. It is also this material between stars that creates more stars.

A team of astronomers recently investigated how the presence or absence of metals, a quantity called metallicity, impacts star-forming regions of the ISM. They did so by simulating ISM clouds with metallicities matching 7 different regions of nearby space: the immediate region around the Sun, called the solar neighborhood; a random patch of the Milky Way; the Large and Small Magellanic Clouds; the dwarf galaxy Sextans A; the globular cluster NGC1904; and the blue compact dwarf galaxy I Zwicky 18. The team’s simulations are part of the Simulating the Life-Cycle of molecular Clouds (SILCC) project, a collaboration among several European research institutions that use high-powered computers to study the lifecycles of star-forming clouds of gas.

The team used a simulation code that modeled how gas moves in space and interacts with magnetic fields. The simulation volume was an enormous rectangular prism measuring 500 parsecs by 500 parsecs by 4 kiloparsecs. In other words, that’s a box measuring 15 quadrillion kilometers by 15 quadrillion kilometers by 120 quadrillion kilometers, or 10 quadrillion miles by 10 quadrillion miles by 77 quadrillion miles! Inside this box was gas held together by its own self-gravity as a cloud, by gravity from star clusters inside the cloud and from old stars distributed through it, and by an external distribution of dark matter. To prevent the cloud from collapsing in on itself at the beginning of the simulation, the team set the gas moving at an average speed of 10 kilometers per second, or approximately 22,000 miles per hour, during the first 20 million years, essentially stirring the cloud.
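
Those eye-popping figures follow from the standard parsec-to-kilometer conversion. The short Python sketch below simply checks the arithmetic quoted above.

```python
# Verify the simulation-box dimensions quoted in the text.
PC_IN_KM = 3.0857e13     # kilometers per parsec
KM_TO_MILES = 0.62137    # miles per kilometer

side_pc, depth_pc = 500, 4_000   # 500 pc x 500 pc x 4 kpc

side_km = side_pc * PC_IN_KM     # ~1.5e16 km -> "15 quadrillion kilometers"
depth_km = depth_pc * PC_IN_KM   # ~1.2e17 km -> "120 quadrillion kilometers"

print(f"{side_km:.1e} km = {side_km * KM_TO_MILES:.1e} miles")
# 1.5e+16 km = 9.6e+15 miles (about 10 quadrillion miles)
print(f"{depth_km:.1e} km = {depth_km * KM_TO_MILES:.1e} miles")
# 1.2e+17 km = 7.7e+16 miles (about 77 quadrillion miles)
```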

The simulation modeled how the magnetic fields and fluids of the clouds moved after the starting period; how fast-moving, high-energy protons called cosmic rays interacted with the clouds; the formation, life, and death of stars within the clouds; and the chemistry of the remaining and resulting molecules over 200 million years. With all these factors accounted for in the simulation, the team isolated the impact of metallicity in each of the 7 simulations. The simulation representing the solar neighborhood had the highest metallicity, while the simulation representing I Zwicky 18 had the lowest, with only 2% the metallicity of the solar neighborhood.

They found that regions of the ISM with lower metallicity were, on average, warmer than those with higher metallicity. Their results showed that metals are much better at radiating away heat than hydrogen or helium. Cold phases of the ISM produced stars, which formed metals, while warmer, lower-metallicity regions tended to produce fewer stars, which further prevented them from cooling. This trend held up until the materials reached temperatures of around 1 million Kelvin, which is about 1 million °C or 2 million °F. 

The team qualified their results by noting that they made several simplifications. For one, many parameters in their code could be adjusted to model ISM clouds, and, for the sake of time, they could only vary the metallicity in each simulation, even though the corresponding regions of space differ across other parameters. They also undercounted some more common metals, such as carbon, oxygen, and silicon, which are formed through nuclear fusion in stars at higher rates than other metals. And finally, they assumed that all massive stars ended their lives by exploding into supernovae, excluding the possibility that some of these stars would have formed black holes.

Explaining breast cancer predictions from machine learning
https://sciworthy.com/explaining-breast-cancer-predictions-from-machine-learning/
Thu, 12 Mar 2026 11:00:17 +0000

Researchers applied statistical analyses to identify how machine learning models make breast cancer predictions. They argued that the more explainable a model, the more likely clinicians are to trust it for patient care.

Cells read the DNA within them to make useful products, such as proteins, through a process called gene expression. Scientists and health organizations have reported that gene expression datasets often contain too few patient samples relative to the many thousands of genes measured per sample, which presents a major barrier to reducing cancers globally. This imbalance makes it difficult to find or prioritize the changes in gene expression that differentiate cancer cells from healthy cells. Scientists refer to this challenge as the curse of dimensionality.

Machine learning techniques can model existing patterns within these large datasets and then classify samples as cancerous or not, but they introduce another barrier. Clinicians hesitate to trust the results because they don’t understand how a machine learning model reached its conclusions. They call this the black box problem. Therefore, researchers aim to develop methods that explain how a machine learning model makes its decisions.

A research team based across several African institutions focused on explaining breast cancer model predictions. They downloaded publicly available gene expression data from a global database, called the Cancer Genome Atlas, which contained almost 20,000 genes across 1,208 breast cancer samples. Their goal was to identify the few genes out of the 20,000 that could predict whether a tissue was cancerous. 

First, the researchers reduced the data to 3,602 genes that showed differential expression between breast cancer and healthy cells. From there, they used an algorithm to test multiple gene combinations and selected the smallest group of genes that consistently produced good results. It’s like conducting thousands of small races with different runners to figure out which ones always come in first, even if they all eventually reach the finish line. 
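
The article doesn’t name the exact selection algorithm, but recursive feature elimination (RFE) is one common wrapper method that races features against each other in this way, repeatedly dropping the weakest runners. The sketch below, on synthetic data, is a stand-in for the general approach rather than a reproduction of the study’s method.

```python
# Wrapper-style gene selection illustrated with recursive feature
# elimination (RFE): repeatedly fit a model and drop the weakest
# features. Synthetic data stands in for real expression values.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# 200 samples x 500 "genes", only 10 of which carry signal.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=10, random_state=0)

selector = RFE(LogisticRegression(max_iter=5000),
               n_features_to_select=10, step=50)
selector.fit(X, y)

kept = [i for i, keep in enumerate(selector.support_) if keep]
print("selected gene indices:", kept)
```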

Then, they trained and tuned multiple models using different machine learning techniques based on the expression data of the genes that the algorithm selected. They reported that all models performed well, correctly predicting cancer status at least 98% of the time. Next, they asked: “Which genes make the models work?” and “How do these genes influence the predictions?” 

They employed 4 different statistical interpretation methods, known as feature importance techniques, to identify the top-contributing genes to the models’ performance. The first one showed how each model’s prediction changes with the level of expression of a specific selected gene, and the second showed how multiple genes interact to drive the models’ decisions. The third method quantified the overall influence of each gene on the models’ decisions, thereby providing a ranked importance, and the last method assessed how well a single gene could predict breast cancer on its own. 
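
As one concrete illustration of the third idea, ranking each gene’s overall influence, permutation importance shuffles one feature at a time and records how much the model’s accuracy drops. Treat this as a representative technique rather than the study’s confirmed choice.

```python
# Rank feature influence by permutation importance: shuffle one
# feature's values and measure the resulting drop in accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=1)

ranking = sorted(enumerate(result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for idx, drop in ranking[:5]:
    print(f"feature {idx}: mean accuracy drop {drop:.3f}")
```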

The researchers identified 7 genes that consistently appeared across all trained models and feature importance scales. They confirmed that all these genes had relevant biological functions that can influence cancer growth, like repairing damaged tissues, controlling the movement of materials in and out of the cell, and regulating how cells defend themselves. 

The team noted that while different models tended to agree on the most important genes, the exact rankings and influence scores sometimes varied. They explained that with biological data, models often see different slices of the same reality, and therefore, better results come from combining viewpoints from multiple machine learning models rather than relying on a single one.

The researchers highlighted some limitations. The gene selection algorithm took nearly 6 hours on a powerful laptop, which was longer than they anticipated, so it may not be efficient for datasets larger than theirs. They also acknowledged that the algorithm might have omitted some important genes during its selection. And despite its large size, their dataset didn’t fully capture the diversity of breast cancer worldwide, so their models might not perform as well across all samples. The researchers concluded that combining machine learning models with transparent, explainable techniques is the future of cancer prediction to enable clinical trust in machine learning recommendations.

How the brain recognizes blocked objects
https://sciworthy.com/how-the-brain-recognizes-blocked-objects/
Mon, 09 Mar 2026 11:00:00 +0000

Researchers conducted a series of experiments with mice to determine how their brains fill in missing visual information.

Part of our brain’s job is determining what’s around us. It does this mainly through the 5 senses: sight, hearing, touch, smell, and taste. However, these senses—especially sight and hearing—often have incomplete information. For example, many objects that we look at are partially blocked from our view. Our brains make up for these deficits using our prior knowledge and expectations to fill in the gaps. This process is called sensory inference.

We use sensory inference so often that we barely notice. Take a coffee table—without sensory inference, you would fail to recognize the table as soon as you set down your drink! Despite how common sensory inference is, scientists don’t know how our brains do it. Recently, a team of researchers at the University of California, Berkeley, set out to understand the brain-level processes underlying sensory inference in mice.

Previous researchers found that mice, like humans, are susceptible to the Kanizsa illusion, pictured below. This illusion exploits sensory inference. Most people perceive an upside-down triangle, even though the image only shows 3 incomplete circles and a few angles. Researchers have demonstrated that similar illusions trigger sensory inference in mice. To continue this line of work, the team at Berkeley studied mouse brains to help understand how human brains carry out sensory inference.

“Kanizsa triangle” by Fibonacci is licensed under CC BY-SA 3.0. Most people looking at this image see a white triangle in the middle, rather than just three incomplete circles and three angles. This is because of sensory inference.

The researchers used 2 methods to detect brain activity in mice. First, they surgically inserted sensors, called Neuropixels, into the heads of 14 mice, which allowed them to monitor many neurons simultaneously. In the second method, referred to as two-photon imaging, they examined the brains of 4 mice using a special microscope that can see the activity of individual neurons. 

They explained that these 2 methods have complementary strengths and weaknesses. Neuropixels provide a broad view of brain activity, while two-photon imaging focuses on single neurons or small groups of neurons, called circuits. The researchers conducted each experiment on 2 groups of mice: they studied one group with Neuropixels and the other with two-photon imaging.

To figure out how sensory inference works, the researchers first determined which neurons were responsible for the mice perceiving a white triangle in the Kanizsa illusion. They recorded the brain activity of each group of mice while showing them 2 kinds of images—some were illusions, like the example, and others contained real shapes. They found that a region towards the back of their brains that processes low-level visual information, called V1, showed similar activity in response to the illusions as it did in response to the real shapes.

The researchers found that 2 types of neurons in the V1 region contributed to sensory inference. The first type consisted of neurons that only responded when shown an illusion. That is, they responded specifically to shapes that weren’t there. The researchers called these neurons illusory shape encoders. The second type showed similar activity regardless of whether there was an illusion, seemingly reacting to the individual shapes in the image. The researchers called these neurons segment responders.
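
One way to picture the distinction is to compare each neuron’s average response to illusion images versus real shapes. The toy sketch below invents a selectivity index and thresholds for illustration; the study’s actual classification criteria are not specified here.

```python
# Toy labeling of neurons by their responses to illusory vs. real
# shapes. The selectivity index and thresholds are illustrative
# inventions, not the study's actual criteria.
import numpy as np

rng = np.random.default_rng(0)
resp_illusion = rng.uniform(0, 10, 6)  # mean firing rates, illusion images
resp_real = rng.uniform(0, 10, 6)      # mean firing rates, real shapes

for i, (r_ill, r_real) in enumerate(zip(resp_illusion, resp_real)):
    # Near +1: fires mainly for illusions; near 0: fires similarly to both.
    selectivity = (r_ill - r_real) / (r_ill + r_real)
    if selectivity > 0.5:
        label = "illusory shape encoder"
    elif abs(selectivity) < 0.2:
        label = "segment responder"
    else:
        label = "unclassified"
    print(f"neuron {i}: selectivity {selectivity:+.2f} -> {label}")
```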

The team compared both types of neurons using machine learning algorithms. They found that illusory shape encoders, seemingly responsible for the illusion, were more connected to brain regions linked to highly advanced visual processing outside of V1. This suggested that illusory shape encoders and similar neurons help the brain use expectations to fill in information gaps. However, they still weren’t sure how these neurons accomplished this goal.

The researchers hypothesized that partial visual information triggers the illusory shape encoders, which then activate other neurons in V1, making it seem as if the illusory shape were really there. To test this hypothesis, they used lasers to stimulate the illusory shape encoders in mice that weren’t looking at anything. The illusory shape encoders then activated neurons throughout V1, causing the mice to experience the sensation of seeing a real shape.

They concluded that 3 consecutive circuits help create sensory inference illusions in mice. First, segment responders detect individual shapes and send signals to higher-level processing regions of the brain, which determine what information is missing and how to fill in the gaps. Then, these higher processing regions activate illusory shape encoders. Finally, the illusory shape encoders complete the pattern, activating the rest of V1 and creating the sensation of seeing an actual shape.

Although this research team focused on illusions, they argued that their findings could apply to sensory inference in general. As the scientific understanding of brain mechanisms like sensory inference expands, future researchers may be able to generalize their results to other functions of the human brain, such as memory and language processing.

Has Earth received any radio signals from aliens?
https://sciworthy.com/has-earth-received-any-radio-signals-from-aliens/
Thu, 05 Mar 2026 12:00:32 +0000

Scientists affiliated with the SETI@home project analyzed 14 years of radio wave data collected by the Arecibo Telescope and identified 92 candidates for potential incoming radio signals that warrant further investigation.

Unless aliens are living somewhere nearby in the solar system, it’s unlikely that scientists will find direct evidence of extraterrestrial entities. But that doesn’t mean it’s impossible to find evidence of life elsewhere in the universe. Astrobiologists look for signatures of life, typically in the form of telltale biological chemicals like molecular oxygen and ozone in an exoplanet atmosphere.

However, even if astrobiologists found an exoplanet atmosphere containing these chemicals, it’s possible that some unknown or obscure process made them in the absence of life. If scientists found evidence of technological activity in space, or a technosignature, that would be more conclusive evidence of intelligent life elsewhere in the universe. In 1984, researchers founded the SETI Institute, an independent research institute dedicated to the Search for Extraterrestrial Intelligence (SETI), including searches for technosignatures in the form of radio signals.

From 2006 to 2020, the SETI@home project worked alongside researchers who were surveying the sky for excess radio emissions from space for their own projects using the Arecibo Telescope, which collapsed in December 2020. The SETI@home team gathered 400 days of combined observation time over these 14 years, producing billions of excess radio wave detections from different points in space. However, they think the vast majority of these won’t be useful for finding aliens, since they come from radio frequency interference, benign sources like pulsars or gas clouds, or scattered points in the sky rather than a single source. 

To clean up their radio emission data, the team recently developed algorithms that remove interference and identify signals from fixed sources. This strategy prepared the researchers for their final step: re-observing these locations with the Five-hundred-meter Aperture Spherical Telescope, or FAST.

The radio signals that these algorithms are designed to detect would have to appear distinct from signals produced by natural sources in space. Therefore, the team established 3 criteria for detectable technological signals. First, they had to be fixed within a narrow frequency range over an extended period. Second, they had to pulse at a constant cycle. Or third, they had to contain repeating structures lasting several seconds. 

One caveat they addressed is that signals intentionally transmitted for others to detect would likely differ from radio waves that simply leaked from an alien world and were observed coincidentally. An intelligence trying to communicate would account for how the motion of its planet around its star affects the outgoing frequency of its signal, a phenomenon known as the Doppler shift. The team therefore reasoned that aliens trying to communicate would correct for this motion and produce radio signals at a nearly constant frequency, which would be a clear giveaway of an artificial radio source.
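
For a sense of scale, the frequency drift that a transmitter must cancel is set by its planet’s orbital acceleration. The sketch below uses Earth’s orbital values and the 1.42 GHz hydrogen line as illustrative assumptions.

```python
# Estimate the Doppler drift a transmitter on an Earth-like planet
# would need to cancel so its signal arrives at a constant frequency.
# Earth's orbital values are assumed purely for illustration.
C = 2.998e8          # speed of light, m/s
V_ORBIT = 2.98e4     # orbital speed, m/s (Earth's)
R_ORBIT = 1.496e11   # orbital radius, m (Earth's)
F0 = 1.42e9          # example transmit frequency, Hz (hydrogen line)

accel = V_ORBIT**2 / R_ORBIT   # centripetal acceleration
drift = F0 * accel / C         # maximum frequency drift, Hz per second

print(f"acceleration: {accel:.4f} m/s^2")  # ~0.0059 m/s^2
print(f"drift rate:   {drift:.3f} Hz/s")   # ~0.028 Hz/s
```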

During the development and testing of their algorithm, the researchers also included artificial data points intended to simulate hypothetical detections of definite technosignature radio signals. They called these artificial data points candidate birdies. If their algorithm identified the birdies as targets for further investigation, then they could be confident that it was working properly. They adjusted the algorithm’s detection sensitivity and filtering based on whether it included or excluded the birdies from the targets for further investigation.
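
Injection-recovery testing of this kind is straightforward to sketch: bury a synthetic tone in noise and confirm the detector flags it. The toy example below is an analogy for the birdie idea, not the SETI@home pipeline itself.

```python
# Bare-bones injection-recovery test: inject a synthetic narrowband
# "birdie" into noise and check that a simple power threshold
# recovers it.
import numpy as np

rng = np.random.default_rng(42)
n, birdie_bin = 4096, 700

noise = rng.normal(0, 1, n)
birdie = 0.5 * np.sin(2 * np.pi * birdie_bin * np.arange(n) / n)
power = np.abs(np.fft.rfft(noise + birdie)) ** 2

threshold = power.mean() + 5 * power.std()
hits = np.flatnonzero(power > threshold)
print("bins above threshold:", hits)            # should include bin 700
print("birdie recovered:", birdie_bin in hits)  # True
```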

The team addressed the challenges of filtering and scoring their data by breaking them up into smaller tasks that could run on multiple machines simultaneously. For a run of their algorithm on 2,000 connected computer processors, filtering took around 15 hours, while scoring took 1.6 days. They completed 2 runs of their algorithm on the SETI@home data, including 1 run with 3,000 birdies for comparison. The team used the birdies to determine which settings of their algorithm yielded the largest number of targets for further investigation above certain energy thresholds and across different radio frequencies. By doing so, they identified 92 candidate signals of interest as priorities to re-observe using 23 hours of observation time they secured with FAST.  

The work of re-observing and analyzing these signals is ongoing. As of July 2025, the researchers had re-observed 80 of the 92 candidate signals. They haven’t found any evidence of extraterrestrial intelligence yet, but they suggested that dedicated radio telescope surveys would help. In the meantime, the expense and demand for radio telescope time mean that SETI can still likely gather the most data by partnering with other radio astronomers, gaining access only to what they observe.

“Radical mundanity” could explain why we haven’t met aliens
https://sciworthy.com/radical-mundanity-could-explain-why-we-havent-met-aliens/
Wed, 04 Mar 2026 12:00:02 +0000

In response to the question, “Where are all the aliens?”, one astronomer argued that, if they are similar to humans, they may have faced logistical limitations long ago that prevented them from contacting Earth.

“Where are they?” is the question reportedly asked by the famous Italian-American physicist Enrico Fermi during a conversation with fellow scientists in the early 1950s. The other scientists understood that Fermi was referring to aliens. Over the rest of the conversation, Fermi went through a series of calculations showing not only that extraterrestrials should exist, but that they should have come to Earth eons ago, multiple times. He essentially argued that it is strange that Earth isn’t already an alien outpost, and that this could have implications for civilization itself.

In the ensuing decades, astronomers and scientists used this legendary conversation to formulate the Fermi Paradox, which asks: if other alien civilizations exist throughout the Galaxy and we can theoretically detect signs of their existence, then why don’t we see them? Scientists have hypothesized many solutions to the Fermi Paradox. One is the Great Filter, which holds that some event or events prevent civilizations from reaching the technological sophistication to find each other. Another, known as the Zoo Hypothesis, claims that aliens are aware of human civilization and choose not to make contact to avoid disrupting us. There’s also the possibility that they’re already here, living among us, and we don’t know it, or that everything from Unidentified Aerial-Undersea/Anomalous Phenomena (UAPs) to the interstellar object ‘Oumuamua could be signs of aliens.

Some solutions to the Fermi Paradox rely on assumptions about technological development, evolution, or the nature of intelligence itself. By contrast, researcher Robin H.D. Corbet recently proposed that the solution to this paradox could be mundanity. Corbet’s argument stems in part from the assumption that there’s nothing fundamentally special about the Sun, the Earth, or even humans, a concept known as the Copernican Mediocrity Principle. From this principle, Corbet developed a logical chain of reasoning, arguing that if alien civilizations are like humans, it shouldn’t be surprising that we haven’t met any.

Corbet proposed 2 main assertions of the “radical mundanity” solution to the Fermi Paradox. First, that technology can only advance so far — even if alien civilizations are more technologically advanced than humans, none of them will have faster-than-light travel or other physics-defying tools. And, second, that a modest number of alien civilizations inhabit the Galaxy, meaning humans are not alone, but civilizations are not everywhere. 

Regarding technology, he argued that the immutable laws of physics would prevent alien civilizations from developing warp drives that could travel across the galaxy in hours. On the more realistic side, engineering and ecological limits would push civilizations to adopt sustainable practices rather than maximize energy consumption. In principle, this means that no civilization is ever likely to build megaprojects that we could detect, such as artificial rings around their stars or enormous radio beacons broadcasting for millennia.

The presence of several civilizations like ours in the Galaxy has its own implications. If they follow similar rational logic and balance curiosity with caution, then their space exploration must be subject to cost-benefit analysis. Searching for other alien civilizations would satisfy scientific curiosity. However, gathering resources from distant star systems could take too long, and exchanging information would bring only marginal benefits unless some civilization developed paradigm-shifting technologies. This ultimately means diminishing returns for each new alien civilization discovered. 

Corbet then argued that any space exploration would likely be done by self-guided and perhaps self-replicating machines, sometimes referred to as Von Neumann probes. These probes would have some form of advanced artificial intelligence and could reasonably travel between stars at 1 thousandth the speed of light, roughly 700,000 miles per hour or 1,000,000 kilometers per hour. Fears about out-of-control artificial intelligence add costs beyond the probes’ materials. This means that any given alien civilization could encounter only a fraction of the Galaxy’s civilizations before deciding that the risks of continued exploration outweigh the benefits.
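
The quoted speed, and what it implies for travel times, is quick to check. The 10-light-year example trip below is an added illustration, not a figure from Corbet’s paper.

```python
# Check the quoted probe speed and what it implies for travel time.
C_KM_S = 299_792                  # speed of light, km/s
speed_km_s = C_KM_S / 1000        # 0.001 c, ~300 km/s

kmh = speed_km_s * 3600           # ~1.08 million km/h
mph = kmh * 0.62137               # ~670,000 mph, i.e., "roughly 700,000"
print(f"{kmh:,.0f} km/h, {mph:,.0f} mph")

LY_KM = 9.461e12                  # kilometers per light-year
SEC_PER_YEAR = 3.156e7
trip_ly = 10                      # illustrative trip to a nearby star
years = trip_ly * LY_KM / speed_km_s / SEC_PER_YEAR
print(f"{trip_ly} light-years at 0.001c takes ~{years:,.0f} years")
```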

Corbet concluded that if alien civilizations are living far from Earth, they could have reasonably abandoned the search for others millions of years ago, meaning we will continue to receive silence. Scientists could potentially detect nearby alien civilizations from their radio signals, especially with the newer radio arrays. However, we should expect them to be more like Earthlings than Star Trek Vulcans, implying no replicators or antimatter engines for us anytime soon. Corbet also suggested that UAPs found on Earth are almost certainly not aliens. Extraterrestrial life might not be that technologically interesting for us to find, but we are almost certainly too mundane for them to bother spying on us.

Are aliens receiving radio signals from Earth?
https://sciworthy.com/are-aliens-receiving-radio-signals-from-earth/
Tue, 03 Mar 2026 12:00:18 +0000

Scientists calculated the maximum distance at which extraterrestrials with modern human-level technology could detect radio signals from Earth.

Radio signals are a staple of the first-contact subgenre of science fiction. Carl Sagan’s Contact famously revolves around the discovery of encoded radio signals from the star Vega, Liu Cixin’s The Three-Body Problem follows what happens after a scientist secretly makes radio contact with aliens, and Vince Gilligan’s Pluribus centers on what happens after scientists follow instructions communicated to Earth via radio signals. But how likely is it that we could actually receive an alien radio signal, or that aliens could receive an outgoing signal from Earth?

Scientists at Pennsylvania State University and the California Institute of Technology recently investigated this question. They identified radio signals as especially relevant to the search for intelligent alien life because astronomers know that intelligent species, humans in our case, can build machines that produce them and devices that detect them. 

The team was particularly interested in a subset of radio transmissions from Earth that relay signals between ground stations and spacecraft far from Earth. This system, known as NASA’s Deep-Space Network, or DSN, comprises 3 sites in the United States, Spain, and Australia, each equipped with 70-meter (230-foot) and 34-meter (112-foot) radio antennas.

The distance at which signals from these antennas can be detected depends on how powerful the signal is, how long the would-be detector is observing the signal, the bandwidth they’re observing it with, and how distinct from background noise the signal needs to be. The team used a mathematical equation to calculate this distance, using the typical input power for a DSN signal and assuming that an extraterrestrial intelligence’s telescope for detecting signals from Earth would have similar specifications to the Green Bank Telescope, with 30-minute observation times. Using these parameters, they estimated that aliens could detect radio signals in a radius around Earth of approximately 7 parsecs. That’s 200 trillion kilometers or 100 trillion miles, which is only about 0.02% of the Milky Way’s diameter.
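
The structure of that calculation combines a telescope’s sensitivity, from the radiometer equation, with the inverse-square spreading of the transmitted power. The sketch below uses illustrative placeholder values rather than the study’s exact inputs, so its output differs from the 7-parsec figure; it is meant to show the shape of the estimate, not reproduce it.

```python
# Skeleton of a detection-range estimate: effective radiated power
# spreads over a sphere, and a receiver detects the signal out to the
# distance where the flux falls to its minimum detectable level.
# All values are illustrative assumptions, not the study's inputs.
import math

# Receiver sensitivity via the radiometer equation (GBT-like guesses):
SEFD = 1e-25          # system equivalent flux density, W/m^2/Hz (~10 Jy)
snr, chan_hz = 10, 3  # detection threshold and channel width (assumed)
n_pol, t_obs = 2, 1800            # polarizations; 30-minute observation
f_min = snr * SEFD * math.sqrt(chan_hz / (n_pol * t_obs))  # W/m^2

# Transmitter: input power times antenna gain (DSN-like guesses).
power_w, gain = 20e3, 1e7
eirp = power_w * gain

d_max_m = math.sqrt(eirp / (4 * math.pi * f_min))
print(f"{d_max_m / 3.086e16:.0f} pc")
# ~24 pc with these placeholders; the study's assumptions give ~7 pc.
```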

Next, the astronomers asked 2 related questions, the first of which would provide insights into the second. They asked: From which directions in the sky would Earth be most detectable by its radio signals? And, in which directions are the planetary systems from which Earth would most likely detect extraterrestrial radio signals? 

To answer the first question, the team identified the directions Earth is sending radio signals by analyzing patterns in the distribution of DSN signals from Earth to satellites and telescopes like JWST. If DSN patterns from Earth resemble anything like what an extraterrestrial intelligence might have, then knowing these patterns would tell astronomers where distant observers are most likely to detect them. They used publicly available DSN schedules to see where in the sky and for how long their antennas sent radio signals from Earth. They used this data to construct a sky map, showing the directions in which Earth’s signals were being broadcast. 

They found that most of the radio signals sent from Earth are directed at the Advanced Composition Explorer, the Deep Space Climate Observatory, and the Solar and Heliospheric Observatory, and travel close to the apparent path of the Sun across the sky, known as the ecliptic. They discovered that up to 79% of Earth’s deep-space radio transmissions were within 5° of the ecliptic, with prominent but weaker peaks in the directions of Mars, Mercury, Jupiter, Saturn, and JWST.

The researchers argued that if human civilization and its radio signals serve as a model for what to look for, then these findings have several implications. First, astronomers should prioritize searching for radio signals from distant planetary systems in which exoplanets pass between Earth and their host star. This is so that Earth-based observers could catch any stray signals an extraterrestrial civilization is sending to its own satellites and probes near its equivalent of the ecliptic. 

Second, astronomers should prioritize searching during times when exoplanets around a host star pass behind each other. This is because when Earth passes behind a planet in the solar system that we’re sending radio signals to, the likelihood of a distant observer detecting these signals increases to 12%. So, if an extraterrestrial civilization is sending radio signals to its equivalent of Jupiter or Mars, Earth’s astronomers would have a better chance of spotting them. 

Third, because most of Earth’s deep-space radio signals are concentrated near the ecliptic, astronomers should prioritize looking for radio signals in stars close to the ecliptic. Those stars are the most likely to have received radio signals coming from Earth and may be trying to reply. Following this model, they identified 128 star systems within a 7-parsec radius of Earth as possible places where aliens with human-civilization-level intelligence could detect Earth via its DSN transmissions, and vice versa. So, to have the best chance of finding Vegans, Trisolarans, or an alien hivemind, it may be best to look along the path of the Sun!

These gases could signify intelligent life beyond Earth
https://sciworthy.com/these-gases-could-signify-intelligent-life-beyond-earth/
Mon, 02 Mar 2026 12:00:50 +0000

Astronomers proposed that the industrial gases nitrogen trifluoride and sulfur hexafluoride could provide measurable signs of advanced technology on distant exoplanets.

Humans have searched for signs of intelligent life beyond Earth for over a century. The most well-known effort, the Search for Extraterrestrial Intelligence (SETI), was popularized by Carl Sagan’s 1985 novel, Contact, and its later film adaptation. Like Sagan’s protagonist, many SETI researchers use telescopes to listen for radio signals from distant civilizations. But radio waves are only one way scientists search for alien life.

Astronomers sweep the skies for measurable signs of advanced technology, known as technosignatures. In 1906, astronomer Percival Lowell mapped what he believed to be a vast series of artificial “canals” on Mars. In 1960, physicist Freeman J. Dyson proposed that an advanced civilization might build a structure around its star to harvest energy, sometimes called a Dyson sphere. Lowell’s canals turned out to be an illusion, and Dyson spheres remain hypothetical, but the hunt for technosignatures persists.

Today, astronomers analyze the chemical fingerprints of distant planetary atmospheres to search for signs of life or advanced technology. Researchers have proposed looking for industrial gases, such as chlorofluorocarbons or hydrofluorocarbons, to detect alien civilizations on exoplanets. However, these gases are in Earth’s atmosphere at extremely low levels, which suggests that detecting them on an exoplanet would be challenging. Under ideal conditions, astronomers would need up to 500 hours of observation time on JWST, the largest telescope in space, to detect comparable concentrations.

Scientists led by Sara Seager at MIT proposed nitrogen trifluoride (NF3) and sulfur hexafluoride (SF6) as ideal technosignature gases. Both gases are produced by industries on Earth – NF3 is manufactured to clean semiconductors and solar panels, while SF6 is made to insulate transformers and other high-voltage electrical equipment – and their concentrations in the atmosphere have rapidly increased over the last few decades.

The team first ruled out biological sources of these gases, since living things could, ironically, produce a false positive for a technosignature. They searched a database of all chemicals made by Earth-based life and found that no known lifeform produces NF3 or SF6. In fact, no lifeform is known to produce molecules like these with nitrogen-fluorine or sulfur-fluorine bonds.

The researchers suggested that life on Earth might avoid using molecules based on fluorine because it’s locked in minerals and hard to remove. It also has unique chemical properties that make it difficult for biological machinery to handle. Namely, it attracts electrons more strongly than any other element, so it reacts intensely with other molecules and forms difficult-to-break bonds. They argued that these chemical traits would also make fluorine incompatible with extraterrestrial life.

Next, they considered whether these gases had any non-biological, or abiotic, sources, like tectonics or other geologic processes. NF3 has no known abiotic sources on Earth, but volcanism produces minor amounts of SF6. They suggested that any volcanism that produced SF6 would also release the more common volcanic gas silicon tetrafluoride (SiF4), such that astronomers could simultaneously observe SiF4 and SF6 to identify a volcanic source. If they found SF6 without SiF4 on an exoplanet, it would strengthen the case for a technosignature.

Finally, the scientists considered how easily these gases could be distinguished from other atmospheric gases on exoplanets. To “see” an exoplanet’s atmosphere, astronomers watch it move in front of its star and measure the wavelengths of light that pass through it. These data produce patterns known as transmission spectra. Ideally, each bump on the spectra would represent a single atmospheric gas, but in reality, some overlap or block each other, making them difficult to distinguish.
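
The size of an atmospheric signal in a transmission spectrum is set by simple geometry: the extra starlight blocked by a ring of atmosphere roughly one scale height thick. The parameters below are assumptions chosen to resemble a super-Earth around an M-dwarf, not values from the study.

```python
# Rough scale of a transmission-spectrum signal: the extra starlight
# blocked by an annulus of atmosphere one scale height thick.
# All parameters are illustrative assumptions.
K_B, M_H, G = 1.381e-23, 1.673e-27, 6.674e-11

M_p = 5 * 5.972e24    # planet mass: 5 Earth masses
R_p = 1.7 * 6.371e6   # assumed radius for such a rocky planet, m
R_s = 0.2 * 6.957e8   # M-dwarf radius: 0.2 solar radii, m
T, mu = 300.0, 28.0   # temperature (K); mean molecular weight (N2-like)

g = G * M_p / R_p**2          # surface gravity, ~17 m/s^2
H = K_B * T / (mu * M_H * g)  # atmospheric scale height, ~5 km

signal = 2 * R_p * H / R_s**2  # fractional change in transit depth
print(f"scale height: {H / 1000:.1f} km")
print(f"signal per scale height: {signal * 1e6:.0f} ppm")  # ~6 ppm
```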

The team used the computer model Simulated Exoplanet Atmosphere Spectra (SEAS) to generate transmission spectra for a rocky exoplanet about 5 times Earth’s mass, known as a Super Earth, orbiting an M-dwarf star. They simulated spectra for 3 different atmospheres: one dominated by H2, one by N2, and one by CO2. They found that NF3 and SF6 both had spectral features that were distinguishable from those of the major atmospheric gases and theoretically detectable by JWST, albeit at levels much higher than current atmospheric levels on Earth. Next-generation telescopes, like the Habitable Worlds Observatory and Large Interferometer for Exoplanets, will be better suited to detect these. 

Seager and her colleagues concluded that NF3 and SF6 are promising technosignature gases, but many uncertainties remain. Scientists have a limited understanding of how these gases behave in Earth’s atmosphere. In addition, their transmission spectra overlapped with those of chlorofluorocarbon gases, requiring additional research to untangle the signals. They also noted that it’s impossible to predict the waste products extraterrestrial biology might produce. But if astronomers observed a rapid, steady increase in technosignature gases on an exoplanet over about 100 years, it might be the clearest mark of an alien civilization becoming industrialized. Astronomers just have to get lucky enough to observe it!

JWST reveals properties of dusty star-forming galaxies
https://sciworthy.com/jwst-reveals-hidden-properties-of-dusty-star-forming-galaxies/
Thu, 26 Feb 2026 12:00:57 +0000

Astronomers used new JWST data to unravel the structures of 22 dusty star-forming galaxies that previous telescopes couldn’t see.

The origins of the universe are veiled in dust. Space is filled with tiny particles ranging in size from a few molecules to micrometers. That’s at most 1 millionth of a meter, or less than a ten-thousandth of an inch! From the early universe to today, huge clouds of gas and dust have accumulated and collapsed, creating stars and galaxies. Studying these particles provides scientists with insight into what happened in the early universe. However, dust also obscures many interstellar objects from telescopes, limiting what scientists can learn about deep space.

Astronomers are particularly interested in a class of deep-space objects known as dusty star-forming galaxies or DSFGs, which produce new stars at exceptionally high rates. These ancient and distant galaxies create over 100 stars per year, almost 10 times as many as the Milky Way produces, but dust completely obscures them from visible light. To measure the properties of DSFGs, astronomers need images sharp enough to distinguish a galaxy’s internal structure, a capability known as resolving. It’s like looking at detailed pictures in HD or 4K on a screen, but the image is of space! Yet, no equipment was able to resolve DSFGs until JWST.

An international team of astronomers recently resolved 22 DSFGs using JWST’s Near-Infrared Camera, known as NIRCam. NIRCam can observe galaxies at wavelengths of 0.6 to 5 micrometers; the longest of these is 5 millionths of a meter, or about 2 ten-thousandths of an inch. Astronomers can use these high-resolution observations to bypass the dust enshrouding DSFGs.

The team used 7 different filters on NIRCam to isolate certain wavelengths, or colors, of light for each galaxy. Each filter traces different physical properties of the galaxy, such as the galaxy’s size, shape, clumpiness, mass, and star-forming rate. No single filter can resolve all of these properties at once. Astronomers also adjust the filters based on the distance between the galaxy and the Earth. The universe is expanding, which means old, distant galaxies like DSFGs are moving away from our own. This expansion stretches the light waves we receive from them, a phenomenon called redshift.
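
The stretching follows a simple relation: the observed wavelength equals the emitted wavelength times (1 + z), where z is the redshift. The z = 3 value below is an illustrative choice, not one from the study.

```python
# Observed wavelength of redshifted light: lambda_obs = lambda_emit * (1 + z).
def observed_wavelength_um(emitted_um, z):
    return emitted_um * (1 + z)

z = 3.0                     # illustrative redshift for a distant galaxy
for emitted in (0.5, 1.0):  # green light and near-infrared, in micrometers
    print(f"{emitted} um emitted -> {observed_wavelength_um(emitted, z)} um observed")
# 0.5 um -> 2.0 um and 1.0 um -> 4.0 um: both land in NIRCam's 0.6-5 um range.
```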

The team used their high-resolution data to classify the DSFGs into 3 types, based on their visual properties. Type I galaxies form stars throughout their entirety. Type II galaxies only create stars within their center, or core. Lastly, Type III galaxies only create stars outside their core, in an area called the disk. Astronomers studying the history of the universe look for regions where no stars form, a trait known as quenching, to recognize Type II and Type III galaxies. The researchers found that the DSFGs they studied consisted of 10 Type I galaxies, 5 Type II galaxies, and 7 Type III galaxies.

The team then examined the internal properties of each galaxy to create general trends within each category. To estimate their masses and the rate at which they create stars, the astronomers applied a model that used patterns in the light emitted by the DSFGs. They found that the galaxies’ masses ranged from 30 billion to 300 billion times the mass of the Sun, meaning that even the most massive DSFG was less massive than the Milky Way, and that they formed 25 to 500 stars per year. They also found that these galaxies were between 10 billion light-years and 18 billion light-years from Earth. That’s bigger than a trillion trillion meters or inches!

The researchers also described trends in the shapes of these galaxies. One was that the farther and older a galaxy was, the more fragmented its shape. The astronomers interpreted this fragmented shape to mean that high-redshift DSFGs are in the process of forming a tightly packed group of stars in the center, known as a bulge. The team suggested that the galaxies forming bulges will eventually quench in their cores, becoming Type III galaxies. In addition, the scientists discovered a previously hidden feature of most of the galaxies, which is that they are lopsided. This lopsidedness signals to astronomers that, at some point in time, these galaxies might have merged with other galaxies. 

The team concluded that high-resolution data from JWST can be used to discern hidden characteristics of DSFGs, which can help astronomers determine what occurred to them in the past and predict what will happen to them in the future. They suggested future researchers use JWST data to test theories on why these galaxies are lopsided and how they evolved over time.

The European ant that clones another species
https://sciworthy.com/the-european-ant-that-clones-another-species/
Mon, 23 Feb 2026 12:00:19 +0000

European ant queens can lay eggs of a different species to mate with them and create a separate set of workers. These worker ants may represent the only case of animal domestication not performed by humans.

We generally assume that an organism’s offspring are of the same species. However, the species known as the European ant, Messor ibericus, doesn’t always comply. Researchers in evolutionary ecology discovered that some ants from the Messor genus seemed to be offspring of individuals of 2 different species, called hybrids.

Scientists recently conducted a study at the University of Montpellier in France, where they found that European ant queens produce workers by cloning hybrids of another ant species. This makes the European ants the first known animals to naturally produce offspring of another species, in a process called xenoparity, challenging scientists’ assumptions about reproductive biology.

Researchers analyzed the population genetics of Messor ants by looking at single DNA letters at specific points in their genome. They found that all of the European ant species’ workers are hybrids. Genetic sequencing confirmed that the workers had maternal genes from European ants and paternal genes from their close relatives, the harvester ants, or Messor structor. These species do not commonly coexist in Europe, raising the question: how are these hybrids produced when the parent species don’t overlap?

To answer this question, the scientists started by observing samples of wild European ant colonies. Among 132 males from 26 colonies, the researchers observed that 44% of the males were hairy, a characteristic of European ants, while the other 56% appeared hairless, a characteristic of harvester ants. They analyzed the shared ancestry of these ants using DNA and protein sequences, and confirmed that these physical differences occurred because the males in the colony consisted of 2 distinct species, the European ant and the harvester ant, which diverged more than 5 million years ago.

They explained that the European ant queens mate with both European ant and harvester ant males, meaning they are polyandrous. Sperm from European ant males only produces queen ants, so European ant queens must use sperm from the harvester ant species to produce the entirety of their worker caste. This makes all workers hybrid females. It implies that without the harvester ant males, the European ant would be unable to produce the workers needed to maintain their colonies.

The researchers then sequenced the part of the ants’ genomes that is exclusively inherited from the mother, known as their mitochondrial genome, to confirm that the male European ants and male harvester clones share European ant mothers. Of 286 eggs genotyped from 5 European ant laboratory colonies, scientists found that 9% of the eggs laid by the queens contained exclusively harvester ant DNA. This substantiated the notion that European ant queens can lay offspring that lack any of their own DNA. This phenomenon, whereby males are the sole source of genetic material, is called male clonality, or androgenesis

The researchers postulated that millions of years ago, when both species regularly coexisted, European ant queens acquired the sperm from wild harvester ant colonies to produce workers. Then, when harvester ants later declined in the European ants’ habitat, the queens began storing their sperm and removing their own genetic material from the egg to directly clone their males. This strategy produced a distinct clonal lineage of harvester ant males that continues today.

The researchers demonstrated that cloned males fathered most of the hybrid workers in the colonies they observed. However, they also documented a small percentage of workers fathered by wild harvester ant males. The cloned males had much less genetic diversity than the wild males. They also looked different, like a domestic cat compared to a wild relative. In fact, the harvester ant clones lacked some hair on their bodies that their wild counterparts possessed. Due to these differences, the researchers proposed that the male clones should be considered a domesticated lineage of the harvester ant species.

While artificial cloning is not new to science, the European ant queens have naturally developed this unique adaptation to survive. Scientists have demonstrated that these queens naturally clone another species’ male, but the cellular and genetic mechanisms behind this cloning process remain unknown. The evolutionary origins of this behavior and its implications for other species are still unclear, but the French team will continue working to uncover this mystery.
