The Download: how doctors fight conspiracy theories, and your AI footprint

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How conspiracy theories infiltrated the doctor’s office

As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger.

We spoke to a number of health-care professionals who told us how this modern impulse to “do your own research” is changing their profession. Read the full story.

—Rhiannon Williams

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.

Stop worrying about your AI footprint. Look at the big picture instead.

—Casey Crownhart

As a climate technology reporter, I’m often asked by people whether they should be using AI, given how awful it is for the environment. Generally, I tell them not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want.

That response might surprise some. I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. But I feel strongly about not putting the onus on individuals. Here’s why.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

A new ion-based quantum computer makes error correction simpler

A company called Quantinuum has just unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability.

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling.

But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s. Read the full story.

—Sophia Chen

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 A new California law could change how all Americans browse online 
It gives web users the chance to opt out of having their personal information sold or shared. (The Markup)

2 The FDA has fast-tracked a pill to treat pancreatic cancer
The experimental drug appears promising, but experts worry corners may be cut. (WP $)
+ Demand for AstraZeneca’s cancer and diabetes drugs is pushing profits up. (Bloomberg $)
+ A new cancer treatment kills cells using localized heat. (Wired $)

3 AI pioneers claim it is already superior to humans in many tasks
But not all tasks are created equal. (FT $)
+ Are we all wandering into an AGI trap? (Vox)
+ How AGI became the most consequential conspiracy theory of our time. (MIT Technology Review)

4 IBM is planning on cutting thousands of jobs
It’s shifting its focus to software and AI consulting, apparently. (Bloomberg $)
+ It’s keen to grow the number of its customers seeking AI advice. (NYT $)

5 Big Tech’s data centers aren’t the job-generators we were promised
The jobs they do create are largely in security and cleaning. (Rest of World)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

6 Microsoft let AI shopping agents loose in a fake marketplace 
They were easily manipulated into buying goods, it found. (TechCrunch)
+ When AIs bargain, a less advanced agent could cost you. (MIT Technology Review)

7 Sony has compiled a dataset to test the fairness of computer vision models
And it’s confident it’s been compiled in a fair and ethical way. (The Register)
+ These new tools could make AI vision systems less biased. (MIT Technology Review)

8 The social network is no more
We’re living in an age of anti-social media. (The Atlantic $)
+ Scam ads are rife across platforms, but these former Meta workers have a plan. (Wired $)
+ The ultimate online flex? Having no followers. (New Yorker $)

9 Vibe coding is Collins Dictionary’s word of 2025 📖
Beating stiff competition from “clanker.” (The Guardian)
+ What is vibe coding, exactly? (MIT Technology Review)

10 These people found romance with their chatbot companions
The AI may not be real, but the humans’ feelings certainly are. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

Quote of the day

“The opportunistic side of me is realizing that your average accountant won’t be doing this.”

—Sal Abdulla, founder of accounting-software startup NixSheets, tells the Wall Street Journal he’s using AI tools to gain an edge on his competitors.

One more thing

Ethically sourced “spare” human bodies could revolutionize medicine

Many challenges in medicine stem, in large part, from a common root cause: a severe shortage of ethically sourced human bodies.

There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain.

Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman. Read the full story.

—Carsten T. Charlesworth, Henry T. Greely & Hiromitsu Nakauchi

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Make sure to look up so you don’t miss November’s supermoon.
+ If you keep finding yourself mindlessly scrolling (and who doesn’t?), maybe this whopping six-pound phone case could solve your addiction.
+ Life lessons from a 101-year-old who has no plans to retire.
+ Are you a fan of movement snacking?

A new ion-based quantum computer makes error correction simpler

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. 

Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s.

“Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner.

Located at Quantinuum’s facility in Colorado, Helios comprises myriad components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium. These components all sit within a chamber that is cooled to about 15 kelvins (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.

Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both. 
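The coin-flip analogy can be made concrete with a toy simulation (a sketch for illustration only, not Quantinuum's software): a qubit is modeled as two complex amplitudes for the 0 and 1 outcomes, and measuring it yields each outcome with probability equal to the squared magnitude of its amplitude.

```python
import random

def measure(alpha: complex, beta: complex) -> int:
    """Simulate measuring a qubit with amplitudes (alpha, beta) for |0> and |1>.

    The outcome probabilities are |alpha|^2 and |beta|^2, which must sum to 1.
    """
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1) < 1e-9, "state must be normalized"
    return 0 if random.random() < p0 else 1

# An equal superposition -- the "coin in the air": 50/50 odds of heads or tails.
alpha = beta = 2 ** -0.5
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
# Each outcome lands near half of the 10,000 measurements.
```

A definite state behaves classically: `measure(1, 0)` always returns 0, just as a coin lying flat is simply heads.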

Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 

In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines. 

A key challenge to making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, quantum computers can’t do this and require special correction techniques. 
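To see what classical redundancy looks like, consider the simplest scheme, a three-copy repetition code with majority-vote decoding (a generic classical illustration; quantum error correction cannot literally copy states this way, because of the no-cloning theorem):

```python
def encode(bit: int) -> list[int]:
    """Store one bit redundantly as three identical copies."""
    return [bit] * 3

def decode(copies: list[int]) -> int:
    """Recover the bit by majority vote, tolerating any single flipped copy."""
    return 1 if sum(copies) >= 2 else 0

stored = encode(1)          # [1, 1, 1]
stored[0] ^= 1              # a single-bit error corrupts one copy: [0, 1, 1]
recovered = decode(stored)  # majority vote still yields the original bit, 1
```

Quantum error correction achieves an analogous robustness by spreading one logical qubit's information across several entangled physical qubits instead of copying it.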

Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit.

This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits.

Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam.

This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley.

Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says. 

Beyond the increased qubit count, another notable achievement for Quantinuum is demonstrating error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than chips known as FPGAs, also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios’s predecessor, with the claim that it “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company has used the machine to simulate the behavior of electrons in a high-temperature superconductor.

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027, with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.

The State of AI: Is China about to win the race? 

The State of AI is a collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution.

In this conversation, the FT’s tech columnist and Innovation Editor John Thornhill and MIT Technology Review’s Caiwei Chen consider the battle between Silicon Valley and Beijing for technological supremacy.

John Thornhill writes:

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

Here in the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers. The legendary investor Warren Buffett once warned: “Never bet against America.” He is right that for more than two centuries, no other “incubator for unleashing human potential” has matched the US.

Today, however, China has the means, motive, and opportunity to commit the equivalent of technological murder. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be just as rash to bet against China.

The data highlights the trends. In AI publications and patents, China leads. By 2023, China accounted for 22.6% of all citations, compared with 20.9% from Europe and 13% from the US, according to Stanford University’s Artificial Intelligence Index Report 2025. As of 2023, China also accounted for 69.7% of all AI patents. True, the US maintains a strong lead in the top 100 most cited publications (50 versus 34 in 2023), but its share has been steadily declining. 

Similarly, the US outdoes China in top AI research talent, but the gap is narrowing. According to a report from the US Council of Economic Advisers, 59% of the world’s top AI researchers worked in the US in 2019, compared with 11% in China. But by 2022 those figures were 42% and 28%. 

The Trump administration’s tightening of restrictions for foreign H-1B visa holders may well lead more Chinese AI researchers in the US to return home. The talent ratio could move further in China’s favor.

Regarding the technology itself, US-based institutions produced 40 of the world’s most notable AI models in 2024, compared with 15 from China. But Chinese researchers have learned to do more with less, and their strongest large language models—including the open-source DeepSeek-V3 and Alibaba’s Qwen 2.5-Max—surpass the best US models in terms of algorithmic efficiency.

Where China is really likely to excel in future is in applying these open-source models. The latest report from Air Street Capital shows that China has now overtaken the US in terms of monthly downloads of AI models. In AI-enabled fintech, e-commerce, and logistics, China already outstrips the US. 

Perhaps the most intriguing—and potentially the most productive—applications of AI may yet come in hardware, particularly in drones and industrial robotics. With the research field evolving toward embodied AI, China’s advantage in advanced manufacturing will shine through.

Dan Wang, the tech analyst and author of Breakneck, has rightly highlighted the strengths of China’s engineering state in developing manufacturing process knowledge—even if he has also shown the damaging effects of applying that engineering mentality in the social sphere. “China has been growing technologically stronger and economically more dynamic in all sorts of ways,” he told me. “But repression is very real. And it is getting worse in all sorts of ways as well.”

I’d be fascinated to hear from you, Caiwei, about your take on the strengths and weaknesses of China’s AI dream. To what extent will China’s engineered social control hamper its technological ambitions? 

Caiwei Chen responds:

Hi, John!

You’re right that the US still holds a clear lead in frontier research and infrastructure. But “winning” AI can mean many different things. Jeffrey Ding, in his book Technology and the Rise of Great Powers, makes a counterintuitive point: For a general-purpose technology like AI, long-term advantage often comes down to how widely and deeply technologies spread across society. And China is in a good position to win that race (although “murder” might be pushing it a bit!).

Chips will remain China’s biggest bottleneck. Export restrictions have throttled access to top GPUs, pushing buyers into gray markets and forcing labs to recycle or repair banned Nvidia stock. Even as domestic chip programs expand, the performance gap at the very top still stands.

Yet those same constraints have pushed Chinese companies toward a different playbook: pooling compute, optimizing efficiency, and releasing open-weight models. DeepSeek-V3’s training run, for example, used just 2.6 million GPU-hours—far below the scale of US counterparts. And Alibaba’s Qwen models now rank among the most downloaded open-weight models globally, while companies like Zhipu and MiniMax are building competitive multimodal and video models.

China’s industrial policy means new models can move from lab to implementation fast. Local governments and major enterprises are already rolling out reasoning models in administration, logistics, and finance. 

Education is another advantage. Major Chinese universities are implementing AI literacy programs in their curricula, embedding skills before the labor market demands them. The Ministry of Education has also announced plans to integrate AI training for children of all school ages. I’m not sure the phrase “engineering state” fully captures China’s relationship with new technologies, but decades of infrastructure building and top-down coordination have made the system unusually effective at pushing large-scale adoption, often with far less social resistance than you’d see elsewhere. The use at scale, naturally, allows for faster iterative improvements.

Meanwhile, Stanford HAI’s 2025 AI Index found Chinese respondents to be the most optimistic in the world about AI’s future—far more optimistic than populations in the US or the UK. It’s striking, given that China’s economy has slowed since the pandemic, its first sustained slowdown in over two decades. Many in government and industry now see AI as a much-needed spark. Optimism can be powerful fuel, but whether it can persist through slower growth is still an open question.

Social control remains part of the picture, but a different kind of ambition is taking shape. The Chinese AI founders in this new generation are the most globally minded I’ve seen, moving fluidly between Silicon Valley hackathons and pitch meetings in Dubai. Many are fluent in English and in the rhythms of global venture capital. Having watched the last generation wrestle with the burden of a Chinese label, they now build companies that are quietly transnational from the start.

The US may still lead in speed and experimentation, but China could shape how AI becomes part of daily life, both at home and abroad. Speed matters, but speed isn’t the same thing as supremacy.

John Thornhill replies:

You’re right, Caiwei, that speed is not the same as supremacy (and “murder” may be too strong a word). And you’re also right to amplify the point about China’s strength in open-weight models and the US preference for proprietary models. This is not just a struggle between two different countries’ economic models but also between two different ways of deploying technology.  

Even OpenAI’s chief executive, Sam Altman, admitted earlier this year: “We have been on the wrong side of history here and need to figure out a different open-source strategy.” That’s going to be a very interesting subplot to follow. Who’s called that one right?

Further reading on the US-China competition

There’s been a lot of talk about how people may be using generative AI in their daily lives. This story from the FT’s visual story team explores the reality.

From China, FT reporters ask how long Nvidia can maintain its dominance over Chinese rivals.

When it comes to real-world uses, toy and companion devices are a novel but emerging application of AI that is gaining traction in China—and is also heading to the US. This MIT Technology Review story explored it.

The once-frantic data center buildout in China has hit walls. As sanctions and AI demand shift, this MIT Technology Review story took an on-the-ground look at how stakeholders are figuring out what comes next.

The Download: the AGI myth, and US/China AI competition

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AGI became the most consequential conspiracy theory of our time

—Will Douglas Heaven, senior AI editor 

Are you feeling it?

I hear it’s close: two years, five years—maybe next year! And I hear it’s going to solve our biggest problems in ways we cannot yet imagine. I also hear it will bring on the apocalypse and kill us all…

We’re of course talking about artificial general intelligence, or AGI—that hypothetical near-future technology that (I hear) will be able to do pretty much whatever a human brain can do.

Every age has its believers, people with an unshakeable faith that something huge is about to happen—a before and an after that they are privileged (or doomed) to live through. For us, that’s the promised advent of AGI. And here’s what I think: AGI is a lot like a conspiracy theory, and it may be the most consequential one of our time. Read the full story.

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.

The State of AI: Is China about to win the race? 

Viewed from abroad, it seems only a matter of time before China emerges as the AI superpower of the 21st century. 

In the West, our initial instinct is to focus on America’s significant lead in semiconductor expertise, its cutting-edge AI research, and its vast investments in data centers.

Today, however, China has the means, motive, and opportunity to win. When it comes to mobilizing the whole-of-society resources needed to develop and deploy AI to maximum effect, it may be rash to bet against it. Read the full story.

—John Thornhill & Caiwei Chen

This is the first edition of The State of AI, a collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday for the next six weeks, writers from both publications will debate one aspect of the generative AI revolution. Sign up to receive future editions every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China is prepared to cut its data centers a sweet deal
If they agree to use native chips over American rivals’, that is. (FT $)
+ What happened when a data center moved into a small American town. (WSJ $)
+ Microsoft and OpenAI want more power—they just don’t know how much more. (TechCrunch)
+ The data center boom in the desert. (MIT Technology Review)

2 Norway’s oil fund has rejected Elon Musk’s $1 trillion pay package
The Tesla shareholder is concerned about the size of the reward. (WSJ $)
+ It says it will vote against the deal on Thursday. (FT $)

3 OpenAI has signed a massive compute deal with Amazon
It’s the latest in a long string of blockbuster deals for the AI company. (Wired $)

4 Cybersecurity workers moonlighted as criminal hackers
They’re accused of sharing their profits with the creators of the ransomware they deployed. (Bloomberg $)
+ The hackers demanded tens of millions in extortion payments. (The Register)

5 Tech’s elites are funding plans to safeguard MAGA
Entrepreneur Chris Buskirk is using donor money to equip it to outlive Trump. (WP $)

6 These startups supply the labor to train multitasking humanoid robots
Teams of humans are doing the dirty work, including filming themselves folding towels hundreds of times a day. (LA Times $)
+ This new system can teach a robot a simple household task within 20 minutes. (MIT Technology Review)

7 LLMs can’t accurately describe their internal processes
Anthropic is on a mission to measure their so-called introspective awareness. (Ars Technica)

8 Why are people using AI to hack their hobbies?
Talk about the death of fun. (NY Mag $)
+ While we’re at it, don’t use chatbots to answer friends’ dilemmas either. (Wired $)
+ Or to write research papers. (404 Media)

9 Coca-Cola is doubling down on AI in its ads
Undeterred by criticism last year, it’s back with more for the 2025 holidays. (WSJ $)
+ Nothing says festive joy like AI slop. (The Verge)

10 Facebook Dating is a…hit?
But you should still be on the lookout for scammers. (NYT $)
+ It’s not just for boomers—younger people are using it too. (TechCrunch)
+ For better or worse, AI is seeping into all the biggest dating platforms. (Economist $)

Quote of the day

“That was the kick of it, that the AI actually did find compatibility. It was the human part that didn’t work out.”

—Emma Inge, a project manager looking for love in San Francisco, describes the trouble with using an AI matchmaker to the New York Times: it can’t stop you getting ghosted.

One more thing

Inside the most dangerous asteroid hunt ever

If you were told that the odds of something were 3.1%, it might not seem like much. But for the people charged with protecting our planet, it was huge.

On February 18, astronomers determined that a 130- to 300-foot-long asteroid had a 3.1% chance of crashing into Earth in 2032. Never had an asteroid of such dangerous dimensions stood such a high chance of striking the planet. Then, just days later on February 24, experts declared that the danger had passed. Earth would be spared.

How did they do it? What was it like to track the rising danger of this asteroid, and to ultimately determine that it’d miss us?

This is the inside story of how a sprawling network of astronomers found, followed, mapped, planned for, and finally dismissed the most dangerous asteroid ever found—all under the tightest of timelines and, for just a moment, with the highest of stakes. Read the full story.

—Robin George Andrews

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ People in the Middle Ages chose to depict the devil in very interesting ways, I’ll say that much.
+ We may be inching closer to understanding why the animal kingdom has developed such elaborate markings.
+ The music in the new game Pokémon Legends: Z-A sure is interesting.
+ Slow cooker dinners are beckoning.

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup.

The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. 

Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts. In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases. 

There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.  

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.  

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whichever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change.

Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First, much of that chemistry depends on highly reactive radicals that react with any solid surface; second, any particle may become coated by the sulfuric acid already present in the stratosphere. That coating could accelerate the loss of the protective ozone layer by spreading the existing sulfuric acid over a larger surface area.

(Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, the most studied material for SRM. But the point of using sulfate in such studies was never that it was perfect; it was that its broader climatic and environmental impacts are well understood. Sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. 

Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms. 

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns. 

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.

We don’t want to claim that every single answer lies in academia. We’d be fools not to be excited by profit-driven innovation in solar power, EVs, batteries, and other sustainable technologies. But the math for sunlight reflection is just different. Why?

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, then competitive, for-profit capitalism can work wonders.  

But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts.

The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions?

Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture. 

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?  

David Keith is a professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.