
If you listen to the heads of artificial-intelligence companies, the venture capitalists investing in them, the researchers and software engineers developing new models and apps, or the numerous technology optimists, AI is about to lead us into a glorious future.
The technology underpins Alphabet’s Nobel Prize–winning AlphaFold system, which can predict protein structures, potentially helping identify the cause of diseases or possible treatments for them. It underlies the Waymo cars that are navigating passengers around San Francisco streets. People are already using applications like OpenAI’s ChatGPT in place of Google Search to find information, compose letters, write software code, design presentations or create images.
In the near future, boosters say, the technology will take on the tedious tasks that many people perform, as well as the dangerous jobs that some workers do, allowing for safer, more creative and fulfilling jobs while making businesses more efficient and productive. It will discover treatments for the worst diseases, saving lives and reducing human suffering. It will boost education by developing personalized curricula that cater to the interests and abilities of each individual student. It will find solutions to our biggest societal problems, including climate change.
And, if you believe Dario Amodei, the CEO of Anthropic, a San Francisco–based developer of cutting-edge AI models, the technology might even help people live forever.

Jathan Sadowski
But amid these early uses and utopian promises, AI skeptics have been focusing in growing numbers on the harm the technology is inflicting on people and society here and now, and calling attention to its increasingly apparent drawbacks and shortcomings. Many have started to question whether the benefits of AI — or at least generative AI, the version of the technology that can mimic human-created writing, images and software code and that has dominated the industry since OpenAI first released ChatGPT in late 2022 — are worth these myriad downsides.
Some have concluded the unequivocal answer is no.
Count Jathan Sadowski among them. A senior lecturer in the Emerging Technologies Research Lab at Monash University’s Department of Human Centred Computing in Melbourne, Australia, Sadowski likens the promises the tech industry is making about all the future benefits of AI to check-kiting.
While insisting that we all will reap the rewards of AI sometime in the future, industry executives are cashing in now, raising tens of billions of dollars for their companies and making themselves extraordinarily wealthy and politically powerful, even as the carbon emissions, pollution, disinformation, deepfakes and other harms that AI induces or creates leave much of the rest of society worse off.
“I think the answer is it’s not worth it,” Sadowski says. “The consequences are very clear and immediate.”
By contrast, he says, the benefits are “all speculative.”
How we as a country and the world at large weigh AI’s benefits versus its costs will have enormous implications for San Francisco and the Bay Area. The City has become ground zero for AI development. It’s home not only to hundreds of nascent companies in the space, but also to the two most valuable startups in the industry — OpenAI and Anthropic. Thanks in part to these two companies’ gargantuan fundraising efforts, San Francisco has drawn an outsized portion of the venture capital dollars flooding into AI.
While much of that money is being used to buy or rent computing time on Nvidia’s pricey AI chips, a substantial portion is helping buoy San Francisco’s economy. The big AI providers have been rapidly expanding their workforces. And to the extent that interest in office space is picking up in San Francisco, it’s in no small part due to the AI boom.
For example, Databricks, which offers data storage and analysis services for AI companies, recently announced plans to double its presence downtown and invest $1 billion in the City over the next three years. All told, generative AI companies accounted for 15 percent of all space leased in San Francisco last year, according to real estate brokerage Cushman & Wakefield, which projects such companies will lease about 5.2 million square feet in the City in the next two years — double the amount they leased in the last three.
That AI companies like Databricks are investing heavily in San Francisco “is really important right now, because we’ve obviously gone through a lot in our economic recovery,” says Abby Raisz, a research director at the Bay Area Council Economic Institute.
Local economic benefits of the AI boom aside, the technology’s skeptics and critics are focusing on its downsides. In his book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, Gary Marcus, an expert on the technology, listed 12 near-term risks and problems with generative AI. Among them: its ability and tendency to create misinformation and disinformation, biases built into the technology that can be used to discriminate against individuals or groups, its proclivity to make mistakes and its impact on the environment.
By contrast, the advantages generative AI has provided — things like helping develop software code or helping people brainstorm — so far have been modest at best, says Marcus, a New York University professor emeritus and a founder of Geometric Intelligence, a machine-learning startup acquired by Uber in 2016.
The “intellectually honest” answer to the question of whether those advantages of generative AI outweigh the harm is no, he says. It “doesn’t look that great, if you ask me.”
For Marcus and other critics, one of AI’s most prominent and worrisome downsides is how the use of the technology is fueling climate change. AI models such as OpenAI’s new GPT-4.5 are typically trained and run on powerful computer processors in large data centers that require enormous amounts of electricity. As such models have gotten bigger and more popular, training and running them have required more and more power. Electricity market experts are projecting that the share of U.S. electricity used by data centers will soar in coming years, from about 4 percent in 2022 to potentially 12 percent in 2030, thanks largely to AI.
Many of the data centers in which AI models are being trained and run are powered by natural gas or coal generators. And even when data centers run on renewable power, that doesn’t necessarily limit carbon emissions. Because data centers running AI demand so much electricity, and that demand has been increasing so rapidly, utilities have been keeping open coal plants they previously planned to close and are talking about building new natural gas plants to ensure there’s enough energy on the grid to serve all customers.
Thanks to such factors, two of the biggest data center operators, Google and Microsoft, said last year their carbon emissions had risen in 2023, despite promises to cut them.
But AI’s environmental impact goes beyond carbon emissions.
Water is often used to cool data centers and the power plants that provide their energy; when it’s used that way, it evaporates and doesn’t return to the local watershed. Water is also used in manufacturing AI chips and the computers they run in. With AI models getting bigger, being used more often and requiring ever more AI chips, they’re driving the evaporation of growing amounts of water, according to a 2023 paper from a team of researchers at the University of California, Riverside, and the University of Texas at Arlington.
Meanwhile, the fossil fuel power plants and diesel generators used to provide primary and backup energy to AI data centers generate copious amounts of pollutants such as sulfur dioxide and nitrogen dioxide. That air pollution is already leading to hundreds of premature deaths and billions of dollars in health-related costs, according to a 2024 paper from a research team at UC Riverside and the California Institute of Technology.
“This is something that really resonates with people, that AI has this massive ecological consequence,” Sadowski says.
Many AI optimists push back, arguing the technology will find ways of solving climate change and other environmental problems.

James Landay, co-director of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI): “I’m not worried about headline-grabbing things like AI taking over the world and launching nuclear weapons or stuff like that.”
Craig Lee/The Examiner
What’s more, the idea that AI’s power demands will keep growing at recent rates is unrealistic, says James Landay, co-director of the Stanford Institute for Human-Centered Artificial Intelligence. AI companies have an economic incentive to use power more efficiently, and the recently released DeepSeek model showed that robust models can be built with less power and therefore at a lower cost, he says. Many companies in the industry are already following DeepSeek’s lead, working on ways to make their models more efficient.
“A lot of the predictions about how much energy AI will use in 10 years or 15 years are just way misguided,” he says.
Whether or not AI models become more efficient, the idea that the technology will eventually help the environment is wishful thinking, skeptics say.
“So far there’s very little substance to all of that,” says Alex de Vries, founder of Digiconomist, an online publication that focuses on technology’s environmental impact.
“These models — they’re not miracle machines,” he says.
Right now, AI companies appear to be inflicting environmental damage without giving it much thought, says Alex Hanna, the director of research at the Distributed AI Research Institute based in Oakland. In an effort to popularize the technology, they’re shoehorning it into places where people don’t necessarily want or need it, she says. Google is topping its search results with AI-powered answers, and both it and Microsoft are bundling AI features into their office-software suites. As a result, perhaps hundreds of millions of people are being forced to use AI daily with consequent environmental costs, she says.

Alex Hanna, PhD, Director of Research, DAIR Institute, pictured at her home with her cat Ana in the Bay Area.
Craig Lee
If the AI industry really wanted the technology to offer a net benefit to the environment — or at least limit the damage it was causing — companies would be more conscientious about how they’re using and implementing it, Hanna says.
Companies “would actually be concerned about the externalities,” she says.
The case against generative AI, say critics, goes well beyond its environmental consequences. They argue the technology is just not very good at what it supposedly does well — generate new content, whether text, software code or images. What it’s good at is producing things that are plausible or convincing to the layperson but that, to an expert, are flawed or erroneous. Those in the industry have dubbed those errors “hallucinations,” but numerous critics liken them to “bulls—.”
That’s because the errors are a result of how the technology fundamentally works. Generative AI models don’t actually “know” anything the way a human does. Instead, they’re essentially probability engines. When generating a line of text, say, an AI model draws on its training data to determine which word is most likely to come after the ones before it. In many cases, the result might be something that makes sense or sounds correct, but it’s not based on real-world knowledge.
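To make the “probability engine” idea concrete, here is a deliberately tiny sketch in Python. It is a toy illustration under simplifying assumptions, not how any commercial model actually works: the “model” is just a table counting which words follow which in a scrap of training text, and “generation” is nothing more than repeatedly sampling a statistically likely next word.

```python
# Toy next-word predictor (illustrative only; real LLMs use neural networks
# trained on vast text corpora, but the basic idea of sampling a likely
# next word is the same).
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat ate the fish".split()

# Count, for each word, how often every other word follows it.
next_word_counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(start_word, length=8):
    """Produce fluent-sounding text by repeatedly picking a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = next_word_counts.get(word)
        if not candidates:
            break
        words, weights = zip(*candidates.items())
        word = random.choices(words, weights=weights)[0]  # sample by frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat" (plausible, but knowledge-free)
```

A real large-language model does the same kind of next-word guessing at vastly greater scale, with a neural network standing in for the frequency table, which is why its output can sound authoritative while remaining untethered from facts.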
“The AI that we know … is essentially a faking machine,” says Dan McQuillan, a lecturer in creative and social computing at Goldsmiths, University of London.
What that means is that AI’s outputs often can’t be trusted. There have been numerous stories about models citing research or court cases or books that don’t exist, or advising people to do things like eat rocks. They also often can’t do basic things like determine the number of Rs in “strawberry.”

Damien P. Williams is an assistant professor at the University of North Carolina at Charlotte, where he studies how AI and other technologies are shaped by the values, societies and beliefs of their designers.
Kat Lawrence
Such errors present an obvious problem if a model is asked to give health or legal advice. But it’s a worry even for software engineers or students using ChatGPT or similar systems to generate code or write essays, areas where such tools have quickly gained traction.
Damien P. Williams, an assistant professor at the University of North Carolina at Charlotte, says his colleagues in the university’s School of Data Science who have used generative AI for coding are constantly on alert for such errors. Many will only use such tools with programming languages they are familiar with so they can easily detect mistakes, he says.
“Then that tells us that it’s not something that should be in the driver’s seat of developing code,” Williams says.
But the problem is much broader than that, he and other critics say.
One of the major advances that came with generative AI — and specifically, large-language models, or LLMs, which focus on text and written language — is the ability of computers to communicate with people via natural language. You can ask ChatGPT a regular question, and it will answer you in complete sentences, rather than giving you just a list of links.
That such systems can converse with people in natural language is truly impressive and marks a significant advance in computer-human interaction, Williams says. But the fact that the responses can’t be trusted to be accurate undermines it, he says, asking, “A large-language model that can converse fluidly and very confidently about factually incorrect statements does me what good?”
Precisely because of its ability to create plausible text and images, its ability to interact with people using natural language and its capability of handling and sifting through immense amounts of data, generative AI has enormous power to deceive and cause other social harm, critics say.
Among the bigger dangers they see from generative AI: creating disinformation, propagating online scams or helping thieves and crooks find cybersecurity vulnerabilities, perpetuating biases and creating nonconsensual deepfakes.
Again, these aren’t theoretical. South Korea in particular in recent months has seen a spate of deepfake porn targeting young women and girls. In January of last year, voters in New Hampshire got a robocall featuring a voice that sounded like Joe Biden’s telling them not to vote in the state’s primary. The voice was generated by a political consultant using AI — and wasn’t authorized by Biden. And security researchers and the FBI say that AI-powered cyberattacks are on the rise.
Even when it’s not being used for such explicitly malicious purposes, the technology still harms people, critics say. It’s being used to determine who can get insurance and how much they should pay. Critics have charged it’s being used to deny health insurance claims.
“AI for me isn’t very good at the things I would value but is good at … identifying, segregating and dividing groups of people,” says McQuillan, author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence.
Critics say AI poses a particular danger in education. For years, philanthropists and supposed education reformers have been pushing the idea of personalized education, where technology would be used to figure out what kids want to learn and how they could learn it, and to develop an individualized curriculum for each student, says Benjamin Riley, the Austin-based founder of Cognitive Resonance, a think tank focused on generative AI and understanding human cognition. Those advocates have latched on to AI as a way to design and deliver such instruction, he says.

Benjamin Riley
The problem is that personalized education fundamentally doesn’t work, Riley says. Humans are social creatures. They learn by interacting with and responding to one another and their teachers. When a student gets an answer wrong, a good teacher can work with them to figure out their thinking and use that insight to steer them in the right direction, he says, adding that’s something AI can’t do.
The problem of AI’s errors shows up in education too. If a student asked to solve the problem “4+5” answers “11-2,” an AI-powered tutoring system likely won’t recognize that as a correct response, even though both equal 9, Riley says.
What’s more, kids by nature don’t like learning from a chatbot, he says. They treat it as many people do an automated call-answering system — they want to avoid it and talk to a real person — but AI boosters are pushing the technology in education anyway, he says.
Elon Musk “is saying AI literally can replace teachers, … which is f—ing ludicrous,” Riley says.
A related danger is that AI will impair critical thinking skills, Riley says, as developing and maintaining the ability to think critically requires building up knowledge and struggling with new ideas and thoughts. Akin to physical exercise, keeping your mind fit requires forcing it to do things that aren’t necessarily easy. But generative AI makes such tasks simple to avoid, whether taking notes, summarizing a document or even writing an essay. Williams notes that three research papers in recent months warned of the same issue.
“When we offload too much of our task work to large-language-model-type AI systems, we actually damage our ability to think critically and to be careful in our skills development, because we are not foundationally building those skills,” he says.
The technology has or could have other negative impacts, particularly on people’s work and jobs. Many of the major generative AI models were trained on writing, images and art, generally without the consent of the owners or creators of those works and without offering them any kind of compensation. Now the companies behind those models are turning around and trying to profit off the models they built. In many cases they’re doing so by offering services that directly compete with those offered by the people who created the data their technology trained on.
AI researcher Suchir Balaji thought that that business model was so profoundly unfair and damaging to those whose works AI was built on that he left OpenAI last summer. Balaji planned to testify against the company in a copyright infringement lawsuit filed against it by the New York Times, but the 26-year-old died in November in what’s been ruled a suicide.
“This is not a sustainable model for the internet ecosystem as a whole,” he told the Times before he died.
Even optimists like Landay worry that AI adoption will result in massive job losses. Those designing and backing the technology have, at times, been explicit in stating that one of their motivations is to eliminate or reduce labor from scores of fields. In a tweet earlier this year, Andreessen Horowitz partner Marc Andreessen lauded the idea that AI will “crash” wages, arguing that would be necessary to enable a “consumer cornucopia” where goods and services sell for near-zero prices.
Many in the industry have been touting the idea that AI will soon — possibly as soon as this year, if you believe Times columnist Kevin Roose — surpass human-level intelligence and reach artificial general intelligence, or AGI. Critics, though, generally dismiss the idea that AGI is coming anytime soon.
The near-term danger isn’t that AI will be smarter than most humans or even particularly good at what it’s asked to do, Williams says. It’s that it will be significantly less expensive than paying people — or at least offer that promise.
“In many cases, cheaper and worse is just fine for a lot of people who are worried about the cheaper and not the quality at all,” he says. It’s a very real and serious possibility that AI will be adopted widely “specifically to, in fact, try to save on labor … ‘save on labor’ being a euphemistic way of saying, ‘fire a whole lot of people.’”
While optimists and enthusiasts argue that the benefits of AI will be shared widely, many skeptics doubt that will happen. In the U.S. and developed countries in general, wealth and income have been increasingly concentrated at the top in recent decades. The tech industry — whose products have undermined the pay and jobs of unskilled workers and which is dominated by a handful of companies that produce massive profits — has played a significant role in that expanding inequality.
Government regulation is going to be needed to ensure AI benefits more than just the developers of the technology, Landay and others say. But with industry figures like Sam Altman and Musk cozying up to President Donald Trump and pushing for less regulation, and Trump pushing for more tax breaks that would largely benefit corporations and the wealthy, things are heading in the opposite direction.
“At some point, we’re going to have to sort out as a society whether we’re comfortable with the Trumpian regime, where all the spoils go to a few people, or whether we’re going to try to make a more just society that is beneficial to larger numbers of people,” Marcus says. “Right now, I’m not optimistic.”
Despite his skepticism about generative AI, Marcus is optimistic about AI in general. There are versions of the technology that could offer AI’s promised benefits without the hallucinations and other shortcomings of LLMs and generative AI, he says.
Marcus in particular touts neurosymbolic AI, which is a hybrid of the more traditional version of the technology and the newer version that underlies generative AI and LLMs. Jeff Hawkins, the cofounder of mobile computing pioneer Palm, has been pushing the notion that the way to get truly human-like AI is to train such systems by using sensors that allow them to experience the real world.
But there’s so much money and effort going into generative AI that these and other promising types of the technology are being pushed aside, Marcus says.
Even with the massive investment, progress appears to be slowing. Since it released GPT-4 in 2023, for example, OpenAI has reportedly been working on the next major version of its AI model. The company was widely expected to release that model, GPT-5, last year.
It missed that deadline and instead in February of this year released GPT-4.5, which the company itself initially acknowledged didn’t represent a major new release and said would likely perform worse than some of its prior models. That was essentially an admission that its effort to create GPT-5 failed, Marcus says.
There’s a growing consensus among AI researchers that LLMs aren’t the way to get to AGI, Marcus says. But people in the industry don’t want to hear that. As long as they keep people believing the idea that what they need to reach AGI is more data and bigger models, they can continue to raise money and prop up their companies’ valuations, he says.
But the billion-dollar bets investors are making on AI aren’t going to pay off, Marcus warns. At best, companies developing generative AI are racing to tie, he says. If many companies basically offer similar technology, no one’s going to profit much off of it. When that realization sets in, valuations are going to crash and the investors who put their money into venture funds — frequently institutional investors like CalPERS and other pension funds — are going to be left holding the bag, he says.
How the industry is shaping up is “depressing for people like me who actually want AI to succeed,” he says.
So if you ask Marcus and other skeptics whether generative AI is worth the costs — financial, environmental, social, educational and more — they’ll say no.
As Hanna says, “I certainly don’t think so. … It’s 100 percent not worth the trade-off.”