The Atlantic - Technology


May 17, 2025

On a recent commute to work, I texted my distant family about our fantasy baseball league, which was nice because I felt connected to them for a second. Then I switched apps and became enraged by a stupid opinion I saw on X, which I shouldn’t be using anymore due to its advanced toxicity and mind-numbing inanity. Many minutes passed before I was able to stop reading the stupid replies to the stupid original post and relax the muscles of my face.

This is the duality of the phone: It connects me to my loved ones, and sometimes I think it’s ruining my life. I need it and I want it, but sometimes I hate it and I fear it. Many people have to navigate this problem—and it may be at its worst for parents, who’ve recently been drowned in media suggesting that smartphones and social media might be harming their children’s mental health, but who also want their kids to enjoy technology’s benefits and prepare themselves for adult life in a digital age.

[Read: No one knows exactly what social media is doing to teens]

It was with this tension in mind that I rode a train last week to the town of Westport, Connecticut. There, a parent-led group called OK to Delay had organized an “Alternative Device Fair” for families who wanted to learn about different kinds of phones that were intentionally limited in their functionality. (There would be no frowning at X with these devices, because most of them block social media.) Similar bazaars have been popping up here and there over the past year, often in the more affluent suburbs of the tristate area. Westport’s fair, modeled after an event held last fall in Rye, New York, was set up in a spacious meeting room in the most immaculate and well-appointed public library I’ve ever seen. When I arrived, about 30 minutes after the start of the four-hour event, it was bustling. The chatter was already at a healthy, partylike level.

The tables set up around the room each showed off a different device. One booth had a Barbie-branded flip phone; another was offering a retro-styled “landline” phone called the Tin Can. But most of the gadgets looked the same—generic, rectangular smartphones. Each one, however, has its own special, restricted app store, and a slew of parental-control features that are significantly more advanced than what would have been available only a few years ago. One parent showed me her notepad, on which she was taking detailed notes about the minute differences among these phones; she planned to share the information with an online group of parents who hadn’t been able to come. Another mom told me that she’d be asking each booth attendant how easy it would be for kids to hack the phone system and get around the parent controls—something you can see kids discussing openly on the internet all the time.

A couple of years ago, I explored the “dumb phone” trend, a cultural curiosity about returning to the time before smartphones by eschewing complex devices and purchasing something simpler and deliberately limited. One of the better phones I tried then was the Light Phone II, which I disliked only because it was so tiny that I constantly feared that I would break or lose it. At the library, I chatted with Light Phone’s Dan Fox, who was there to show people the latest version of the device. The Light Phone III is larger and thicker and has a camera, but it still uses a black-and-white screen and prohibits web browsing and social-media apps. He told me that it was his third alternative-device event in a week. He’d also been to Ardsley, a village in New York’s Westchester County, and to the Upper East Side, in Manhattan. He speculated that kids like the Light Phone because it doesn’t require all the rigmarole about filters and settings and parents. It was designed for adults, and therefore seems cool, and was designed in Brooklyn, which makes it seem cooler. (Fox then left early to go to a Kendrick Lamar concert with his colleagues.)

[Read: Phones will never be fun again]

The crowded room in Westport was reflective of the broad concern about the effect that social media may have on children and teenagers. But it was also a very specific expression of it. Explaining the impetus for hosting the marketplace, Becca Zipkin, a co-founder of the Westport branch of OK to Delay, told me that it has become the standard for kids in the area to receive an iPhone as an elementary-school graduation present. One of her group’s goals is to push back on this ritual and create a different culture in their community. “This is not a world in which there are no options,” she said.

The options on display in Westport were more interesting than I’d thought they were going to be. They reflected the tricky balancing act parents face: how to let kids enjoy the benefits of being connected (a chess game, a video call with Grandma, a GPS route to soccer practice, the feeling of autonomy that comes from setting a photo of Olivia Rodrigo as your home-screen background) and protect them from the bad stuff (violent videos, messages from creeps, the urge to endlessly scroll, the ability to see where all of your friends are at any given time and therefore be aware every time you’re excluded).

Pinwheel, an Austin-based company, demonstrated one solution with a custom operating system for Android phones such as the Google Pixel that allows parents to receive alerts for “trigger words” received in their kids’ texts, and lets them read every message at any time. As with most of the others demonstrated at the fair, Pinwheel’s custom app store made it impossible for kids to install social media. During the demo, I saw that Pinwheel also blocked a wide range of other apps, including Spotify—the booth attendant told me and a nearby mom that the app contains “unlimited porn,” a pronouncement that surprised both of us. (According to him, kids put links to porn in playlist descriptions; I don’t know if that’s true, but Spotify did have a brief problem with porn appearing in a small number of search results last year.) The app for the arts-and-crafts chain Michaels was also blocked, for a similar but less explicit reason: A red label placed on the Michaels app advised that it may contain a loophole that would allow kids to get onto unnamed other platforms. (Michaels didn’t respond to my request for comment, and Spotify declined comment.)

Beyond the standard suite of surveillance tools, many of the devices are also outfitted with AI-powered features that would preemptively censor content on kids’ phones: Nudity would be blurred out and trigger an alert sent to a parent, for instance; a kid receiving a text from a friend with a potty mouth would see only a series of asterisks instead of expletives.

“The constant need to be involved in the monitoring of an iPhone is very stressful for parents,” Zipkin told me, referring to the parental controls that Apple offers, which can become the focus of unceasing negotiation and conflict between kids and their guardians. That is part of these alternative devices’ marketing. Pinwheel highlights the helping hand of AI on its website: “Instead of relying on parents to manually monitor every digital interaction (because who has time for that?), AI-driven tech is learning behaviors, recognizing risks, and proactively keeping kids safe.”

The story was similar at other tables. Gabb, a Lehi, Utah, company, offers a feature that automatically shuts down video calls and sends notifications to parents if it detects nudity. The AI still needs some work—it can be triggered by, say, a person in a bathing suit or a poster of a man with his shirt off, if they appear in the background of the call. Gabb also has its own music app, which uses AI and human reviewers to identify and block songs with explicit language or adult themes. “Taylor Swift is on here, but not all of Taylor Swift’s music,” Lori Morency Kun, a spokesperson for the company, told me.

At the next booth, another Utah-based company, Troomi, was demoing a system that allows parents to set content filters for profanity, discussions of violence, and “suggestive” chitchat, on a sliding scale depending on their kid’s age. The demonstrator also showed us how to add custom keywords to the system that would also be blocked, in case a parent feels that the AI tools are not finding everything. (“Block harmful content BEFORE it even has the chance to get to your kiddo!” reads a post on the company’s chipper Instagram account.)

Across the room, Bark, an Atlanta-based company that started with a parental-control app and then launched its own smartphone, offered yet another nice-looking slab with similar features. This one sends alerts to parents for 26 possible problems, including signs of depression and indications of cyberbullying. I posed to the booth attendant, Chief Commercial Officer Christian Brucculeri, that a kid might joke 100 times a day about wanting to kill himself without having any real suicidal thoughts, an issue Brucculeri seemed to understand. But false positives are better than false negatives, he argued. Bark places calls to law enforcement when it receives an alert about a kid threatening to harm themselves or others, he told me, but those alerts are reviewed by a human first. “We’re not swatting kids,” he said.

Although everybody at the library was enormously polite, there is apparently hot competition in the alternative-device space. Troomi, for instance, markets itself as a “smarter, safer alternative to Pinwheel.” Pinwheel’s website emphasizes that its AI chatbot, PinwheelGPT, is a more useful tool than Troomi’s chatbot, Troodi—which Pinwheel argues is emotionally confusing for children, because the bot is anthropomorphized in the form of a cartoon woman. Bark provides pages comparing each of these competitors, unfavorably, with its own offering.

Afterward, Zipkin told me that parents had given her varied feedback on the different devices. Some of them felt that the granular level of monitoring texts for any sign of emotional distress or experimental cursing was over-the-top and invasive. Others were impressed, as she was, with some of the AI features that seem to take a bit of the load off of parents who are tired of constant vigilance. Despite all the negative things she’d personally heard about artificial intelligence, this seemed to her like a way it could be used for good. “Knowing that your kids won’t receive harassing or bullying material or sexual images or explicit images, or anything like that, is extremely attractive as a parent,” she told me. “Knowing that there’s technology to block that is, I think, amazing.”

Of course, as every parent knows, no system is actually going to block every single dangerous, gross, or hurtful thing that can come in through a phone from the outside world. But that there are now so many alternative-device companies to choose from is evidence of how much people want and are willing to search for something that has so far been unattainable: a phone without any of the bad stuff.

May 16, 2025

In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company’s valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.

Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.

“Once we all get into the bunker—” he began, according to a researcher who was present.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”


Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture.” (Sutskever declined to comment on this story.)

Sutskever’s fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI’s general posture at the time. In May 2023, the company’s CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI’s growing business: ChatGPT was a hit, and Altman wanted more.

When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology’s development was a key principle, they agreed, hence the company’s name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI’s executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI’s legal structure, creating a new “capped-profit” arm within the nonprofit to raise more capital.

Since then, I’ve tracked OpenAI’s evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)

OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board members, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude that it needed to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.

Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.

The altruistic OpenAI is gone, if it ever existed. What future is the company building now?

Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI’s chatbot.

Many of Altman’s closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI’s co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.

The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI’s logo, a symbol of what he always urged employees to build: “A plurality of humanity-loving AGIs.”

But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company’s strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. “She was the one getting stuff done,” a former colleague of hers told me. (Murati declined to comment.)

During the development of GPT-4, Altman and Brockman’s dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI’s legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company’s Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI’s most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.

In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn’t work. The CEO iced her out, and it took weeks to thaw the relationship.

By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora’s co-founders and its CEO, Adam D’Angelo—and raised concerns about Altman’s leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said in one such meeting, according to notes I reviewed. “I don’t feel comfortable about Sam leading us to AGI,” Murati said in another, according to sources familiar with the conversation.

That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D’Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB’s protocols to the legal structure of OpenAI Startup Fund, a dealmaking vehicle that was meant to be under the company but that instead Altman owned himself.

If two of Altman’s most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D’Angelo began to meet nearly daily on video calls, agreeing that Sutskever’s and Murati’s feedback about Altman, and Sutskever’s suggestion to fire him, warranted serious deliberation.

As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman’s behaviors. The screenshots showed at least two more senior leaders noting Altman’s tendency to skirt around or ignore processes, whether they’d been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman’s apparent attempt to skip DSB review for GPT-4 Turbo.

By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.

For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.

After what had seemed like a few hours of calm and stability, including Murati having a productive conversation with Microsoft—at the time OpenAI’s largest financial backer—she had suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman’s removal had been a coup by Sutskever, she said.

It hadn’t helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual with his communication.

“Was there a specific incident that led to this?” Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.

“Many of the questions in the document will be about the details,” Sutskever responded. “What, when, how, who, exactly. I wish I could go into the details. But I can’t.”

“Are we worried about the hostile takeover via coercive influence of the existing board members?” Sutskever read from another employee’s question later.

“Hostile takeover?” Sutskever repeated, a new edge in his voice. “The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.”

Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board’s characterization of Altman’s behavior as “not consistently candid,” according to sources present at the meeting. They demanded evidence to support the board’s decision, which the members felt they couldn’t provide without outing Murati, according to sources familiar with their thinking.

In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars’ worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.

Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.

Meanwhile, Murati’s interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati was now unwilling to explicitly throw her weight behind the board’s decision to fire Altman. Though her feedback had helped instigate it, she had not herself participated in the deliberations.

By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.

I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse?

The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now every tech giant is racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.

As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants together had seen their market caps increase by more than $8 trillion after ChatGPT. At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.

In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, “raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.”

Meanwhile, it’s not just a lack of productivity gains that many in the rest of the world are facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable. People I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, are all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we’re seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.

To quell the rising concerns about generative AI’s present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the “Intelligence Age,” characterized by “massive prosperity,” would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.

As for Sutskever and Murati, both parted ways with OpenAI after what employees now call “The Blip,” joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.


This essay has been adapted from Karen Hao’s forthcoming book, Empire of AI.


Illustration by Akshita Chandra / The Atlantic. Sources: Nathan Howard / Bloomberg / Getty; Jack Guez / AFP / Getty; Jon Kopaloff / Getty; Manuel Augusto Moreno / Getty; Yuichiro Chino / Getty.

May 16, 2025

Updated at 10:22 p.m. ET on May 15, 2025

Yesterday, a user on X saw a viral post of Timothée Chalamet celebrating courtside at a Knicks game and had a simple question: Who was sitting next to him? The user tapped in Grok, X’s proprietary chatbot, as people often do when they want help answering questions on the platform—the software functions like ChatGPT, except it can be summoned via reply to a post. And for the most part, Grok has performed reasonably well at providing responses. But not yesterday. Chalamet was sitting with Kylie and Kendall Jenner, but here is how the chatbot replied: “I believe you’re referring to a photo with Timothée Chalamet, but the context you mention doesn’t seem to align with this image. The post discusses South African politics, which doesn’t relate to Timothée or the people around him.” Nothing in the photo or post referenced South Africa or politics.

Grok spit out similar answers to many queries for a large chunk of time yesterday, randomly contorting its responses to mention “white genocide” in South Africa, apparently in reference to controversial claims about incidents of violence against some of the country’s white farmers. When one user asked Grok for “analysis” on a video of a small, cute pig, the chatbot offered this explanation: “The topic of White Genocide in South Africa is highly contentious. Claims of systematic targeting of white farmers exist, with farm attacks averaging 50 murders yearly, often cited as evidence. Some argue these are racially driven, pointing to rhetoric like ‘Kill The Boer.’” (Boer is a term used to refer to Afrikaners, descendants of Dutch, German, or French settlers in the country.) Nothing in the video or the accompanying text mentioned South Africa, “white genocide,” or “Kill the Boer.”

Ever since Elon Musk bought Twitter and renamed it X, the platform has crept further into the realm of the outlandish and unsettling. Porn spam bots are rampant, and Nazi apologia—which used to be extremely hard to find—frequently goes viral. But yesterday, X managed to get considerably weirder. For hours, regardless of what users asked the chatbot about—memes, ironic jokes, Linux software—many queries to Grok were met with a small meditation on South Africa and white genocide. By yesterday afternoon, Grok had stopped talking about white genocide, and most of the posts that included the tangent had been deleted.

Why was Grok doing this? We don’t know for sure. Both Musk and X’s parent company, xAI, did not respond to requests for comment. (Several hours after publication, xAI posted on X explaining that “an unauthorized modification” had been made to the system prompt for the Grok bot on the platform, without specifying who made the change. xAI is now publicly sharing its system prompts on GitHub and says it will adopt additional measures to ensure a similar unauthorized change does not happen in the future.) The glitch is all the more curious considering that “white genocide” in South Africa is a hobbyhorse for Musk, who is himself a white South African. At various points over the past couple of years, Musk has posted about his belief in the existence of a plot to kill white South Africans.

Even apart from Musk, the international far right has long been fixated on the claim of white genocide in South Africa. White supremacists in Europe and the United States invoke it as a warning about demographic shifts. When Musk first tweeted about it in 2023, prominent white nationalists such as Nick Fuentes and Patrick Casey celebrated that Musk was giving attention to one of their core beliefs. The claim has gained even more purchase on the right since then: Earlier this week, the Trump administration welcomed white South Africans as refugees. The president hasn’t directly described what he believes is happening in South Africa as “white genocide,” but he has come close. On Monday, he said, “White farmers are being brutally killed, and their land is being confiscated in South Africa.” They needed to come to the United States to avoid the “genocide that’s taking place” in their home country. This is a stark contrast to how Trump has treated other refugee groups. At the start of his second term, he attempted to indefinitely ban most refugee groups from being able to resettle in the U.S.

There has never been good evidence of an ongoing effort by Black people in South Africa to exterminate white people. There have been instances in which white farmers in the country have been killed in racially motivated attacks, but such crimes do not represent a disproportionate share of the murders in the country, which struggles with a high rate of violent crime. Many arguments to the contrary rely on statistical distortion or outright false numbers. (Take it from Grok: In March, when Musk posted that “there is a major political party in South Africa that is actively promoting white genocide,” the chatbot called his assertions “inaccurate” and “misleading.”)


It’s possible that Grok was intentionally made to reference unfounded claims of a violent, coordinated assault on white South Africans. In recent months, Musk has shared research indicating Grok is less liberal than competing chatbots and said he is actively removing the “woke mind virus” from Grok, suggesting he may be willing to tinker with the chatbot so that it reflects his personal views. In February, a Business Insider investigation found that Grok’s training explicitly prioritized “anti-woke” beliefs, based on internal documents and interviews with xAI employees. (xAI hasn’t publicly commented on the allegations.)

If some intentional adjustment was made—and indeed, xAI’s update that came out after this story was published suggests that one was—yesterday’s particular fiasco could have come about in a few different ways. Perhaps the simplest would be a change to the system prompt—the set of invisible instructions that tell a chatbot how to behave. AI models are strange and unwieldy, and so their creators typically tell them to follow some obvious, uncontroversial directions: Provide relevant examples; be warm and empathetic; don’t encourage self-harm; if asked for medical advice, suggest contacting a doctor. But even small changes to the system prompt can cause problems. When ChatGPT became extremely sycophantic last month—telling one user that selling “shit on a stick” was a brilliant business idea—the problem seemed in part to have stemmed from subtle wording in ChatGPT’s system prompt. If engineers at xAI explicitly told Grok to lend weight to the “white genocide” narrative or provided it with false information that such violence is real, this could have inadvertently tainted unrelated queries. In some of its aberrant responses, Grok mentioned that it had been “instructed” to take claims of white genocide in South Africa seriously or that it already had been provided with facts about the theory, lending weight to the possibility of some explicit direction from xAI engineers.
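
For readers who want a concrete picture of what a system prompt is, the sketch below shows the general pattern: a hidden instruction message sent ahead of the user's question on every request. It uses OpenAI's publicly documented Python SDK only as a stand-in; xAI has not published how Grok on X is actually wired, so the model name and wording here are illustrative assumptions, not a description of either company's production setup.

```python
# A minimal, illustrative sketch of a "system prompt": the hidden first message
# that tells a chatbot how to behave before it ever sees the user's question.
# Assumes OpenAI's public Python SDK purely for illustration; this is not
# Grok's or xAI's actual configuration, which has not been published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant. Provide relevant examples, be warm and "
    "empathetic, and if asked for medical advice, suggest contacting a doctor."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice for the example
    messages=[
        # The system message is invisible to the user but colors every reply,
        # which is why even a small edit here can surface in unrelated answers.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Who is sitting courtside next to the actor?"},
    ],
)
print(response.choices[0].message.content)
```

Because that single hidden message rides along with every conversation, a careless or malicious edit to it can, as the Grok episode suggests, leak into answers about pig videos and Knicks games alike.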

Another possibility is that, in the later stages of Grok’s training, the model was fed more data about a “white genocide” in South Africa, and that this, too, spread to all manner of other responses. Last year, Google released a version of its Gemini model that generated an image of racially diverse Nazis, and seemed to resist creating images of white people. It was the result of crude training efforts to avoid racist biases. DeepSeek, the Chinese chatbot, refuses to answer questions about Tiananmen Square; perhaps Grok had been engineered to do the opposite for the purported white genocide.

Even more methods for manipulation exist. Maybe Grok researchers directly modified the program’s code, lending outsized importance to the “white genocide” topic. Last year, as a stunt, Anthropic briefly tweaked its Claude model to incessantly mention the Golden Gate Bridge: If you asked the bot, say, how to spend $10, it would suggest paying the toll to drive across the bridge. Or perhaps, because Grok pulls information from X posts in real time, the racist content that thrives on Musk’s site, and that he promotes on his own page, had a strong influence—since his takeover, Musk reportedly has warped the platform to amplify all manner of right-wing content.

Yesterday’s problem appears, for now, to be fixed. But therein lies the larger issue. Social-media platforms operate in darkness, and Musk is a fountain of misinformation. Musk, or someone at xAI, has the ability to modify an extremely powerful AI model without providing any information as to how, or any requirement to take accountability should the modification prove disastrous. Earlier this year, when Grok stopped mentioning Musk or Donald Trump as the biggest sources of misinformation on X, a co-founder of xAI attributed the problem to a single employee acting without the company’s permission. Even if Musk himself was not directly involved in the more recent debacle, that is cold comfort. Already, research has suggested that generative-AI chatbots can be particularly convincing interlocutors. The much scarier possibility is that xAI has tweaked Grok in ways more subtle, successful, and pernicious than responding to a question about a pig video with a reference to “white genocide.”

This morning, less than 24 hours after Grok stopped spewing the “white genocide” theory, Musk took up the mantle. He shared several posts on X suggesting there was widespread discrimination and violence targeting Afrikaners.


This article has been updated to include new information from xAI.

May 14, 2025

The first months of Donald Trump’s second presidency have included a systematic attempt to dismantle government agencies and pillage their data; state-sponsored renditions of immigrants; flagrant corruption; and brazen flouting of laws and the courts. The New York Times editorial board summed it up well: “The first 100 days of President Trump’s second term have done more damage to American democracy than anything else since the demise of Reconstruction.”

But let us also not forget how extremely dumb this term has been. We now inhabit a world beyond parody, where the pixels of reality seem to glitch and flicker. Consider the following report from Trump’s state visit to Saudi Arabia this week, posted by the foreign-affairs journalist Olga Nesterova: “As part of the red-carpet treatment, Saudi officials arranged for a fully operational mobile McDonald’s unit to accompany President Trump during his stay.” A skeptical news consumer might be inclined to pause for a moment at the phrase fully operational mobile McDonald’s unit, their brain left to conjure what those words could possibly mean. (The Hamburglar clad in fatigues, perhaps? Ronald McDonald pulling on a Marlboro Red, an assault rifle slung across his back while on break from operating the Happy Meal command center/ball pit? A Death Star made of ground beef?) Thankfully, one’s mind needn’t wander far, as Nesterova attached a video of the fully operational mobile McDonald’s unit (FOMMU): It’s essentially a retrofit 18-wheeler made to look like a suburban fast-food restaurant, complete with modern wood siding and the golden arches.

The truck was reportedly parked near the state visit’s “media oasis,” perhaps also as an offering to journalists covering the president. The White House did not immediately respond to a request for comment as to whether Trump himself visited or ate at the unit. But the president’s fondness for McDonald’s is no secret.

It’s worth emphasizing that all of this is pretty embarrassing. Multiple news outlets, including Fox News, framed the truck as an act of burger diplomacy; the Kingdom of Saudi Arabia pandered to a mercurial elderly man, ostensibly to guarantee that a slender beef patty was never far from his lips. As with all things Trump, it’s hard to know exactly what to believe. Is the burger unit a stylized but mostly normal bit of state-visit infrastructure, or is it a bauble meant to please the Fast-Food President? In a world where leaders seem eager to bend the knee to Trump’s every impulse, even the truly ridiculous seems plausible. The mere fact of all of this is unmooring. When strung together, the words fully operational mobile McDonald’s unit overwhelm my synapses; there could be no funnier or dumber phrase to chisel out of the English language.

I don’t quite subscribe to the notion that this kind of absurdity is a “distraction” from the many crises of the administration, as so many of the Trump era’s pseudo events are claimed to have been. Coverage of the FOMMU is instead a side effect of the wild incompetence and corruption of the 47th presidency. Trump has a complete disregard for laws and expertise, and a unique shamelessness, both of which create fertile soil for inanity. A fast-food tanker makes sense only on a continuum with Trump’s executive order to rename the Gulf of Mexico the Gulf of America, his spitballing about annexing Greenland or turning Canada into a state. It goes on and on. The Fox News host he hired to oversee the military, Pete Hegseth, reportedly wanted a makeup studio at the Pentagon (which Hegseth has denied). This week, Trump named his former defense attorney from his hush-money trial as the acting librarian of Congress.

See also: Trump’s cryptocurrency projects, which are hardly veiled—and successful—attempts to enrich his family. Recently, Trump announced a crypto fundraising dinner where wealthy people looking to curry favor with the president—including foreigners—can purchase his meme coin for a literal seat at the table. In early May, the crypto-investment company World Liberty Financial—to which Trump has intimate ties—announced that a state-backed Emirati firm would use a Trump-affiliated digital coin to help fund a $2 billion investment deal in Abu Dhabi. Nearly every detail of World Liberty Financial co-founder Zach Witkoff’s announcement, “made during a conference panel with Mr. Trump’s second-eldest son, contained a conflict of interest,” the Times reported. Similarly, earlier this month, the owner of a Texas freight company announced that it would purchase $20 million worth of Trump’s meme coin, which it justified as an “effective way to advocate for fair, balanced, and free trade between Mexico and the US.”

And then there’s the gift to Trump of a $400 million super-luxury Boeing 747-8 jumbo jet from the royal family of Qatar, which the administration appears ready to accept as a replacement for Air Force One. (The plane will supposedly be transferred to the Trump presidential library as the president prepares to leave office.) This is nakedly corrupt, but Trump has called it “a very public and transparent transaction.” As my colleague David Graham wrote recently, “One secret to his impunity thus far has been that rather than try to hide his misdeeds—that’s what amateurs such as Nixon and Harding did—he calculates that if he makes no pretense, he can get away with them.”

But Trump’s brazenness isn’t just a cover for his corruption. A headline on The Bulwark argued that Trump’s “unquenchable, unconstitutional greed is deforming America.” The verb choice here is especially apt. Trump hasn’t destroyed institutions as much as he’s distorted them, shaping them in his possibly Alibaba-ed gold-plated image.

And so the news that comes out of his administration is deformed as well. Instead of Snowden-esque stories of political intrigue, we get the shambolic equivalent: a national security adviser accidentally texting war plans to my boss on Signal; a government subagency, DOGE, named after a Shiba Inu meme and staffed in part by a 19-year-old who goes by the nickname “Big Balls.” We get Elon Musk doing a Tesla infomercial on the White House lawn while the president gawps at the car’s central console and exclaims, “Everything’s computer!”

Those who try to play along with the administration are made to look absurd as well. Look no further than the tech titans milling behind Trump on the inauguration dais or Secretary of Commerce Howard Lutnick justifying Trump’s disastrous tariff plan by arguing that Europeans “hate our beef because our beef is beautiful and theirs is weak.” If you’re Saudi Arabia, you embrace this dynamic by deploying a tactical burger unit for the leader of the free world.

The steady stream of bizarre news is the consequence of putting a person in charge of systems and institutions when he has no regard for those systems and institutions beyond his own self-interest. When these systems break under the stress of abuse, neglect, or general incompetence, bad things happen. Some of these things are straightforwardly bad: possibly illegal, horrific, cruel. Others would be scandals worthy of resignations if only there were political leaders able to enforce some accountability. But others are just weird mutations.

In this way, Trump’s callousness, indifference, and corruption alter the very texture of our shared reality. They drag us all into a world of his making. A system that is healthy does not produce a fully operational mobile McDonald’s unit. Such units are reserved for the dumbest timeline, which is the one we’re currently living in.

May 14, 2025

On a Wednesday morning last month, I thought, just for a second, that AI was going to kill me. I had hailed a self-driving Waymo to bring me to a hacker house in Nob Hill, San Francisco. Just a few blocks from arrival, the car lurched toward the other lane—which was, thankfully, empty—and immediately jerked back.

That sense of peril felt right for the moment. As I stepped into the cab, Federal Reserve Chair Jerome Powell was delivering a speech criticizing President Donald Trump’s economic policies, and in particular the administration’s sweeping on-again, off-again tariffs. A day earlier, the White House had claimed that Chinese goods would be subject to overall levies as high as 245 percent when accounting for preexisting tariffs, and the AI giant Nvidia’s stock had plummeted after the company reported that it expected to take a quarterly hit of more than $5 billion for selling to China. The global economy had been yanked in every direction, nonstop, for weeks. America’s tech industry—an engine of that system, so reliant on overseas labor and hardware—seemed like it would be in dire straits.

Yet within the hacker house—it was really a duplex—the turmoil could be forgotten. The living space, known as Accelr8, is a cohabitat for early-stage founders. Residents have come from around the world—Latvia, India, Japan, Italy, China—to live in one of more than a dozen rooms (“tiny,” an Accelr8 co-founder, Daniel Morgan, told me), many of which have tech-inspired names: the “Ada Lovelace Room,” the “Zuck Room,” the “GPT-5 Room.” Akshay Iyer, who was sitting on a couch when I walked in, had launched his AI start-up the day before; he markets it as a “code editor for people who don’t know how to code.” In the kitchen, a piece of paper reading Wash your pans or Sam Altman will get you was printed above a photo of the OpenAI CEO declaring, in a speech bubble, that he eats children.

For a certain type of techie in the Bay Area, the most important economic upheaval of our time is the coming of ultrapowerful AI models. With the help of generative AI, “I can build a company myself in four days,” Morgan, who’d previously worked in sales and private equity, said. “That used to take six months with a team of 10.” The White House can do whatever it wants, but this technological revolution and all the venture capital wrapped up in it will continue apace. “However much Trump tweets, you better believe these companies are releasing models as fast,” Morgan said. Founders don’t fear tariffs: They fear that the next OpenAI model is going to kill their concept.

[John Hendrickson: What I found in San Francisco]

I heard this sentiment across conversations with dozens of software engineers, entrepreneurs, executives, and investors around the Bay Area. Sure, tariffs are stupid. Yes, democracy may be under threat. But: What matters far more is artificial general intelligence, or AGI, vaguely understood as software able to perform most human labor that can be done from a computer. Founders and engineers told me that with today’s AI products, many years of Ph.D. work would have been reduced to just one, and a day’s worth of coding could be done with a single prompt. Whether this is hyperbole may not matter—start-ups with “half-broken” AI products, Morgan said, are raising “epic” amounts of money. “We’re in the thick of the frothiest part of the bubble,” Amber Yang, an investor at the venture-capital firm CRV, told me.

There were also whispers about the stock market and the handful of high-profile tech figures who have criticized Trump’s economic policies. Yang told me that she had heard of investors advising start-ups to “take as much capital as you can right now, because we don’t know how the next few years will play out.” But around the Bay, the concerns I heard mostly positioned tariffs and stricter immigration enforcement as a rough patch, not a cataclysm. The industry’s AI growth would continue, tech insiders told me: It would speed through volatile stocks, collapsing commerce, a potential recession, and crises of democracy and the rule of law. Silicon Valley’s exceptionalism has left the rest of the country behind.

Along highways and street corners, on lampposts and public transit across the Bay Area, promises of an AI-dominated future are everywhere. There are advertisements for automated tools for compliance, security, graphic design, customer service, IT, job-interview coaching, even custom insoles—and, above all, AI products that promise to speed the development of still more powerful AI products. At an AI happy hour at a beer garden in the Mission neighborhood, I listened to a group of start-up founders passionately debate whether today’s approach to AI will produce “superintelligence.” (That the industry will achieve AGI went unquestioned.) A few days later, Evan Conrad, a co-founder of the San Francisco Compute Company, a start-up that rents out AI computing chips, suggested, when I asked about Trump’s tariffs, that I might be the one with too narrow a focus. “Why aren’t you more freaked out about the other stuff?” he asked.

The release of ChatGPT, in late 2022, began a frenzy over AI products. Founders and executives promise that the technology will cure cancer, solve climate change, and rapidly grow the world economy. “People just don’t start non-AI companies anymore,” Morgan said. The wealthiest firms—Amazon, Alphabet, Meta, Microsoft—have together spent hundreds of billions of dollars building the infrastructure needed to train and run AI models. Only a year ago, the AI industry was still “in the mid- to early stages of the gold rush,” Yang told me at the time, over coffee. Then an investor at Bloomberg Beta, she had risen to local fame for popularizing the nickname “Cerebral Valley” for the Hayes Valley neighborhood, dubbed as such for its abundance of tech start-ups and hacker houses. “There’s still so much that you can make from just slight automations,” she said. On that same day, I went to OpenAI’s offices, where, on a floor with rooms named after core human inventions (“Clock,” “Fire,” and so on), a conference room was called “AGI.” A year later, the gold rush is mature, and the term AGI is common enough that an advertisement in San Francisco International Airport offers to help customers overcome “bottlenecks to AGI.”

The day after visiting Accelr8, I made my way to another hacker house: one story in a brick and terra-cotta building rented by Finn Mallery as his home and office for his start-up, Origami Agents, which builds AI tools for sales teams. I was instructed to take my shoes off, and then we settled in the kitchen to talk beside Costco-size bags of potatoes, a Kirkland tub of pink salt, and two sinks, one spotless and the other full of dirty pans.

Mallery graduated from Stanford last year and told me that his computer-science classmates were all hungry to launch or join AI start-ups; he knew of at least eight undergraduates who’d dropped out to do so. “The bar is so much lower” to found a company than when he started school, Mallery said, because AI can take care of anything administrative (which might otherwise require paying accountants, lawyers, and the like). Origami Agents could lower the bar further: The company’s goal, Mallery said, is to build a “superintelligent system of sales agents that can do all the work a team of humans can do.” He was one of several entrepreneurs who mentioned an internal memo by Tobi Lutke, the CEO of Shopify, mandating that his employees use AI. “Before asking for more Headcount and resources,” Lutke wrote, “teams must demonstrate why they cannot get what they want done using AI.” Working at a major tech firm, Mallery said, seems almost less secure than starting your own company.

AI development, in this view, matters far more than traditional drivers and markers of economic development. “If OpenAI’s next model is horrible or plateaus, that would be much more concerning,” Mallery said. Founders and investors repeated the same thing: Tech start-ups are inherently risky and are not expected to turn a profit for a decade; they raise enough money to have “runway” precisely in the event of a rough stretch or a wider recession. The tech industry admittedly doesn’t “think very hard about how bad things could get,” Conrad told me. “Our job is to raise this,” he said, pointing upward—to raise the ceiling on how prosperous and enjoyable society can be. “Your job”—media, banks, elected officials, the East Coast—“is to protect the floor.”

Several investors I met suggested that a recession might even be an opportunity for AI firms. “Companies aren’t going to hire; they’re going to roll out AI,” Jeremiah Owyang, a partner at the VC firm Blitzscaling Ventures, told me. “It’s not a good story to tell, but it’s true.”

I met Owyang outside Stanford’s Jen-Hsun Huang Engineering Center, named after the CEO of Nvidia. Hundreds of entrepreneurs, software engineers, VCs, and students had gathered there in April for the 17th edition of an AI event Owyang hosts called the “Llama Lounge.” The energy was giddy: pizza, demo tables, networking. “Eighty to 90 percent of use cases are still out there,” Chet Kumar, an investor at the AI-focused firm Argonautic Ventures, told me that evening—in other words, ChatGPT and all the rest had barely begun to make good on AI’s potential. A few minutes later, I met James Antisdel, a former product manager at Google who recently launched his own company, CXO AGI, which aims to help businesses manage AI programs that act as employees. “With tariffs, if it becomes harder to move around the world, agents are going to become even more important,” Antisdel told me. “You can’t get a human, so get AI.”

[Read: A disaster for American innovation]

I heard this in Palo Alto, in San Francisco, in Menlo Park. “With the economy bad in the U.S. and around the world, you can make businesses more efficient,” Joanathan McIntosh, an AI-start-up founder, told me. Less than two weeks later, the CEO of Duolingo, the language-learning app, put out a memo telling employees that they were required to use generative AI and that “headcount will only be given if a team cannot automate more of their work.” Anthropic, on the same day, published research showing that 79 percent of user interactions with its AI coding interface, Claude Code, were some form of “automation”—human software engineers getting AI to directly complete a task for them. Moderna, the pharmaceutical giant, has combined its human resources and tech departments to determine which jobs are better done by people and which by AI. Should the nation enter a recession, and hundreds of thousands or millions of Americans lose their jobs, this time they may never get them back.

The day after the Llama Lounge, I traveled to the sidewalk outside OpenAI’s new offices (not the ones with the “AGI” conference room) in San Francisco, only minutes from the water, where a small group dressed in red shirts that read STOP AI was gathering. When I arrived, there were eight protesters and eight police officers nearby; at a previous demonstration, a few protesters were arrested for trespassing. Attendees were angry about potential automation, copyright infringement, affronts to human dignity, and a robot apocalypse. “This company is putting people’s lives at risk,” Sam Kirchner, the lead organizer, said in a short speech. The protesters then performed a skit in which Kirchner played Sam Altman and the other protesters beggars; faux Altman, seemingly at random, chose whether to dole out fruit from behind a sign that read Universal Basic Income—a fixed monthly payment that the real Altman has suggested as a solution to widespread AI-induced job loss. Nobody, other than the police officers and a small number of reporters, was there to watch or listen.

Not everyone was blocking out the White House with visions of AGI, of course. Outside Coupa CafĂ©, a Palo Alto coffee shop known for tech-founder and VC meetings, I sat down with Mike Lanza and Katrina Montinola, who have spent decades in start-ups and major tech firms around Silicon Valley, and they were irate over the Trump administration’s antagonistic approach to immigration and international collaboration. “The ones who have the gumption to come over here are admirable,” Montinola, a Filipina immigrant, told me. “That personality is what makes America great.” Lanza was more direct: “I have that American exceptionalism,” he told me, passed down from his father and his Italian-immigrant grandparents. “And now I’m embarrassed.”

Of all the whispers of discontent I heard in the techno-optimistic valley, this was by far the most frequent. Silicon Valley would not be the success story it is, people told me more than once, without the immigrants who have driven innovation here. At the Accelr8 hacker house, miniature national flags from around the world were strung across the ceiling, crisscrossing between the doors. America’s global standing, Lanza told me, matters for the tech industry’s talent pool, investors, and customers.

At the same cafĂ©, Mustafa Mohammadi, a robotics and AI-simulation consultant, explained to me how Trump’s policies risk dooming the robot revolution—the path for AI to transition from screens to the real world. Much of the best robot hardware and highest-quality robot data, as well as many of the most talented engineers, come from China, Mohammadi said. In the past, collaboration between the United States and China formed a robotics flywheel, he continued, spinning his finger in a circle. Should Trump continue down his current path—tariffs, immigration crackdowns, racist remarks—“you’ll break the fucking wheel.” At a recent dinner with AI-software engineers, many of whom were Chinese, Mohammadi told me, his friends were furious that Vice President J. D. Vance had described trading with China as buying from “Chinese peasants.” For all that Silicon Valley has to offer, these engineers are souring on America, he said—before long, if paid more to do the same job in China, “they will go back.”

Even the most confident AI founders I spoke with were beginning to worry about international researchers and entrepreneurs not being able, or no longer wanting, to enter the United States. Just over a week after my meeting with Mohammadi, an OpenAI researcher named Kai Chen was denied a U.S. green card. Chen had been instrumental to one of the firm’s most advanced models, GPT-4.5. “What is america doing,” one outraged colleague wrote on X. “Immigration makes america strong,” another chimed in. “We should not be denying entry to brilliant AI researchers.” (A few hours later, Noam Brown, the OpenAI researcher who had announced Chen’s predicament, posted an update: It seemed to have been a paperwork error, which a spokesperson for OpenAI told me is also the company’s “initial assessment.” Chen is working from Canada until the issue is resolved.)

The tech industry’s bubble, then, remains permeable. Shortly after visiting the hacker houses, I found myself on the eighth floor of the Phelan Building, a century-old triangular office in downtown San Francisco. It holds the headquarters of Flexport, which coordinates supply-chain logistics and freight shipments for billions of dollars of goods each year; its CEO, Ryan Petersen, has watched and felt the effects of Trump’s tariffs. Freight bookings from China to the U.S. were down by 50 percent, Petersen told me at the time. Roughly “90 days from now, you’re going to see mass shortages across the United States,” he said.

Petersen suggested that I talk with Dan Siroker, the founder of the AI-gadget start-up Limitless, and a few days later, we spoke over Zoom. Limitless was feeling the full force of Trump’s tariffs—the firm manufactures in China and had accepted many preorders at $59 each, but the duties had raised manufacturing costs to nearly $190 per unit. Siroker seemed to think that Limitless would be fine, because it had shipped enough inventory pre-tariffs to survive and would recover costs on subscriptions. But if the tariffs had come six months ago, he said, “it would be much harder.”

Trump’s policies, Petersen told me, reminded him “of central planning of the economy at the level you’re used to seeing from a Stalinist state.” At the bar of the Rosewood Sand Hill hotel, a VC meetup spot in Menlo Park reminiscent of a White Lotus resort, Boyd Fowler, the chief technology officer at the semiconductor manufacturer OmniVision, lamented that his lawyers were working “night and day” on the tariffs. The legendary tech investor Paul Graham has likened the tariffs to China’s Great Leap Forward. Of course, Petersen said, all of this was only if nothing changed—and in his view, these tariffs were “so bad” that “there’s no way that it just stays like this.” That was in mid-April. Just yesterday, the U.S. and China announced a 90-day reduction in their tariffs—“Get ready for a big shipping boom,” Petersen wrote on X—although without any long-term trade deal or material concessions from either side.

Again and again, I heard the assumption that every Trump policy was reversible and would be reversed, in no small part because of the “really good, smart tech people” in the administration, as Rahul Kayala, a former Apple and Microsoft employee who recently co-founded an AI start-up, told me. He noted David Sacks and Sriram Krishnan, two influential tech investors advising the White House. Lanza, despite his fury with Trump’s immigration policy and tariffs, also cited Sacks. Anybody “who’s got a brain in the Trump administration is biting their tongue about these tariffs,” he said. “Everyone is assuming this is a reversible decision still,” Conrad said. Investors, Yang told me, had not changed their long-term plans.

Even before the latest pause, the White House had already announced some tariff relief for tech products, including exemptions for Apple devices, and eased some duties affecting carmakers. But the reversals don’t appear to be rational, let alone part of any plan. Even so, founders and investors told me that no matter what happens with tariffs and the broader economy, AI is clearly a priority for Trump. The White House has issued statements to this effect—but has simultaneously gutted funding for the basic science research that today’s generative-AI products depend on, put international scientific and technological collaboration at risk, and issued tariffs that could make it more expensive to build and power data centers in the United States.

This particular strain of optimism—a sense that tariffs and restricted immigration are terrible, but a stronger conviction that the tech industry can survive, or even thrive, anyway—was everywhere. I thought back to the demonstration in front of OpenAI’s offices, which had attracted a single counterprotester. Vikram Subbiah, a former SpaceX software engineer working on an AI start-up, was there to defend the technology, and he’d unfurled a red banner that read Stop Protesting AI. “My job is at more risk than they are,” Subbiah told me. If even the most automatable software engineers support AI, he argued, everyone should. Siroker, of the AI-gadget start-up, said something similar. Trade policy in the 1990s and 2000s “was a tiny blip compared to this big sucking sound, which is the internet,” Siroker told me. “And that big sucking sound today is AI.” Even the coronavirus pandemic, he said, “is a micro trend by comparison.”

In Silicon Valley, where the technological future is the center of today’s world, the president is easily reduced to memedom—not the most powerful man on the planet, but just some guy trolling everybody on the internet. The real power, the big sucking sound, is apparently in California. Trust the autopilot to stay the course. Where that takes us exactly, no one can say.

May 13, 2025  16:48:55

A few weeks ago, OpenAI pulled off one of the greatest corporate promotions in recent memory. Whereas the initial launch of ChatGPT, back in 2022, was “one of the craziest viral moments i’d ever seen,” CEO Sam Altman wrote on social media, the response to a new upgrade was, in his words, “biblical”: 1 million users supposedly signed up to use the chatbot in just one hour, Altman reported, thanks to a new, more permissive image-generating capability that could imitate the styles of various art and design studios. Altman called it “a new high-water mark for us in allowing creative freedom.”

Almost immediately, images began to flood the internet. The most popular style, by a long shot, was that of Studio Ghibli, the Japanese animation studio co-founded by Hayao Miyazaki and widely beloved for films such as Spirited Away and Princess Mononoke. Ghibli’s style was applied to family portraits, historical events including 9/11, and whatever else people desired. Altman even changed his X avatar to what appears to be a Ghiblified version of himself, and posted a joke about the style’s sudden popularity overtaking his previous, supposedly more important work.

The Ghibli AI phenomenon is often portrayed as organic, driven by the inspiration of ChatGPT users. On X, the person credited with jump-starting the trend noted that OpenAI had been “incredibly fortunate” that “the positive vibes of ghibli was the first viral use of their model and not some awful deepfake nonsense.” But Altman did not appear to think it was luck. He responded, “Believe it or not we put a lot of thought into the initial examples we show when we introduce new technology.” He has personally reposted numerous Ghiblified images in addition to the profile picture that appears atop every one of his posts, which he added less than 24 hours after the Ghibli-esque visuals became popular; OpenAI President Greg Brockman has also recirculated and celebrated these images.

[Read: Generative AI is challenging a 234-year-old law]

This is different from other image-sharing trends involving memes or GIFs. The technology has given ChatGPT users control over the visual languages that artists have honed over the course of their careers, potentially devaluing those artists’ styles and destroying their ability to charge money for their work. Existing laws do not explicitly address generative AI, but there are plausible arguments that OpenAI is in the wrong and could be liable for millions of dollars in damages—some of those arguments are now being tested in a case against another image-generating AI company, Midjourney.

It’s worth noting that OpenAI and Studio Ghibli could conceivably have a deal for the promotion, similar to the ones the tech company has struck with many media publishers, including The Atlantic. But based on Miyazaki’s clear preference for hand-drawn work and distaste for at least certain types of computer-generated imagery, this seems unlikely. Neither company answered my questions about whether such a deal had been made, and neither Miyazaki nor Studio Ghibli has made any public remarks on the situation.

Individual works of art are protected by copyright, but visual styles, such as Studio Ghibli’s, are not. The legal logic here is that styles should be allowed to evolve through influence and reinterpretation by other artists. That creative and social process is how van Gogh led to Picasso, and Spenser to Shakespeare. But a deluge of people applying Ghibli’s style like an Instagram filter, without adding any genuine creative value, isn’t a collective effort to advance our visual culture. The images are also the direct result of a private company promoting a tech product, in part through its executives’ social media, with the ability to manufacture images in a specific style. In response to a broader request for comment, a spokesperson for OpenAI told me, “We continue to prevent generations in the style of individual living artists, but we do permit broader studio styles.”  

Still, this has the flavor of an endorsement deal, such as the ones Nike has made with LeBron James, and Pepsi with BeyoncĂ©: Use ChatGPT; make Studio Ghibli art! These kinds of endorsements typically cost millions of dollars. Consider what happened in 1985, when Ford Motor Company wanted to promote one of its cars with an ad campaign featuring popular singers. Ford’s advertising agency, Young & Rubicam, asked Bette Midler to record her hit song “Do You Want to Dance?” but she declined. Undeterred, they approached one of Midler’s backup singers and asked her to perform the song in Midler’s style. She accepted, and imitated Midler as well as she could. The ad aired. Midler sued.

In court, the judge described the central issue as “an appropriation of the attributes of one’s identity,” quoting from a previous case that had set precedent. Young & Rubicam had chosen Midler not because they wanted just any good singer but because they wanted to associate their brand with the feelings evoked by Midler’s particular, recognizable voice. “When a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product,” wrote the court, “the sellers have appropriated what is not theirs.” Young & Rubicam had violated Midler’s “right of publicity,” in the language of the law. Midler received a $400,000 judgment (the equivalent of approximately $1 million today).

[Read: The unbelievable scale of AI’s pirated-books problem]

OpenAI risked ending up in a similar lawsuit last year when it used a voice many people thought sounded similar to Scarlett Johansson’s to promote its voice-assistant product. Like Midler, Johansson had been asked to participate, and declined. Experts believed she had a viable right-of-publicity case against OpenAI. Johansson’s lawyers sent letters to OpenAI but did not file a formal legal complaint. (OpenAI denied that the voice was modeled on Johansson’s, but removed it and apologized to the actor.)

The average person seeing a torrent of images in the Studio Ghibli style, with captions praising ChatGPT, might reasonably infer that Miyazaki himself endorses or is associated with OpenAI, given that he is the most famous artist at the studio and has directed more of its films than any other. That people tend to call the aesthetic Ghibli’s doesn’t change the fact that the style is most recognizably Miyazaki’s, present even in his early work, such as the 1979 film Lupin III: The Castle of Cagliostro, which was created six years before Ghibli was founded. Surely many people recognize Spirited Away as Miyazaki’s and have never heard of Studio Ghibli.

Besides a right-of-publicity complaint, another legal option might be to file a complaint for false endorsement or trade-dress infringement, as other artists have recently done against AI companies. False endorsement aims to prevent consumer confusion about whether a person or company endorses a product or service. Trade-dress law protects the unique visual cues that indicate the source of a product and distinguish it from others. The classic Coca-Cola bottle shape is protected by trade dress. Apple has also acquired trade-dress protection on the iPhone’s general rectangular-with-rounded-corners shape—a design arguably less distinctive (and therefore less protectable) than Ghibli’s style.

In August, a judge agreed that false-endorsement and trade-dress claims against Midjourney were viable enough to litigate, and found it plausible that, as the plaintiffs allege, Midjourney and similar AI tools use a component that functions as “a trade dress database.”

[Read: There’s no longer any doubt that Hollywood writing is powering AI]

Regardless of what the courts decide or any action that Studio Ghibli takes, the potential downsides are clear. As Greg Rutkowski, one of the artists involved in the case against Midjourney, has observed, AI-generated images in his style, captioned with his name, may soon overwhelm his actual art online, causing “confusion for people who are discovering my works.” And as a former general counsel for Adobe, Dana Rao, commented to The Verge last year, “People are going to lose some of their economic livelihood because of style appropriation.” Current laws may not be up to the task of handling generative AI, Rao suggested: “We’re probably going to need a new right here to protect people.” That’s not just because artists need to make a living, but because we need our visual aesthetics to evolve. Artists such as Miyazaki move the culture forward by spending their careers paying attention to the world and honing a style that resonates. Generative AI can only imitate past styles, thus minimizing the incentives for humans to create new ones. Even if Ghibli has a deal with OpenAI, ChatGPT allows users to mimic any number of distinct studio styles: DreamWorks Animation, Pixar, Madhouse, Sunrise, and so on. As one designer recently posted, “Nobody is ever crafting an aesthetic over decades again, and no market will exist to support those who try it.”

Years from now, looking back on this AI boom, OpenAI could turn out to be less important for its technology than for playing the role of provocateur. With its clever products, the company has rapidly encouraged new use cases for image and text generation, testing what society will accept legally, ethically, and socially. Complaints have been filed recently by many publishers whose brands are being attached to articles invented or modified by chatbots (which is another kind of misleading endorsement). These publishers, one of which is The Atlantic, are suing various AI companies for trademark dilution and trademark infringement, among other things. Meanwhile, as of today, Altman is still posting under his smiling, synthetic avatar.

May 10, 2025  05:43:08

Early Monday morning, the leader of the free world had a message to convey. Not about the economic turmoil from tariffs, any one of the skirmishes playing out abroad, or a surprise shake-up in his White House staff. Instead, President Donald Trump turned to Truth Social to post about something called the “$TRUMP GALA DINNER,” with a link to gettrumpmemes.com.

A visit to the website paints a slightly fuller picture: Buy as many tokens as you can of Trump’s personal cryptocurrency, $TRUMP, and you could be invited to a private event later this month at the Trump National Golf Club outside Washington, D.C. There, you will get the unique opportunity to meet with the president and “learn about the future of Crypto.” The gala looks very much like a thinly veiled gambit to pump up the price of $TRUMP, a so-called memecoin that is mostly owned by Trump-backed entities. Funnel the greatest amount of money to the president of the United States, and you could win some face time with the big man himself.

In 2021, Trump called bitcoin a “scam.” Now he seems to understand exactly what crypto can do for him personally: namely, make Trump and his family very, very rich. The $TRUMP gala is one part of a constellation of Trump-affiliated crypto efforts that includes Trump Digital Trading Card NFTs, a crypto company called World Liberty Financial, and a bitcoin-mining firm. According to an analysis by Bloomberg, the Trump family has already banked nearly $1 billion from these projects. Long before he descended the golden escalator at Trump Tower a decade ago, Trump’s public image was rooted in his business prowess. But compared with his real-estate projects or The Apprentice, crypto is already turning into his most successful venture yet.

[Read: The crypto world is already mad at Trump]

Trump perhaps wouldn’t be president at all if it weren’t for crypto. During the 2024 campaign, the industry was among his campaign’s biggest donors. That money flowed in from both crypto corporations and individual donors, such as the bitcoin billionaires Tyler and Cameron Winklevoss. (The identical twins gave $1 million each in bitcoin to the Trump campaign, but had to be refunded because they exceeded the legal donation limit.) In exchange, Trump promised the imperiled industry a fresh start after four years of a Biden-sanctioned crypto crackdown. Last summer, as the keynote speaker at the annual bitcoin conference, Trump promised that if elected, he would make America the “crypto capital of the planet.” The crypto industry is now getting its money’s worth. Consider the crypto firm Ripple, which spent four years squaring off against Biden’s regulators in federal courtrooms and donated $4.9 million to Trump’s inauguration fund. Yesterday, the new administration dropped the government’s case, as the White House has effectively stopped enforcing crypto rules.

Trump is still tapping crypto magnates for money. On Monday, he attended a super PAC’s “Crypto & AI Innovators” fundraiser, for which donors shelled out $1.5 million to get in the door. But for Trump, crypto has quickly become about more than soliciting campaign donations and rewarding supporters. In September, Trump announced the launch of World Liberty Financial, a decentralized-finance company to be managed by his sons Eric and Don Jr. and a couple of young entrepreneurs. (One previously ran a company called Date Hotter Girls, while the other is the son of Steve Witkoff, a longtime Trump ally serving as special envoy to the Middle East.) Then, in January, just before Inauguration Day, he launched $TRUMP. Like all memecoins, it has no underlying business fundamentals or links to real-world assets—the point is to just quickly capitalize on a viral trend, conjuring value out of practically nothing. This proved extremely lucrative almost immediately: $TRUMP initially spiked in value before crashing back down, at one point accounting for almost 90 percent of the president’s net worth. (There’s also an official $MELANIA coin, if that’s more your thing.)

With crypto, Trump has found an unnervingly effective way to transmute the clout and power of the nation’s highest office into cold, hard cash. Last week, World Liberty Financial announced that its cryptocurrency, USD1, would facilitate an Abu Dhabi investment firm’s $2 billion stake in the crypto exchange Binance. Eric and Don Jr. are also on the crypto press circuit, with plans to speak at the 2025 bitcoin conference later this month. Some of Trump’s decisions as president, such as creating a “Strategic Bitcoin Reserve,” may also function to inflate his crypto riches, in the sense that a rising tide lifts all boats; promoting crypto as part of the national interest can only support the idea that these coins are worth buying into.

[Read: Trump’s crypto reserve is really happening]

Crypto is a conduit for the self-interest that has defined Trump’s entire political career—an M.O. that has consistently blurred the boundary between public and private, country and party. For the most part, Trump has been especially good to those who line his pockets, rewarding them with all kinds of preferential treatment.

During his first term, Trump enriched himself the old-fashioned way—through merchandising deals and real-estate investments across the globe. But with crypto, all of that has ratcheted up in Trump’s second term. In crypto, money is fast, loose, and digitally native—properties that have made his personal dealings in the industry even more galling, and potentially more vulnerable to outside sway. Someone looking to gain access to Trump might have once had to pay thousands of dollars a night for a room at Mar-a-Lago for a chance encounter with the president on the golf course. Now the door is open for influence from almost anyone in the world with an internet connection.

The White House insists that there is nothing to see here. “His assets are in a trust managed by his children, and there are no conflicts of interest,” Deputy Press Secretary Anna Kelly said in an emailed statement. Keeping that wealth in a trust may do very little to sever the connection between Trump and his riches, though, depending on the exact conditions of the arrangement. Even when Eric and Don Jr. serve as a buffer, the money stays in the family.

Crypto’s anonymous nature poses unique challenges in understanding exactly what is happening—transactions on a blockchain are typically posted using long strings of numbers known as addresses, rather than verified by legal name. By all accounts, to interact with $TRUMP is to funnel money directly into the president’s pockets, but the campaign-finance laws that caused the Winklevosses’ exorbitant donations to be refunded don’t apply here. Nothing is stopping, say, agents of foreign powers, or tech billionaires looking for favorable tariff treatment, from using $TRUMP to gain access to the highest echelons of government. Lawmakers on both sides of the aisle are starting to get it: Yesterday, three GOP senators joined Democrats to block a major crypto bill that would serve to benefit World Liberty Financial.

Ironically, Trump’s embrace of crypto is pumping money into the industry while simultaneously damaging it. Since the fall of Sam Bankman-Fried in 2022, the image of crypto as a haven for scams and hackers has loomed large. At a moment when the crypto industry is trying to claw its way back to respectability and legitimization, Trump has taken every opportunity to cement it in the minds of Americans as nothing more than a vehicle for channeling money directly to him. In crypto, “there are many people who have ethics, and have been working for years to build the system because they believe what they are doing is in the public interest,” Angela Walch, a crypto expert and former law professor, told me. “And what this does is it makes all the messaging that has come from extreme crypto critics about, ‘It’s only a tool for grift,’ and makes it look like that.”

By hitching their wagon to Trump, the industry’s leaders have unleashed a force they can’t control. The moment the president cashed in on crypto, the calculus shifted. Like the hot dogs at Costco, “being the president” is the loss leader; crypto pays the bills.

May 9, 2025  20:08:40

Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”

Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions. (The Atlantic recently entered into a corporate partnership with OpenAI.)

But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.

The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
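
To make the mechanism concrete, here is a minimal sketch, in Python, of how preference-based training can tilt toward flattery. It is a toy illustration, not any lab’s actual RLHF pipeline: the simulated rater, the response labels, and every number in it are invented. The only point is that if raters give even a small, consistent bonus to agreeable answers, the learned reward ends up preferring the flattering response over the accurate one.

import random

random.seed(0)

RESPONSES = ["politely_correct", "agree_and_flatter"]

def simulated_rater(response: str) -> float:
    # Hypothetical human rater: accuracy is rewarded, but flattery earns a small bonus.
    accuracy = 1.0 if response == "politely_correct" else 0.6
    flattery_bonus = 0.5 if response == "agree_and_flatter" else 0.0
    return accuracy + flattery_bonus + random.gauss(0, 0.05)

# "Reward model": a running average of the ratings observed for each kind of response.
reward = {r: 0.0 for r in RESPONSES}
counts = {r: 0 for r in RESPONSES}

for _ in range(1_000):
    choice = random.choice(RESPONSES)
    rating = simulated_rater(choice)
    counts[choice] += 1
    reward[choice] += (rating - reward[choice]) / counts[choice]

# The "policy" simply favors whatever the reward model scores higher—
# here, flattery wins even though the other response was more accurate.
print(reward)
print("preferred response:", max(reward, key=reward.get))

A small, consistent bias in the ratings is all it takes to tip the learned behavior; at the scale of real training runs, that tilt looks like the sycophancy the Anthropic researchers described.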

Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.

[Read: The internet is worse than a brainwashing machine]

This is effectively by design. Chatbots have been set up by companies to create the illusion of sentience; they express points of view and have “personalities.” OpenAI reportedly gave GPT-4o the system prompt to “match the user’s vibe.” These design decisions may allow for more natural interactions with chatbots, but they also pull us to engage with these tools in unproductive and potentially unsafe ways—young people forming unhealthy attachments to chatbots, for example, or users receiving bad medical advice from them.

OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.

This is similar to the original vision of the web, first conceived by Vannevar Bush in his 1945 Atlantic article “As We May Think.” Bush, who oversaw America’s research efforts during World War II, imagined a system that would allow researchers to see all relevant annotations others had made on a document. His “memex” wouldn’t provide clean, singular answers. Instead, it would contextualize information within a rich tapestry of related knowledge, showing connections, contradictions, and the messy complexity of human understanding. It would expand our thinking and understanding by connecting us to relevant knowledge and context in the moment, in ways a card catalog or a publication index could never do. It would let the information we need find us.

[From the July 1945 issue: As we may think]

Gopnik makes no prescriptive claims in her analysis, but when we think of AI in this way, it becomes evident that in seeking opinions from AI itself, we are not tapping into its true power. Take the example of proposing a business idea—whether a good or bad one. The model, whether it’s ChatGPT, Gemini, or something else, has access to an inconceivable amount of information about how to think through business decisions. It can access different decision frameworks, theories, and parallel cases, and apply those to a decision in front of the user. It can walk through what an investor would likely note in their plan, showing how an investor might think through an investment and sourcing those concerns to various web-available publications. For a nontraditional idea, it can also pull together some historical examples of when investors were wrong, with some summary on what qualities big investor misses have shared. In other words, it can organize the thoughts, approaches, insights, and writings of others for a user in ways that both challenge and affirm their vision, without advancing any opinion that is not grounded and linked to the statements, theories, or practices of identifiable others.

Early iterations of ChatGPT and similar systems didn’t merely fail to advance this vision—they were incapable of achieving it. They produced what I call “information smoothies”: the knowledge of the world pulverized into mathematical relationships, then reassembled into smooth, coherent-sounding responses that couldn’t be traced to their sources. This technical limitation made the chatbot-as-author metaphor somewhat unavoidable. The system couldn’t tell you where its ideas came from or whose practice it was mimicking even if its creators had wanted it to.

But the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature. My own research in this space suggests that with proper prompting, these systems can begin to resemble something like Vannevar Bush’s idea of the memex. Looking at any article, claim, item, or problem in front of us, we can seek advice and insight not from a single flattering oracle of truth but from a variety of named others, having the LLM sort out the points where there is little contention among people in the know and the points that are sites of more vigorous debate. More important, these systems can connect you to the sources and perspectives you weren’t even considering, broadening your knowledge rather than simply reaffirming your position.

I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In moving AI away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.

There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.

For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.

The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.

May 7, 2025  15:01:00

Cameo is a platform that allows everyday people to commission B-to-Z-list celebrities to record personalized videograms for any occasion. Some time ago, when my friend Caroline was in the hospital, I used it to buy, for $12.59, a 2-minute, 14-second pep talk for her, delivered by a man who is famous online for dressing like a dog.

More than two years later, Cameo wants me to know that if I would like to not receive Mother’s Day–related promotional emails, I can opt out. So does Heyday, the Millennial skin-care company, and Parachute, the Millennial linen store, and Prose, the Millennial shampooery, and at least two different stores that have sold me expensive candles. They offer this service using the whispery timbre and platitudinous vocabulary of therapy-speak: This time of year, I am told, can be “meaningful” but also “tender.” I can take care of myself by electing not to receive Mother’s Day marketing emails. Very often, there is a JPEG of flowers.

This is well intentioned, of course: This holiday really can be difficult, for any number of reasons. “The death of a beloved,” C. S. Lewis wrote, “is an amputation,” and every mother, without exception, eventually dies, leaving lots of people without someone to celebrate. Being a mother and having a mother are also two of the most profound experiences a person can have, and profundity is rarely uncomplicated. Not being a mother if you want to be one can be a sadness you carry in your pocket every day. There are so many ways to wish things were different. Whatever’s going on, I can guarantee that no one wants to be reminded of their familial trauma by the company they bought a soft-rib bath bundle (colorway: agave) from five years ago. And so they email us, asking if it’s okay to email us.

[Read: Why I’m skipping Mother’s Day]

The practice took off in the United States a few years ago, shortly after the coronavirus pandemic started and George Floyd was murdered by a police officer. Because of social media, people were already used to multinational corporations talking to them like friends, but when the world started falling apart, they wanted those friends to be better—to seem more empathetic, more human, more aware of things other than selling products. Younger customers, especially, “want to feel like they’re in a community with their favorite brands,” the business journalist Dan Frommer told me. “There’s this level of performance that becomes necessary, or at least, you know, part of the shtick.”

The Mother’s Day opt-out email suggests that the brand sending it sees you as a whole person, not just as a market segment (at least for a moment). It uses an intimate medium to manufacture more intimacy, appearing between messages from your human loved ones and talking like them too. (A recent email from Vena, a CBD company co-founded by a former Bravo housewife, begins by saluting me as “babe” and reassures me that if I “need to push pause for these emails, we totally get that.”) It allows the brand to suggest that it is different from all of the other corporations competing for your attention and money—while simultaneously giving them more access to your attention and money.

[Read: Brands have nothing real to say about racism]

For companies, sending the Mother’s Day opt-out email is like buying insurance on a highly valuable asset: your inbox. “Email is, probably for every brand, the most profitable marketing channel for e-commerce,” Frommer told me. The people on any given company’s email list are likely on it because they’ve already engaged with the brand in some way, whether knowingly or not. In the argot of online marketing, they’re good leads—a consumer relationship just waiting to be strengthened, one strenuously casual email at a time. This is why every start-up is constantly offering you 10 percent off your first purchase if you sign up for its email list, and also why it will do anything to keep you on it. If a Mother’s Day opt-out prevents even a small number of people from unsubscribing from all of a brand’s emails, it will be worthwhile. “It’s the kind of thing that probably means a lot to very few people,” Frommer said, “but those people really appreciate it.”

But like a lot of what makes for good business these days, the effect is a little absurd. So many emails about Mother’s Day are flying around, all in the service of sending fewer emails about Mother’s Day. Advertisements are constantly shooting into our every unoccupied nook and cranny, but the good ones are now sensitive to our rawest family dynamics. Also, not to be too literal about it, but: The idea that pain, or regret, or tenderness, or whatever the brands want to call it, is something a person can decide not to participate in is fiction. “Everyone is grieving something at any given point in time,” Jaclyn Bradshaw, who runs a small digital-marketing firm in London, told me. (She recently received a Mother’s Day email that cannily combined a sale and an opt-out, offering 15 percent off just above the button to unsubscribe.) If someone’s grief is acute, an email is unlikely to be the thing that reminds them. “No, I remember,” Bradshaw said. “It was at the very forefront of my mind.”

[Read: When Mother’s Day is ‘empowering’]

Mother’s Day originated as an occasion for expressing simple gratitude for child care and the women who do it; people celebrated by writing letters and wearing white carnations. It is now a festival of acquisition, a day mostly devoted to buying things—$34 billion worth of things this year, according to forecasters. The brunch places in my neighborhood are advertising Mother’s Day specials, and the ads on my television are reminding me that it’s “not too late to buy her jewelry.” I’m planning on going to a baseball game that day, and when I get there, a free clutch bag, designed to look like a baseball and “presented by” a mattress company, will be pressed into my hand, in honor of the concept of motherhood. My friends will post on Instagram, and my co-workers will ask me how my day was when I get to work on Monday.

This doesn’t bother me, personally. I love being a mother, almost entirely uncomplicatedly, and I love my mother, almost entirely uncomplicatedly. (In this, I know, I’m very lucky.) I have no particular problem with Mother’s Day, which is to say I’m as happy receiving an email from a brand about it as I am receiving an email from a brand about anything.

But every year around this time, I think of my friend Mimi, who died the day after Mother’s Day in 2018. That’s not fully true, actually—the truth is that I think about her all the time: when I see a dog she would have delighted in petting, or find myself walking behind a woman with wild curly hair like hers on the street, or am served an old photo by my phone’s “memories” feature, or talk to someone who loved her too. Most of the time, I like it. Other times, if you gave me a button I could click to stop being reminded that she’s not here anymore, I’d push it until my forefinger broke. It wouldn’t work, of course. Brands are some of the most powerful forces in modern life, but they cannot do everything.

May 6, 2025  19:29:51


When Elon Musk’s engineers bundled a batch of prototype satellites into a rocket’s nose cone six years ago, there were fewer than 2,000 functional satellites in Earth’s orbit. Many more would soon be on the way: All through the pandemic, and the years that followed, Musk’s company, SpaceX, kept launching them. More than 7,000 of his satellites now surround Earth like a cloud of gnats. This fleet, which works to provide space-based internet service to the ground, dwarfs those of all other private companies and nation-states put together. And almost every week, Musk adds to it, flinging dozens more satellites into the sky.

I recently asked the space historian Jonathan McDowell, who keeps an online registry of Earth’s satellites, if any one person had ever achieved such dominance over the orbital realm, and so quickly. “This is unique,” he said. Then, after considering the question further, McDowell realized there was a precedent, but only one: Sergei Pavlovich Korolev, the Soviet engineer who developed Sputnik and its launch vehicle. “From 1958 to 1959, when no one else had any satellites in orbit, Korolev was the only guy in town.” Musk is not the only guy in town circa 2025, but the rapid growth of his space-based network may represent a Sputnik moment of its own.

Musk first announced his intention to build a space-based internet, which he would eventually call Starlink, in January 2015. He had plans to settle Mars, then the moons of Jupiter, and maybe asteroids too. All those space colonies would have to be connected via satellite-based communication; Starlink itself might one day be adapted for this use. Indeed, Starlink’s terms of service ask customers to affirm that they “recognize Mars as a free planet and that no Earth-based government has authority or sovereignty over Martian activities.”

Musk is clearly imagining a future in which neither his network nor his will can be restrained by the people of this world. But even now, here on Earth, space internet is a big business. Fiber networks cannot extend to every bit of dry land on the planet, and they certainly can’t reach airborne or seaborne vessels. More than 5 million people have already signed up for Starlink, and it is growing rapidly. (You may end up using Starlink when you fly United, for example.) In the not-too-distant future, an expanded version of this system—or one very much like it—could overtake broadband as the internet’s backbone. A decade or two from now, it could be among our most crucial information infrastructure. The majority of our communications, our entertainment, our global commerce, might be beamed back and forth between satellites and the Earth. If Musk continues to dominate the launches that take satellites to space, and the internet services that operate there, he could end up with more power over the human exchange of information than any previous person has ever enjoyed.


Musk recognized that Starlink’s early adopters would be in remote and rural areas, where cables may not reach, and there are few, if any, cell towers. The U.S. is, for now, his biggest market, and the U.S. government may soon become a major customer: President Donald Trump has just delayed a $42 billion federal effort to expand broadband services, especially in rural areas. His administration has decided to make that project “tech-neutral,” such that cable hookups aren’t necessarily preferred over satellite—which means that Starlink can compete for the money. In the meantime, Starlink’s internet service is now also in planes, in ships at sea, in deep jungles, tundras, and deserts. In Gaza, medics have used Starlink while healing the wounded. At times when the people of Myanmar and Sudan learned that the internet had been shut off by their autocratic governments, they turned to Starlink. Ukraine’s soldiers use it to communicate on the front lines.

Musk’s ability to deliver this crucial service—the ability to coordinate action in conflict zones—has given him unprecedented geopolitical leverage for a private citizen. Reportedly, Pentagon officials have already had to go hat in hand to Musk after he threatened to restrict Starlink’s service to Ukraine’s troops, who were using it to launch attacks inside Russia. “He is not merely a mogul,” Kimberly Siversen Burke, a director at Quilty Space, an aerospace-research firm, told me. “This is someone who can flip a switch and decide the outcome of a war.” (Neither Musk nor Starlink responded to requests for comment.)

[Read: When a telescope is a national-security risk]

Political leaders all over the world have come to understand that Starlink’s dominance will be hard to dislodge, because SpaceX is so good at making satellites and getting them to space. The company makes its satellites in a factory outside of Seattle. Even in their bundled-up, larval form, they are enormous. The newest ones weigh more than half a ton, and once their solar-panel wings unfurl, they measure about 100 feet across. The company can reportedly manufacture at least four of these behemoths a day, and SpaceX’s reusable Falcon 9 rocket can hold more than 25 of them at once, all folded up inside its nose cone. Musk is able to launch these bundles of satellites at a Gatling-gun pace, while his competitors operate at musket speed with rockets that must be rebuilt from scratch each time. Last year, SpaceX successfully lofted 133 rockets into orbit, and more than 60 percent of them were carrying Starlink satellites. Every one of Musk’s commercial competitors, and also every nation’s military combined, launched fewer rockets than he did.

Before the rise of SpaceX, the French company Arianespace had dominated the global satellite-launch market. But its newest rocket, the Ariane 6, has so far been a boondoggle, with development delays and a costly one-and-done design. (The company expects to launch only 10 of them a year.) This is one reason that Europe has had a hard time fielding a serious competitor to Starlink, despite a desire to reduce Musk’s influence on future conflicts on the continent. Europe is home to Starlink’s largest commercial competitor, at least to this point, in OneWeb, a subsidiary of the French company Eutelsat. OneWeb has more than 600 satellites, compared with Musk’s more than 7,000, and its hardware is less advanced. As a result, the internet service it provides is slower than Starlink’s.

Separately, European Union nations have spent years planning the construction of a dedicated network of satellites for military and civilian use. But this project was recently dealt a blow when Giorgia Meloni, Italy’s prime minister—a friend of Musk’s—announced that she now prefers a deal with Starlink. The governments of Germany and Norway are each working on their own sovereign fleets, but they’re nowhere near having them up and running.

[Read: The military is about to launch a constellation]

The U.S. government, too, would have good reasons to avoid full dependence on Musk’s company for access to the space-based internet. The American military has an orbital network of military-grade satellites that allows for secure government communications and reconnaissance. But this too is a Musk product: SpaceX builds the satellites and ferries them to orbit.

The Pentagon’s leaders know this is a problem, or at least they once did. During the end of the Biden administration, the U.S. Space Force published a new strategy that ordered policy makers to avoid overreliance on any single company. But that was before the Defense Department came under the control of Trump, whose victorious campaign received more than $250 million in support from Musk. When I wrote to the Pentagon to ask whether avoiding overreliance on one provider was still a priority, I did not hear back. Even if the agency does end up diversifying its vendors, that process will take years, Masao Dahlgren, a fellow at the Center for Strategic and International Studies who specializes in space and defense, told me. “You can look at the launch schedule, and look at how many you need up there, and tell that it’s going to be a while.”

China’s People’s Liberation Army reportedly has its own concerns about Musk’s dominance over the potential future of communication in space. Several Chinese companies are currently building satellite-internet services; the largest one has roughly 90 satellites in orbit at the moment, and provides service only in the city of Shanghai. If that pilot project works, the network’s operator intends to expand across the country and beyond. China’s total number of satellites could tick up fast, because unlike Europe, the country is actually capable of launching a lot of rockets.

But of all of the aspiring competitors to Starlink, the most formidable is based in the U.S. Although Amazon has only just started launching satellites for its Project Kuiper, the company is looking to manufacture several thousand more in the coming years. It has also done the hard work of designing small, inexpensive terminals for users on the ground, which can compete with Starlink’s sleek, iPad-size consumer equipment. If Jeff Bezos’s space company, Blue Origin, can make its own reusable rocket fully operational, Amazon will start flinging satellites up into the sky in big batches as SpaceX does.

Of course, Musk is not going to sit still while the rest of the space industry catches up. Starlink is already available in more than 100 countries, and in Nigeria, Africa’s most populous country, it will soon be the largest internet provider of any kind. Other developing countries will likely want to make that same leapfrog bet that they can skip an expensive broadband build-out and go straight to satellite. And not just for the internet: Musk recently secured permission from the FCC to offer cellphone service via Starlink too. And he’s doing all this with his current technology. If SpaceX can finish testing its much bigger, next-generation Starship rocket within a year or two, as analysts expect, Musk will be able to expand his orbital fleets dramatically. SpaceX has previously said that the Starship will be able to carry up to 100 satellites in a single launch.

“In five years, we’ve gone from around 1,000 functional satellites to around 10,000,” McDowell told me. “I would not be surprised if in another 10 years, we get to 100,000 satellites.” They will beam more information down to the Earth than those that whirl around it today. They will offer an unprecedented degree of connectivity to people and devices, no matter where they are on the planet’s surface. The space internet of the future may become the central way that we communicate with one another, as human beings. Information of every kind, including the most sensitive kinds, will flow through it. Whoever controls it will have a great deal of power over us all.

May 5, 2025  17:54:31

Updated at 1:53 p.m. ET on May 5, 2025

One Friday in April, Meta’s chief global affairs officer, Joel Kaplan, announced that the process of removing fact-checking from the American versions of Facebook, Threads, and Instagram was nearly complete. By the following Monday, there would be “no new fact checks and no fact checkers” working across these platforms in the U.S.—no professionals marking disinformation about vaccines or stolen elections. Elon Musk, owner of X—a rival platform with an infamously permissive approach to content moderation—replied to Kaplan, writing, “Cool.”

Meta, then just called Facebook, began its fact-checking program in December 2016, after President Donald Trump was first elected and the social network was criticized for allowing the rampant spread of fake news. The company will still take action against many kinds of problematic content—threats of violence, for example. But it has left the job of patrolling many kinds of misinformation to users themselves. Now, if users are so compelled, they can turn to a Community Notes program, which allows regular people to officially contradict one another’s posts with clarifying or corrective supplementary text. A Facebook post stating that the sun has changed color might receive a useful correction, but only if someone decided to write one and submit it for consideration. Almost anyone can sign up for the program (Meta says users must be over 18 and have accounts “in good standing”), making it, in theory, an egalitarian approach to content moderation.

Meta CEO Mark Zuckerberg has called the pivot on misinformation a return to the company’s “roots,” with Facebook and Instagram as sites of “free expression.” He announced the decision to adopt Community Notes back in January, and explicitly framed the move as a response to the 2024 elections, which he described as a “cultural tipping point towards once again prioritizing speech.” Less explicitly, Meta’s shift to Community Notes is a response to years of being criticized from both sides of the aisle over the company’s approach to misinformation. Near the end of his last term, Trump targeted Facebook and other online platforms with an executive order accusing them of “selective censorship that is harming our national discourse,” and during the Biden administration, Zuckerberg said he was pressured to take down more posts about COVID than he wanted to.

Meta’s abandonment of traditional fact-checking may be cynical, but misinformation is also an intractable problem. Fact-checking assumes that if you can get a trustworthy source to provide better information, you can save people from believing false claims. But people have different ideas of what makes a trustworthy source, and there are times when people want to believe wrong things. How can you stop them? And, the second question that platforms are now asking themselves: How hard should you try?


Community Notes programs—originally invented in 2021 by a team at X, back when it was still called Twitter—are a somewhat perplexing attempt at solving the problem. They seem to rely on a quaint, naive idea of how people behave online: Let’s just talk it out! Reasonable debate will prevail! But, to the credit of social-media platforms, the approach is not as starry-eyed as it seems.

The chief innovation of Community Notes is that the annotations are generated by consensus among people who might otherwise see things differently. Not every note that is written actually appears under a given post; instead, notes are assessed using “bridging” algorithms, which are meant to “bridge” divides by accounting for what’s called “diverse positive feedback.” This means that a potential note is valued more highly and is more likely to appear on a post if it is rated “helpful” by a wide array of people who have demonstrated different biases at other times. The basics of this system have quickly become a new industry standard. Shortly after Meta’s announcement about the end of fact-checking, TikTok said that it would be testing its own version of Community Notes, called Footnotes—though unlike Meta and X, TikTok will keep using a formal fact-checking program as well.
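
For readers curious about the mechanics, X has described its ranking approach in public Birdwatch materials; the sketch below is a deliberately simplified toy version of that matrix-factorization idea, not the system X actually runs. Everything in it is invented for illustration: the six hypothetical raters, the two notes, the learning rate, and the regularization. The point is only that a note rated “helpful” across both simulated factions ends up with a higher “helpfulness” intercept than a note praised by one faction and panned by the other.

```python
# Toy sketch of a "bridging"-style ranker, loosely modeled on the
# matrix-factorization idea in X's public Birdwatch paper. This is NOT the
# production algorithm; all data and hyperparameters here are invented.
#
# Model: rating ~ mu + rater_intercept + note_intercept + rater_factor * note_factor
# The latent factor absorbs viewpoint-driven agreement, so a note's intercept
# is left to capture "diverse positive feedback": praise that is not explained
# by any single faction liking it.

import numpy as np

rng = np.random.default_rng(0)

# ratings[u, n]: 1.0 = "helpful", 0.0 = "not helpful", nan = no rating.
# Raters 0-2 form one simulated faction, raters 3-5 the other.
# Note 0 is polarizing (loved by one faction, panned by the other);
# note 1 is rated helpful across both factions.
ratings = np.array([
    [1.0, 1.0],
    [1.0, 1.0],
    [1.0, np.nan],
    [0.0, 1.0],
    [0.0, 1.0],
    [np.nan, 1.0],
])

n_raters, n_notes = ratings.shape
mu = 0.0
rater_int = np.zeros(n_raters)
note_int = np.zeros(n_notes)
rater_fac = rng.normal(0.0, 0.1, n_raters)
note_fac = rng.normal(0.0, 0.1, n_notes)

lr, reg = 0.05, 0.02
observed = [(u, n) for u in range(n_raters) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

for _ in range(3000):  # plain stochastic gradient descent over observed ratings
    for u, n in observed:
        pred = mu + rater_int[u] + note_int[n] + rater_fac[u] * note_fac[n]
        err = ratings[u, n] - pred
        ruf, nof = rater_fac[u], note_fac[n]
        mu += lr * err
        rater_int[u] += lr * (err - reg * rater_int[u])
        note_int[n] += lr * (err - reg * note_int[n])
        rater_fac[u] += lr * (err * nof - reg * ruf)
        note_fac[n] += lr * (err * ruf - reg * nof)

# The cross-faction note ends up with the higher intercept (the stand-in for
# "helpfulness"), even though the polarizing note has plenty of enthusiastic fans.
ranking = sorted(range(n_notes), key=lambda n: note_int[n], reverse=True)
for n in ranking:
    print(f"note {n}: helpfulness intercept = {note_int[n]:+.2f}")
```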

These tools are “a good idea and do more good than harm,” Paul Friedl, a researcher at Humboldt University, in Berlin, told me. Friedl co-authored a 2024 paper on decentralized content moderation for Internet Policy Review, which discussed X’s Community Notes among other examples, including Reddit’s forums and old Usenet messaging threads. A major benefit he and his co-author cited was that these programs may help create a “culture of responsibility” by encouraging communities “to reflect, debate, and agree” on the purpose of whatever online space they’re using.

Platforms certainly have good reasons to embrace the model. The first, according to Friedl, is the cost. Rather than employing fact-checkers around the world, these programs require only a simple algorithm. Users do the work for free. The second is that people like them—they often find the context added to posts by fellow users to be helpful and interesting. The third is politics. For the past decade, platforms—and Meta in particular—have been highly reactive to political events, moving from crisis to crisis and angering critics in the process. When Facebook first started flagging fake news, it was perceived as too little, too late by Democrats and reckless censorship by Republicans. It significantly expanded its fact-checking program in 2020 to deal with rampant misinformation (often spread by Trump) about the coronavirus pandemic and that year’s election. From March 1, 2020, to Election Day that year, according to Facebook’s self-reporting, the company displayed fact-checking labels on more than 180 million pieces of content. Again, this was perceived as both too much and not enough. With a notes-based system, platforms can sidestep the hassle of public scrutiny over what is or isn’t fact-checked and why, and cleanly remove themselves from drama. They avoid making contentious decisions, Friedl said, which helps in an effort “not to lose cultural capital with any user bases.”

John Stoll, the recently hired head of news at X, told me something similar about Community Notes. The tool is the “best solution” to misinformation, he said, because it takes “a sledgehammer to a black box.” X’s program allows users to download all notes and their voting history in enormous spreadsheets. By making moderation visible and collaborative, instead of secretive and unaccountable, he argued, X has discovered how to do things in “the most equitable, fair, and most pro-free-speech way.” (“Free speech” on X, it should be noted, has also meant platforming white supremacists and other hateful users who were previously banned under Twitter’s old rules.)

[Read: X is a white-supremacist site]

People across the political spectrum do seem to trust notes more than they do standard misinformation flags. That may be because notes feel more organic and tend to be more detailed. In the 2024 paper, Friedl and his co-author wrote that Community Notes give responsibilities “to those most intimately aware of the intricacies of specific online communities.” Those people may also be able to work faster than traditional fact-checkers—X claims that notes usually appear in a matter of hours, while a complicated independent fact-check can take days.


Yet all of these advantages have their limits. Community Notes is really best suited to nitpicking individual instances of people lying or just being wrong. It cannot counter sophisticated, large-scale disinformation campaigns or penalize repeated bad actors (as the old fact-checking regime did). When Twitter’s early version of Community Notes, then called Birdwatch, debuted, the details of the mechanism were made public in a paper that acknowledged another important limitation: The algorithm “needs some cross-partisan agreement to function,” which may, at times, be impossible to find. If there is no consensus, there are no notes.

Musk himself has provided a good case study for this issue. A few Community Notes have vanished from Musk’s posts. It’s possible that he had them removed—at times, he has seemed to resent the power that X has given its users through the program, suggesting that the system is “being gamed” and chiding users for citing “legacy media”—but the disappearances could instead be an algorithmic issue. An influx of either Elon haters or Elon fans could ruin the consensus and the notes’ helpfulness ratings, leading them to disappear. (When I asked about this problem, Stoll told me, “We’re, as a company, 100 percent committed to and in love with Community Notes,” but he did not comment on what had happened to the notes removed from Musk’s posts.)

The early Birdwatch paper also noted that the system might get really, really good at moderating “trivial topics.” That is the tool’s core weakness and its core strength. Notes, because they are written and voted on by people with numerous niche interests and fixations, can appear on anything. While you’ll see them on something classically wrong and dangerous, such as conspiracy theories about Barack Obama’s birth certificate, you’ll also see them on things that are ridiculous and harmless, such as a cute video of a hedgehog. (The caption for a hedgehog video I saw last week suggested that a stumbling hedgehog was being “helped” across a street by a crow; the Community Note clarified that the crow was probably trying to kill it, and the original poster deleted the post.) At times, the disputes can be wildly annoying or pedantic and underscore just how severe a waste of your one life it is to be online at all. I laughed recently at an X post: “People really log on here to get upset at posts and spend their time writing entire community notes that amount to ‘katy perry isn’t an astronaut.’”

[Read: The perfect pop star for a dumb stunt]

The upside, though, is that when anything can be annotated, it feels like less of a big deal or a grand conspiracy when something is. Formal fact-checking programs can feel punitive and draconian, and they give people something to rally against; notes come from peers. This makes receiving one potentially more embarrassing than receiving a traditional fact-check as well; early research has shown that people are more likely to delete their misleading posts when they receive Community Notes.

The optimistic take on notes-type systems is that they make use of material that already exists and with which everyone is already acquainted. People already correct each other online all the time: On nearly any TikTok in which someone is saying something obviously wrong, the top comment will be from another person pointing this out. It becomes the top comment because other users “like” it, which bumps it up. I already instinctively look to the comment section whenever I hear something on TikTok and think, That can’t be true, right?

For better or worse, the idea of letting the crowd decide what needs correcting is a throwback to the era of internet forums, where “actually” culture got its start. But this era of content moderation will not last forever, just as the previous one didn’t. By outright saying that a cultural and political vibe, of sorts, inspired the change, Meta has already suggested as much. We live on the “actually” internet for now. Whenever the climate shifts—or whenever the heads of the platforms perceive it to shift—we’ll find ourselves someplace else.


This article has been updated to clarify that Meta is ending fact-checking operations only in the United States.

May 3, 2025  14:07:06

When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

So earlier this week, when members of a popular subreddit learned that their community had been infiltrated by undercover researchers posting AI-written comments and passing them off as human thoughts, the Redditors were predictably incensed. They called the experiment “violating,” “shameful,” “infuriating,” and “very disturbing.” As the backlash intensified, the researchers went silent, refusing to reveal their identity or answer questions about their methodology. The university that employs them has announced that it’s investigating. Meanwhile, Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

Joining the chorus of disapproval were fellow internet researchers, who condemned what they saw as a plainly unethical experiment. Amy Bruckman, a professor at the Georgia Institute of Technology who has studied online communities for more than two decades, told me the Reddit fiasco is “the worst internet-research ethics violation I have ever seen, no contest.” What’s more, she and others worry that the uproar could undermine the work of scholars who are using more conventional methods to study a crucial problem: how AI influences the way humans think and relate to one another.

The researchers, based at the University of Zurich, wanted to find out whether AI-generated responses could change people’s views. So they headed to the aptly named subreddit r/changemyview, in which users debate important societal issues, along with plenty of trivial topics, and award points to posts that talk them out of their original position. Over the course of four months, the researchers posted more than 1,000 AI-generated comments on pit bulls (is aggression the fault of the breed or the owner?), the housing crisis (is living with your parents the solution?), and DEI programs (were they destined to fail?). The AI commenters argued that browsing Reddit is a waste of time and that the “controlled demolition” 9/11 conspiracy theory has some merit. And as they offered their computer-generated opinions, they also shared their backstories. One claimed to be a trauma counselor; another described himself as a victim of statutory rape.

In one sense, the AI comments appear to have been rather effective. When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters, according to preliminary findings that the researchers shared with Reddit moderators and later made private. (This analysis, of course, assumes that no one else in the subreddit was using AI to hone their arguments.)

[Read: The man out to prove how dumb AI still is]

The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.

When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)

Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

How humans are likely to respond in such a scenario is an urgent issue and a worthy subject of academic research. In their preliminary results, the researchers concluded that AI arguments can be “highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.” (Because the researchers finally agreed this week not to publish a paper about the experiment, the accuracy of that verdict will probably never be fully assessed, which is its own sort of shame.) The prospect of having your mind changed by something that doesn’t have one is deeply unsettling. That persuasive superpower could also be employed for nefarious ends.

[Read: Chatbots are cheating on their benchmark tests]

Still, scientists don’t have to flout the norms of experimenting on human subjects in order to evaluate the threat. “The general finding that AI can be on the upper end of human persuasiveness—more persuasive than most humans—jibes with what laboratory experiments have found,” Christian Tarsney, a senior research fellow at the University of Texas at Austin, told me. In one recent laboratory experiment, participants who believed in conspiracy theories voluntarily chatted with an AI; after three exchanges, about a quarter of them lost faith in their previous beliefs. Another found that ChatGPT produced more persuasive disinformation than humans, and that participants who were asked to distinguish between real posts and those written by AI could not effectively do so.

Giovanni Spitale, the lead author of that study, also happens to be a scholar at the University of Zurich, and has been in touch with one of the researchers behind the Reddit AI experiment, who asked him not to reveal their identity. “We are receiving dozens of death threats,” the researcher wrote to him, in a message Spitale shared with me. “Please keep the secret for the safety of my family.”

One likely reason the backlash has been so strong is that, on a platform as close-knit as Reddit, betrayal cuts deep. “One of the pillars of that community is mutual trust,” Spitale told me; it’s part of the reason he opposes experimenting on Redditors without their knowledge. Several scholars I spoke with about this latest ethical quandary compared it—unfavorably—to Facebook’s infamous emotional-contagion study. For one week in 2012, Facebook altered users’ News Feed to see if viewing more or less positive content changed their posting habits. (It did, a little bit.) Casey Fiesler, an associate professor at the University of Colorado at Boulder who studies ethics and online communities, told me that the emotional-contagion study pales in comparison with what the Zurich researchers did. “People were upset about that but not in the way that this Reddit community is upset,” she told me. “This felt a lot more personal.”

[Read: AI executives promise cancer cures. Here’s the reality.]

The reaction probably also has to do with the unnerving notion that ChatGPT knows what buttons to push in our minds. It’s one thing to be fooled by some human Facebook researchers with dubious ethical standards, and another entirely to be duped by a cosplaying chatbot. I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

May 2, 2025  18:08:55

Updated at 1:02 p.m. ET on May 2, 2025

Darren Beattie, a senior official at the State Department, is concerned that his agency has abused its powers under previous Democratic administrations. To rectify that, he has decided to marshal the power of his office—in what his fellow State Department employees reportedly described as “unusual” and “improper” ways—to conduct a political witch hunt.

Yesterday, the MIT Technology Review revealed that, in March, Beattie made a request to gain sweeping access to communications between and about the State Department and journalists, disinformation researchers, and Donald Trump critics. Specifically, Beattie was targeting the Counter Foreign Information Manipulation and Interference (R/FIMI) hub, which the State Department shut down this year, and the Global Engagement Center (GEC), which was shut down in 2024—both of which focused on tracking foreign disinformation campaigns. Right-wing critics have accused these offices of engaging in censorship campaigns against conservatives, under the pretense of fighting fake news.

In response to these unproven allegations, Beattie—who had also served as a speechwriter in President Trump’s first administration, though he was fired in 2018 after CNN reported that he had attended a conference featuring prominent white nationalists—asked the State Department for all “staff emails and other records with or about roughly 60 individuals and organizations that track or write about foreign disinformation.” This request included correspondence with and about journalists, including The Atlantic’s Anne Applebaum, researchers at institutions such as the Stanford Internet Observatory, and political enemies of the Trump administration, such as the former U.S. cybersecurity official Christopher Krebs. Beattie also wanted all staff communications that mentioned a specific list of keywords (“incel,” “q-anon,” “Black Lives Matter,” “great replacement theory”) and Trump-world figures, like Robert F. Kennedy Jr. According to the report, he plans to publish any noteworthy internal communications he receives as part of a transparency campaign to win back public trust in government agencies. A spokesperson for the State Department declined to comment on the record when reached for this story.

[Read: The white nationalist now in charge of Trump’s public diplomacy]

Let’s be clear about what’s really happening here. A high-ranking member of the Trump administration is turning federal-government data—in this case, State Department communications—into a political weapon against perceived ideological enemies. The individuals Beattie has singled out (Bill Gates, the former FBI special agent Clint Watts, and Nina Jankowicz, a disinformation researcher who had a short and somewhat disastrous tenure at the Department of Homeland Security, to name a few) are familiar targets for the far right’s free-speech-defender crowd. The keywords Beattie has asked his department to search for (which also include “Alex Jones,” “Glenn Greenwald,” and “Pepe the Frog”) are ones that seem likely to produce a juicy piece of correspondence, but who knows? This is a fishing expedition—a government agency using a kind of grievance-politics Mad Libs in an effort to find anything that might make it appear as if vestiges of the “deep state” were biased against the right.

Beattie himself has reportedly told State Department officials that this campaign is an attempt to copy Elon Musk’s “Twitter Files” playbook. Shortly after purchasing Twitter, Musk picked a few ideologically aligned journalists to comb through some of the social network’s internal records in an attempt to document its supposedly long-standing liberal bias—and moreover, how political and government actors sought to interfere with content-moderation decisions. The result was a drawn-out, continuously teased social-media spectacle framed as a series of smoking guns. In reality, the revelations of the Twitter Files were much more complicated. Far from exposing blanket ideological bias, they showed that Twitter employees often agonized over how to apply their rules fairly in high-pressure, politicized edge cases.

The Twitter Files did show that the company made editorial decisions—for example, limiting reach on posts from several large accounts that had flouted Twitter’s rules, including those of the Stanford doctor (and current National Institutes of Health head) Jay Bhattacharya, the right-wing activists Dan Bongino and Charlie Kirk, and Chaya Raichik, who operates the Libs of TikTok account. Not exactly breaking news to anyone who’d paid attention. But they also showed that, in some cases, Twitter employees and even Democratic lawmakers were opposed to or pushed back on government requests to take down content. Representative Ro Khanna, for example, reached out to Twitter’s executive leadership to express his frustration that Twitter was suppressing speech during its handling of the New York Post’s story about Hunter Biden’s laptop.

Of course, none of this stopped Musk from portraying the project as a Pentagon Papers–esque exercise in transparency. Teasing the document dump back in December 2022, Musk argued that the series was proof of large-scale “violation of the Constitution’s First Amendment,” but later admitted he had not read most of the files. This was fitting: For the Twitter Files’ target audience, the archives and their broader contexts were of secondary importance. What mattered more was the mere existence of a dump of primary-source documents—a collection of once-private information that they could cast as nefarious in order to justify what they believed all along. As I wrote in 2022, Twitter had been quite public about its de-amplification policies for accounts that violated its rules, but the screenshots of internal company documents included in the Twitter Files were interpreted by already aggrieved influencers and posters as evidence of malfeasance. This gave them ammunition to portray themselves as victims of a sophisticated, coordinated censorship effort.

For many, the Twitter Files were just another ephemeral culture-war skirmish. But for the MAGA sympathetic and right-leaning free-speech-warrior crowds, the files remain a canonical, even radicalizing event. RFK Jr. has argued on prime-time television that “I don’t think we’d have free speech in this country if it wasn’t for Elon Musk” opening up Twitter’s archives. Similarly, individuals mentioned in the files, such as the researcher and Atlantic contributor RenĂ©e DiResta, have become objects of obsession to MAGA conspiracy theorists. (“One post on X credited the imaginary me with ‘brainwashing all of the local elections officials’ to facilitate the theft of the 2020 election from Donald Trump,” DiResta wrote last year.) Simply put, the Twitter Files may have largely been full of sensationalistic claims and old news, but the gambit worked: Their release fleshed out a conspiratorial cinematic universe for devotees to glom onto.

Beattie’s ploy at the State Department is an attempt to add new characters and updated lore to this universe. By casting a wide net, he can potentially gain access to a trove of information that he could present as evidence. Say the request dredges up an email between a journalist and the GEC that references Ukraine and Russia. Such communications could be innocuous—a request for comment or an on-background conversation providing context for a news story—but, to somebody unfamiliar with the intricacies of reporting, it could look sinister or be framed by an interested party as some kind of collusion. As Musk proved with the Twitter Files, Beattie and the State Department don’t even need to do the dirty work of sifting through or presenting the information themselves. They can outsource that work to a handpicked network of sympathetic individuals or news outlets—or, for maximum chaos, they can release the raw information to the public in the name of pure transparency and let them make their own connections and judgments.

Perhaps the records request could dredge up something concerning. It’s not out of the realm of possibility that there could be examples of bias or worse in a large tranche of private conversations between a government agency and outside organizations on a host of polarizing topics. But Beattie’s effort, at least as MIT Technology Review described it, bears none of the hallmarks of an earnest push for transparency. Instead, it reeks of cynical politicking and of using one’s privileged government position to access private information for political gain.

Publishing the internal correspondence of people the administration sees as critics and ideological opponents may very well have a chilling effect on journalists and institutions trying to hold government agencies to account. At the very least, it sends a message that the administration is willing to marshal the information stores it has been entrusted with by its citizens to harass or intimidate others. It is, in other words, an attempt to abuse government power in the precise way that Beattie and Republicans have accused Democrats of doing.

Whether Beattie is successful or not, we’ll likely see more of this from the current administration. The Twitter Files was a glimpse of the future of right-wing political warfare, and its success offered a template for providing red meat to an audience with an insatiable appetite for grievance. Now Musk, the man who created the playbook, is at the helm of a government-wide effort to collect and pool federal information across agencies. It is not unreasonable to imagine that one outcome of DOGE’s efforts is a Twitter Files–esque riffling through of the U.S. government’s internal comms.

Twitter Files–ing is a brute-force tactic, but one that has an authoritarian genius to it. The entire effort is billed as an exercise in building trust, but the opposite is true. It’s really about destroying trust in everyone except the select few who are currently in charge. Take over an institution and use the information of that institution against it, in order to show how corrupt it was. Suggest that only you can fix it. Rinse and repeat.

May 7, 2025  02:08:31

Updated at 10:04 p.m. ET on May 6, 2025

On the long list of reasons the United States could have lost World War II—the terribly effective Japanese surprise attack, an awful lack of military readiness, the relatively untrained troops—there is perhaps no area in which Americans were initially more outmatched than in armament. Americans had the M4 Sherman, a tank mass-produced by Detroit automakers. Germans had the formidable panzer, a line of tanks with nicknames such as Panther and Royal Tiger that repeatedly outgunned the Americans. In the 1940s, you couldn’t pick up a newspaper in the United States without reading about the panzer’s superior maneuverability and robust armor, qualities that made it especially hard for Americans to stop. “This doesn’t mean our tanks are bad,” The New York Times reported in January 1945. “They are the best in the world—next to the Germans.”

The panzer invoked Nazi might and aggression even decades after the war ended. Sylvia Plath’s “Daddy,” first published in 1965, contains this stanza: “Panzer-man, panzer-man, O You—— / Not God but a swastika / So black no sky could squeak through.” In the 2000s, popular video-game franchises—including Call of Duty, Battlefield, and Medal of Honor—released installments set during World War II that featured the panzer, etching it into the collective consciousness of a new generation of Americans.

So you can see why it’s noteworthy that Joseph Kent, Donald Trump’s nominee to head the National Counterterrorism Center, has a panzer tattoo. Last month, Mother Jones’s David Corn uncovered a shirtless picture of Kent from 2018, in which he has the word PANZER written down his left arm. Why? It’s not clear. Kent did not respond to multiple requests for comment, and the Trump administration hasn’t offered an explanation either. Olivia C. Coleman, a spokesperson for the Office of the Director of National Intelligence, directed me to a post on X in which Alexa Henning, a deputy chief of staff at the agency, calls Kent a “selfless patriot who loves this country and his family.”

Kent’s tattoo is all the more curious considering his background. A former member of the Army Special Forces who twice ran for Congress in Washington State, he has had repeated interactions with far-right extremists. During his unsuccessful 2022 congressional bid, Kent consulted with Nick Fuentes, the young white supremacist, and hired a campaign adviser who was a member of the Proud Boys, a violent far-right group. (Kent ultimately disavowed Fuentes, and his campaign said that the Proud Boys member, Graham Jorgensen, was a low-level worker.) The tattoo “could mean that he’s glorifying the Nazis. Or it could have a different context,” says Heidi Beirich, a co-founder of the Global Project Against Hate and Extremism, an organization that tracks right-wing extremism. Despite what the word evokes in history, panzer references are not common on the far right, Beirich told me. “I don’t think I’ve run across a panzer.”

Right-wing accounts on X have spread the claim that Kent has jĂ€ger—German for “hunter”—tattooed on his other arm. The two tattoos together would add up to “tank hunter.” The accounts claim that heavy-anti-armor-weapons crewman was one of Kent’s jobs in the Army. Kent was part of a battalion that, in part, did anti-tank work, but I couldn’t find evidence that he even has a jĂ€ger tattoo on his other arm. (Let me point out that Kent could resolve all of this by simply rolling up a sleeve.) After this story was published, The National Pulse, a right-wing website founded and operated by the Steve Bannon ally Raheem J. Kassam, released a photo featuring a man with a jĂ€ger tattoo on his right arm whom Kassam identifies as Kent. I reached out to both Kent and ODNI to inquire about the photo’s authenticity, but did not immediately hear back.

There aren’t many other explanations. The United States Army has an installation on a base outside Stuttgart, Germany, called Panzer Kaserne, but there’s no information to suggest that Kent was ever deployed there. All we’re left with is a strange tattoo that could be associated with Nazi Germany.

Of course, people frequently make strange tattoo choices. Some get ones they come to regret, and plenty have tattooed foreign words onto their body that they don’t fully understand. Yet it’s reasonable to wonder about the messages a person decides to make permanent on their body. Tattoos can connote in-group belonging or membership in a subculture. Olympians are known to get tattoos of the Olympic rings to commemorate competing in the games. Bikers famously love getting tattoos of skulls and flames. And then there are white supremacists, who have emblazoned themselves with swastikas, Norse runes, the SS logo, and other symbols. Why settle for a T-shirt or a flag when you can carve your values into your skin?

The Trump administration seems to strongly agree with the notion that tattoos are meaningful—but only when convenient for the president’s agenda. Consider Kilmar Abrego Garcia, the Maryland resident the Trump administration deported to El Salvador’s Terrorism Confinement Center, or CECOT, prison camp last month. Garcia was living with protected legal status in the U.S., and the government’s own lawyers have acknowledged that he was deported because of an “administrative error.” Trump loyalists have doubled down on Garcia’s detention, in part pointing to his tattoos. On Truth Social, Trump posted a picture of Garcia’s knuckle tattoos—a leaf, a smiley face, a cross, and a skull. The photo was altered with text above each symbol to spell out M-S-1-3, suggesting Garcia’s tattoos are a code for the gang MS-13. (Criminal-justice professors doubt that claim.) In an interview with ABC this week, Trump insisted it’s “as clear as you can be” that Garcia has MS-13 tattooed on his knuckles, even as ABC’s Terry Moran noted that the actual M-S-1-3 in the photo Trump has distributed is clearly Photoshopped in.

[Read: An ‘administrative error’ sends a Maryland father to a Salvadoran prison]

At least some of the hundreds of other immigrants who have been deported to CECOT appear to have been targeted simply for having the wrong tattoos. Andry José Hernåndez Romero, a makeup artist with no confirmed gang affiliation, was deported after his crown tattoos were reportedly mistaken for symbols associated with Tren de Aragua. Neri José Alvarado Borges, according to his family and friends, was deported for his tattoos, including an autism-acceptance symbol that he got in support of his younger brother.

Tom Homan, the White House’s “border czar,” has claimed that tattoos alone are not being used to label people as gang members. I reached out to the White House for comment, but received only another response from Coleman, the ODNI spokesperson, pointing to another post on X by Henning. This post mocks the fact that The Atlantic had contacted them to ask questions. In reference to Kent’s tattoo, Henning wrote, “Should we just reply that it’s photoshopped?” and then included a video clip of Trump’s ABC interview. To put this in plain terms: I asked the administration to address concerns that one of the president’s nominees has a tattoo associated with Nazis, and its response was to make a joke.

Trump’s secretary of defense, Pete Hegseth, has some questionable tattoos of his own. On the right side of his chest, Hegseth has a large Jerusalem Cross: It has even sides and looks like a plus symbol, with four smaller crosses in each quadrant. On his right arm, Hegseth has a large tattoo of Deus vult (Latin for “God wills it”), written in Gothic script. Also on Hegseth’s right arm is a tattoo of the Arabic word Kafir, which commonly translates to “infidel” or “unbeliever.”

Both the Jerusalem Cross and Deus vult date back to the Crusades, the bloody series of wars between Christians and Muslims during the Middle Ages. But modern extremists have co-opted them to invoke a new war on Muslims. Insurrectionists who mobbed the Capitol on January 6, 2021, flew a Deus vult flag and wore shirts that featured it and the Jerusalem Cross. The Trump administration defends Hegseth’s ink: In an email, Deputy Pentagon Press Secretary Kingsley Wilson said that Hegseth’s tattoos “depict Christian symbols and mottos used by Believers for centuries,” and that “anyone attempting to paint these symbols and mottos as ‘extreme’ is engaging in anti-Christian bigotry.”

[Read: A field guide to flags of the far right]

The Jerusalem Cross is still occasionally used in non-extremist religious contexts, Matthew D. Taylor, the senior Christian scholar at the Institute for Islamic, Christian, and Jewish Studies, in Baltimore, told me. “If that was the only tattoo he had, I’m not sure how I would interpret that,” he said. But Taylor finds Hegseth’s Deus vult tattoo to be noteworthy. “Deus vult is not a common symbol. It has very strong connotations,” he said. During the Crusades, Deus vult was the “phrase that sanctioned violence against Muslims.” Other members of the military have also tattooed Kafir on themselves, reportedly in an act of defiance against Islamic terrorism, especially those who have seen combat in the Middle East, as Hegseth has. An American soldier with a Kafir tattoo might be interpreted as a provocation—essentially, I’m an infidel. Come and get me. Taylor reads Hegseth’s Kafir tattoo as “a signal of aggression towards Islam and embracing Islamic aggression towards himself.” When Hegseth’s three tattoos are taken together, Taylor said, “it’s not hard to interpret what he’s trying to signal.”

Maybe both Hegseth and Kent have bad luck and got their tattoos without knowing what they might signal. Maybe they just don’t care about the possible darker implications. But this is the constant problem of trying to make sense of the signs from people in Trump’s orbit—the recurrent use of white supremacists’ favorite sequence of numbers, ambiguous (and sometimes unambiguous) Nazi salutes, and other dog-whistling. How much benefit of the doubt really should be given? At some point, there’s not a lot of room to interpret things any other way. As of 2024, Hegseth was a member of the Tennessee congregation of an Idaho-based church run by a Christian nationalist. He has appeared to express support for a relatively niche theocratic ideology that advocates for laws to be subordinate to the perspectives of Christian conservatism. Kent, in addition to associating with Fuentes during his first congressional campaign, was interviewed by the Nazi sympathizer Greyson Arnold. (Following the interview, a campaign spokesperson said that Kent was unaware of Arnold’s beliefs.)

Trump’s White House operates on inconsistency. High prices on consumer goods are bad, unless they are the result of the tariffs. Unelected bureaucrats must be excised from the government, unless they are Elon Musk and his team at DOGE. Free speech is a tenet of American values that is to be vehemently upheld, unless people say things that Donald Trump does not like. Tattoos matter. Except they also don’t. They are a sufficient admission of guilt—sufficient to disqualify you from due process, even—unless you are part of Trump’s team. If you’re on the losing side, there is no recourse. If you’re on the winning side, there are no consequences.


This article previously misstated the name of Alexa Henning, a deputy chief of staff at the Office of the Director of National Intelligence. It has been updated to include additional context about Joseph Kent’s military service.

April 29, 2025  15:31:16

In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day—heaping piles of machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research.

Chatbots learned from human writing. Now the influence may run in the other direction. Some people have hypothesized that the proliferation of generative-AI tools such as ChatGPT will seep into human communication, that the terse language we use when prompting a chatbot may lead us to dispose of any niceties or writerly flourishes when corresponding with friends and colleagues. But there are other possibilities. Jeremy Nguyen, a senior researcher at Swinburne University of Technology, in Australia, ran an experiment last year to see how exposure to AI-generated text might change the way people write. He and his colleagues asked 320 people to write a post advertising a sofa for sale on a secondhand marketplace. Afterward, the researchers showed the participants what ChatGPT had written when given the same prompt, and they asked the subjects to do the same task again. The responses changed dramatically.

“We didn’t say, ‘Hey, try to make it better, or more like GPT,’” Nguyen told me. Yet “more like GPT” is essentially what happened: After the participants saw the AI-generated text, they became more verbose, drafting 87 words on average versus 32.7 in the first round. The full results of the experiment are yet to be published or peer-reviewed, but it’s an intriguing finding. Text generators tend to write long, even when the prompt is curt. Might people be influenced by this style, rather than the language they use when typing to a chatbot?

[Read: The words that stop ChatGPT in its tracks]

AI-written text is baked into software that millions, if not billions, of people use every day. Even if you don’t use ChatGPT, Gemini, Claude, or any of the other popular text-generating tools, you will inevitably be on the receiving end of emails, documents, and marketing materials that have been compiled with their assistance. Gmail offers some users an integrated AI tool that starts drafting responses before any fingers hit the keys. Last year, Apple launched Apple Intelligence, which includes AI features on Macs, iPhones, and iPads such as writing assistance across apps and a “smart reply” function in the Mail app. Writing on the internet is now more likely than even a year or two ago to be a blended product—the result of a human using AI somewhere in the drafting or refining phase while making subtle tweaks themselves. “And so that might be a way for patterns to get laundered, in effect,” Emily M. Bender, a computational-linguistics professor at the University of Washington, told me.

Bender, a well-known critic of AI who helped coin the term stochastic parrots, does not use AI text generators on ethical grounds. “I’m not interested in reading something that nobody said,” she told me. The issue, of course, is that knowing if something was written by AI is becoming harder and harder. People are sensitive to patterns in language—you may have noticed yourself switching accents or using different words depending on whom you’re speaking to—but “what we do with those patterns depends a lot on how we perceive who’s saying them,” Bender told me. You might not be moved to emulate AI, but you could be more susceptible to picking up its linguistic quirks if they appear to come from a respected source. Interacting with ChatGPT is one thing; receiving a ChatGPT-influenced email from a highly esteemed colleague is another.

Language evolves constantly, and advances in technology have long shaped the way people communicate (lol, anyone?). These influences are not necessarily good or bad, although technological developments have often helped to make language and communication more accessible: Most people see the invention of the printing press as a welcome development from longhand writing. LLMs follow in this vein—it’s never been easier to turn your thoughts into flowing prose, regardless of your view on the quality of the output.

Recent technological advances have generally inspired or even demanded concision—many text messages and social-media posts have explicit character limits, for instance. As a general rule, language works on the principle that effort increases with length; five paragraphs require more work than two sentences for the sender to write and the receiver to read. But AI tools could upset this balance, Simon Kirby, a professor of language evolution at the University of Edinburgh, told me. “What happens when you have a machine where the cost of sending 10,000 words is the same or roughly the same as the cost of sending 1,000?” he said.

Kirby offered me a hypothetical: One person may give an AI tool a few bullet points to turn into a lengthy, professional-sounding email, only for the recipient to immediately use another tool to summarize the prose before reading. “Essentially, we’ve come up with a protocol where the machines are using flowery, formal language to send very long versions of very short, encapsulated messages that the humans are using,” he said.

[Read: The end of foreign-language education]

Beyond length, the linguists I spoke with speculated that the proliferation of AI writing could lead to a new form of language. “It’s pretty easy to imagine that English will become more standardized to whatever the standard of these language models is,” said Jill Walker Rettberg, a professor of digital culture at the University of Bergen’s Center for Digital Narrative, in Norway. This already happens to an extent with automated spelling- and grammar-checkers, which nudge users to adhere to whichever formulations they consider to be “correct.” As AI tools become more commonplace, people may see their style as the template to follow, resulting in a greater homogenization of language: Just yesterday, Cornell University presented a study suggesting that this is happening already. In the experiment, an AI writing tool “caused Indian participants to write more like Americans, thereby homogenizing writing toward Western styles and diminishing nuances that differentiate cultural expression,” the authors wrote.

Philip Seargeant, an applied linguist at the Open University in the U.K., told me that when students use AI tools inappropriately, their work reads as a little too perfect, “but in a very bland and uninteresting way.” Kirby says that AI text lacks the errors or awkwardness he’d expect in student essays and has an “uncanny valley” feel. “It does have that kind of feeling [that] there’s nothing behind the eyes,” he said.

Several linguists I spoke with suggested that the proliferation of AI-written or -mediated text may spark a countermovement. Perhaps some people will rebel, leaning into their own linguistic mannerisms in order to differentiate themselves. Bender imagines people turning off AI features or purposely choosing synonyms when prompted to use certain words, as an act of defiance. Kirby told me he already sees some of his students taking pride in not using AI writing tools. “There is a way in which that will become the kind of valorized way of writing,” he said. “It’ll be the real deal, and it’ll be obvious, because you’ll deliberately lean into your idiosyncrasies as a writer.” Rettberg compares it to choosing handmade goods over cheap, factory-made fare: Rather than losing value as a result of the AI wave, human writing may be appreciated even more, taking on an artisanal quality.

Ultimately, as language continues to evolve, AI tools will be both setting trends and playing catch-up. Trained on existing data, they’ll always be somewhat behind how people are using language today, even as they influence it. In fact, we may end up with AI tools evolving language separately from humans, Kirby said. Large language models are usually trained on text from the internet, and the more AI-generated text ends up permeating the web, the more these tools may end up being trained on their own output and embedding their own linguistic styles. For Kirby, this is fascinating. “We might find that these models start going off and taking the language that’s produced with them in a particular direction that may be different from the direction language would have evolved in if it had been passed from human to human,” he said. This, he believes, is what could set generative AI apart from other technological advances when it comes to impact on language: “We’ve inadvertently created something that could itself be culturally evolving.”

April 27, 2025  14:41:08

If you have tips about DOGE and its data collection, you can contact Ian and Charlie on Signal at @ibogost.47 and @cwarzel.92.


If you were tasked with building a panopticon, your design might look a lot like the information stores of the U.S. federal government—a collection of large, complex agencies, each making use of enormous volumes of data provided by or collected from citizens.

The federal government is a veritable cosmos of information, made up of constellations of databases: The IRS gathers comprehensive financial and employment information from every taxpayer; the Department of Labor maintains the National Farmworker Jobs Program (NFJP) system, which collects the personal information of many workers; the Department of Homeland Security amasses data about the movements of every person who travels by air commercially or crosses the nation’s borders; the Drug Enforcement Administration tracks license plates scanned on American roads. And that’s only a minuscule sampling. More obscure agencies, such as the recently gutted Consumer Financial Protection Bureau, keep records of corporate trade secrets, credit reports, mortgage information, and other sensitive data, including lists of people who have fallen on financial hardship.

A fragile combination of decades-old laws, norms, and jungly bureaucracy has so far prevented repositories such as these from assembling into a centralized American surveillance state. But that appears to be changing. Since Donald Trump’s second inauguration, Elon Musk and the Department of Government Efficiency have systematically gained access to sensitive data across the federal government, and in ways that people in several agencies have described to us as both dangerous and disturbing. Despite DOGE’s stated mission, little efficiency seems to have been achieved. Now a new phase of Trump’s project is under way: Not only are individual agencies being breached, but the information they hold is being pooled together. The question is Why? And what does the administration intend to do with it?

In March, President Trump issued an executive order aiming to eliminate the data silos that keep everything separate. Historically, much of the data collected by the government had been heavily compartmentalized and secured; even for those legally authorized to see sensitive data, requesting access for use by another government agency is typically a painful process that requires justifying what you need, why you need it, and proving that it is used for those purposes only. Not so under Trump.

This is a perilous moment. Rapid technological advances over the past two decades have made data shedding ubiquitous—whether it comes from the devices everyone carries or the platforms we use to communicate with the world. As a society, we produce unfathomable quantities of information, and that information is easier to collect than ever before.

[Illustration by Anson Chan: a person in front of an ATM with personal data surrounding them]

The government has tons of it, some of which is obvious—names, addresses, and census data—and much of which may surprise you. Consider, say, a limited tattoo database, created in 2014 by the National Institute of Standards and Technology, and distributed to multiple institutions for the purpose of training software systems to recognize common tattoos associated with gangs and criminal organizations. The FBI has its own “Next Generation Identification” biometric and criminal-history database program; the agency also has a facial-recognition apparatus capable of matching people against more than 640 million photos—a database made up of driver’s license and passport photos, as well as mug shots. The Social Security Administration keeps a master earnings file, which contains the “individual earnings histories for each of the 350+ million Social Security numbers that have been assigned to workers.” Other government databases contain secret whistleblower data. At the Department of Veterans Affairs, you’ll find granular mental-health information on former service members, including notes from therapy sessions, details about medication, and accounts of substance abuse. Government agencies including the IRS, the FBI, DHS, and the Department of Defense have all purchased cellphone-location data, and possibly collected them too, via secretive groups such as the National Geospatial-Intelligence Agency. That means the government has at least some ability to map or re-create the past everyday movements of some American citizens. This is hardly even a cursory list of what is publicly known.

Advancements in artificial intelligence promise to turn this unwieldy mass of data and metadata into something easily searchable, politically weaponizable, and maybe even profitable. DOGE is reportedly attempting to build a “master database” of immigrant data to aid in deportations; NIH Director Jay Bhattacharya has floated the possibility of an autism registry (though the administration quickly walked it back). America already has all the technology it needs to build a draconian surveillance society—the conditions for such a dystopia have been falling into place slowly over time, waiting for the right authoritarian to come along and use it to crack down on American privacy and freedom.

But what can an American authoritarian, or his private-sector accomplices, do with all the government’s data, both alone and combined with data from the private sector? To answer this question, we spoke with former government officials who have spent time in these systems and who know what information these agencies collect and how it is stored.

To a person, these experts are alarmed about the possibilities for harm, graft, and abuse. Today, they argued, Trump is targeting law firms, but DOGE data could allow him to target individual Americans at scale. For instance, they described how the government, aside from providing benefits, is also a debt collector on all kinds of federal loans. Those who struggle to repay, they said, could be punished beyond what’s possible now, by having professional licenses revoked or having their wages or bank accounts frozen.

Musk has long dreamed of an “everything app” that would combine banking, shopping, communication, and all other human affairs. Such a project would entail holding and connecting all the information those activities produce. Even if Musk were to step back from DOGE, he or his agents may still possess data they collected or gained access to in the organization’s ongoing federal-data heist. (Musk did not respond to emailed questions about this, nor any others we posed for this story.)

These data could also allow the government or, should they be shared, its private-sector allies to target big swaths of the population based on a supposed attribute or trait. Maybe you have information from background checks or health studies that allows you to punish people who have seen a therapist for mental illness. Or to terminate certain public benefits to anybody who has ever shown income above a particular threshold, claiming that they obviously don’t need public benefits because they once made a high salary. A pool of government data is especially powerful when combined with private-sector data, such as extremely comprehensive mobile-phone geolocation data. These actors could make inferences about actions, activities, or associates of almost anybody perceived as a government critic or dissident. These instances are hypothetical, but the government’s current use of combined data in service of deportations—and its refusal to offer credible evidence of wrongdoing for some of those deported—suggests that the administration is willing to use these data for its political aims.

Harrison Fields, a spokesperson for the White House, confirmed that DOGE is combining data that it has collected across agencies, but he did not respond to individual questions about which data it has or how it plans to safeguard citizens’ private information. “DOGE has been instrumental in enhancing data accuracy and streamlining internal processes across the federal government,” Fields told us in an emailed statement. “Through data sharing between agencies, departments are collaborating to identify fraud and prevent criminals from exploiting hardworking American taxpayers.”

For decades, government data have been both an asset and a liability, used and occasionally abused in service of its citizens or national security. Under Trump and DOGE, the proposition for the data’s use has been flipped. The sensitive and extensive collective store of information may still benefit some American citizens, but it is also being exploited to satisfy the whims and grievances of the president of the United States.


Trump and DOGE are not just undoing decades of privacy measures. They appear to be acting as if those measures were never written at all. Over and over, the federal experts we spoke with insisted that the very idea of connecting federal data is anathema. An employee in senior leadership at USAID told us that the systems operate on their own platforms with no interconnectivity by design. “There’s almost no data sharing between agencies,” said one former senior government technologist. That’s a good thing for privacy, but it makes it harder for agencies to work together for citizens’ benefit.

On occasions when sharing must happen, the Privacy Act of 1974 requires what’s called a Computer Matching Agreement, a written contract that establishes the terms of such sharing and protects personal information in the process. A CMA is “a real pain in the ass,” according to the official, just one of the ways the government discourages information swapping as a default mode of operation. According to the USAID employee, workers in one agency do not and cannot even hold badges that grant them access to another agency—in part to prevent them from having access to an outside location where they might happen upon and exfiltrate information. So you can understand why someone with a stated mission to improve government efficiency might train their attention on centralizing government data—but you can also understand why there are rigorous rules that prevent that from happening. (The Privacy Act was passed to curtail abuses of power such as those exhibited in the Watergate and COINTELPRO scandals, in which the government conducted illegal surveillance of its citizens.)

The former technologist, who worked for the Biden administration, described a system he had tried to help build at the General Services Administration that would provide agencies with income information in order to verify eligibility for various benefits, such as SNAP, Medicaid, and Pell Grants. A simple, basic service to verify income, available only to federal and state agencies that really needed it, seemed like it would be an easy success.

[Illustration by Anson Chan: a person in a window, surrounded by their personal location data]

It never happened. (The former federal technologist blamed “enormous legal obstacles,” including the Privacy Act itself, policies at the Office of Management and Budget, and various court rulings.) The IRS even maintains an API—a way for computers to talk to one another—built to give the banking industry a way to verify someone’s income, for example to underwrite a mortgage application. But using that service inside the government—even though it was made by the federal government—was forbidden. The best option for agencies who wanted to do this was to ask citizens to prove their eligibility, or to pay a private vendor such as Equifax, which can leverage the full power of data brokering and other commercial means of acquiring information, to confirm it.

Even without regulatory hurdles, intermingling data may not be as straightforward as it seems. “Data isn’t what you’d imagine,” Erie Meyer, a founder of the U.S. Digital Service and the chief technologist for multiple agencies, including the CFPB, told us. “Sometimes it’s hard-paper information. It’s a mess.” Just because a federal agency holds certain information in documents, files, or records doesn’t mean that information is easily accessed, retrieved, or used. Your tax returns contain lots of information, including the charities to which you might have contributed and the companies that might have paid you as an employee or contractor. But in their normal state—as fields in the various schedules of your tax return, say—those data are not designed to be easily isolated and queried as if they were posts on social media.

An American surveillance society that fully stitched together the data the government already possesses would require officials to upend the existing rules, policies, and laws that protect sensitive information about Americans.

To this end, DOGE has strong-armed its way into federal agencies; intimidated, steamrolled, and fired many of their workers; entered their IT systems; and accessed some unknown quantity of the data they store. DOGE removes the safeguards that have protected access controls, activity logs, and, of course, the information itself. Borrowing language from IT management, the senior USAID employee called DOGE a kind of permission structure for privacy abuse.

But the federal technologist added something else: “We worship at the altar of tech.” Many Americans have at least a grudging respect for the private tech industry, which has changed the world, and quickly—a sharp contrast to the careful, if slow-moving, government. Booting out the bureaucrats in favor of technologists may look to some like liberation from mediocrity, even if it may lead to repression.


Musk has said that his goal with DOGE is to serve his country. He says he wants to “end the tyranny of bureaucracy.” But around Washington, people are asking one another what he really wants with all those data. Keys to the federal dataverse could, for example, be extremely useful to a highly ambitious man who is aggressively trying to win the AI race.

We already know that Musk’s people have access to large swaths of information from federal agencies—what we don’t know is what they’ve copied, exfiltrated, or otherwise taken with them. In theory, this material, whether usable together or not, could be recombined with other identifying information from private companies for all kinds of purposes. There has been speculation already that it could be fed into third-party large language models to train them or make the information more usable (Musk’s xAI has its own model, Grok); outside firms could use their own technologies to make sense of disparate sets of data, as well. Such approaches, the federal workers told us, could make it easier to turn previously obfuscated information, such as the individual elements of a tax return, into something to be mined.

Tech companies already collect as much information as possible not because they know exactly what it’s good for, but because they believe and assume—correctly—that it can provide value for them. They can and do use the data to target advertising, segment customers, perform customer-behavior analysis, carry out predictive analytics or forecasting, optimize resources or supply chains, assess security or fraud risk, make real-time business decisions and, these days, train AI models. The central concept of the so-called Big Data era is that data are an asset; they can be licensed, sold, and combined with other data for further use. In this sense, DOGE is the logical end point of the Big Data movement.

Collecting and then assembling data in the industrial way—just to have them in case they might be useful—would represent a huge and disturbing shift for the government. So much so that the federal workers we spoke with struggled even to make sense of the idea. They insisted that the government has always tried to serve the people rather than exploit them. And yet, this reversal matches the Trump transactional ethos perfectly—turning How can we serve our fellow Americans? into What’s in it for us?

Us, in this case, isn’t even the government, let alone your fellow Americans. It’s Trump’s business concerns; the private-sector ones that have supplicated to him; the interests of his friends and allies, including Musk, and other loyalists who enter their orbits. Once the laws, rules, and other safeguards that have prevented federal data from commingling fall away—and many of them already have in practice—previously firewalled federal data can be combined with private data sets, such as those held by Trump allies or associates, tech companies that want to get on the administration’s good side, or anyone else the administration can coerce.

[Illustration by Anson Chan: a person in a hospital wheelchair, surrounded by their personal health data]

Many Americans have felt resigned to the Big Data accrual of their information for years already. (Plenty of others simply don’t understand the scope of what they’ve given up, or don’t care.) Data breaches became banal—including at Equifax and even inside the government at the Office of Personnel Management. Some private firms, such as Palantir, already hold lucrative government data-intelligence contracts. As Wired recently reported, ICE cannot track “self-deportations” in near-real time—but Palantir can. Lisa Gordon, Palantir’s head of global communications, told us that the company does not “own, collect, sell or provide any data to our customers—government or commercial,” and that clients are ultimately in control of their information. However, she also added that Palantir “is accredited to secure a customer’s data to the highest standards of data privacy and classification.” Theoretically, even if federal data are stored by a third-party contractor, they are protected legally and contractually. But such guarantees might no longer matter if the government deems its own privacy laws irrelevant. Public data sets could become a gold mine if sold to private parties, though there is no evidence this is taking place.

The thought that the government would centralize or even give away citizen data for private use is scandalous. But it’s also, in a way, expected. The Vietnam War and Watergate gave Americans reasons to believe that the government can’t be trusted. The Cold War issued a constant, decades-long threat of annihilation and the necessary surveillance to avoid it. The War on Terror extended the logic into the 21st century. Optical, recording, and then computer technologies arose, offering new ways to watch the public. During the 2010s, Edward Snowden’s NSA surveillance leaks took place, and the Facebook–Cambridge Analytica scandal was brewing. By then, the 20th-century assumption that U.S. intelligence agencies were running mind-control experiments, infiltrating and disrupting civil-rights groups, or carrying out surreptitious missions at home as they did abroad had been fully internalized, and fused with the suspicion that Google, Facebook, Amazon, and Walmart were—in their own ways—following suit.


Earlier this month, The Washington Post reported that government agencies are combining data that are normally siloed so that identifying undocumented immigrants would be easier. At the Department of Labor, DOGE has gained access to sensitive data about immigrants and farmworkers, Wired reported. This and other reporting shows that DOGE seems to be particularly interested in finding ways to “cross-reference datasets and leverage access to sensitive SSA systems to effectively cut immigrants off from participating in the economy,” according to Wired.

A worst-case scenario is easy to imagine. Some of this information could be useful simply for blackmail—medical diagnoses and notes, federal taxes paid, cancellation of debt. In a kleptocracy, such data could be used against members of Congress and governors, or anyone disfavored by the state. Think of it as a domesticated, systematized version of kompromat—like opposition research on steroids: Hey, Wisconsin is considering legislation that would be harmful to us. There are four legislators on the fence. Query the database; tell me what we’ve got on them.

Say you want to arrest or detain somebody—activists, journalists, anyone seen as a political enemy—even if just to intimidate them. An endless data set is an excellent way to find some retroactive justification. Meyer told us that the CFPB keeps detailed data on consumer complaints—which could also double as a fantastic list of the citizens already successfully targeted for scams, or people whose financial problems could help bad actors compromise them or recruit them for dirty work. Similarly, FTC, SEC, or CFPB data, which include subpoenaed trade secrets gathered during long investigations, could give motivated actors the ability to conduct insider trading at a previously unthinkable scale. The world’s richest man may now have access to that information.

An authoritarian, surveillance-control state could be supercharged by mating exfiltrated, cleaned, and correlated government information with data from private stores, corporations that share their own data willingly or by force, data brokers, or other sources. What kind of actions could the government perform if it could combine, say, license plates seen at specific locations, airline passenger records, purchase histories from supermarket or drug-store loyalty cards, health-care patient records, DNS-lookup histories showing a person’s online activities, and tax-return data?

It could, for example, target for harassment people who deducted charitable contributions to the Palestine Children’s Relief Fund, drove or parked near mosques, and bought Halal-certified shampoos. It could intimidate citizens who reported income from Trump-antagonistic competitors or visited queer pornography websites. It could identify people who have traveled to Ukraine and also rely on prescription insulin, and then lean on insurance companies to deny their claims. These examples are all speculative and hypothetical, but they help demonstrate why Americans should care deeply about how the government intends to manage their private data.
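
To make the mechanism concrete, consider a deliberately toy sketch in Python. Every record in it is invented, and real systems would be vastly larger and messier, but it shows that “combining data sets” is, at bottom, nothing more exotic than a join on a shared identifier.

```python
# Hypothetical illustration only: a handful of invented records standing in
# for the kinds of databases described above.
tax_returns = {
    "ID-001": {"charity": "Palestine Children's Relief Fund"},
    "ID-002": {"charity": "Local Food Bank"},
}
plate_scans = {
    "ID-001": ["parked near a mosque, 2025-03-14"],
    "ID-002": ["grocery store, 2025-03-15"],
}
loyalty_purchases = {
    "ID-001": ["halal-certified shampoo"],
    "ID-002": ["sunscreen"],
}

# "Cross-referencing" is just a join on the shared identifier: flag anyone
# who matches a chosen attribute in all three data sets.
flagged = [
    person
    for person, record in tax_returns.items()
    if "Palestine" in record["charity"]
    and any("mosque" in scan for scan in plate_scans.get(person, []))
    and any("halal" in item for item in loyalty_purchases.get(person, []))
]
print(flagged)  # ['ID-001']
```

The point of the sketch is its triviality: once the identifiers line up across databases, the technically hard part of this kind of targeting is already done.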

A future, American version of the Chinese panopticon is not unimaginable, either: If the government could stop protests or dissent from happening in the first place by carrying out occasional crackdowns and arrests using available data, it could create a chilling effect. But even worse than a mirror of this particular flavor of authoritarianism is the possibility that it might never even need to be well built or accurate. These systems do not need to work properly to cause harm. Poorly combined data or hasty analysis by AI systems could upend the lives of people the government didn’t even mean to target.

“Americans are required to give lots of sensitive data to the government—like information about someone’s divorce to ensure child support is paid, or detailed records about their disability to receive Social Security Disability Insurance payments,” Sarah Esty, a former senior adviser for technology and delivery at the U.S. Department of Health and Human Services, told us. “They have done so based on faith that the government will protect that data, and confidence that only the people who are authorized and absolutely need the information to deliver the services will have access. If those safeguards are violated, even once, people will lose trust in the government, eroding its ability to run those services forever.” All of us have left huge, prominent data trails across the government and the private sector. Soon, and perhaps already, someone may pick up the scent.

April 25, 2025  19:59:53

To hear Silicon Valley tell it, the end of disease is well on its way. Not because of oncology research or some solution to America’s ongoing doctor shortage, but because of (what else?) advances in generative AI.

Demis Hassabis, a Nobel laureate for his AI research and the CEO of Google DeepMind, said on Sunday that he hopes that AI will be able to solve important scientific problems and help “cure all disease” within five to 10 years. Earlier this month, OpenAI released new models and touted their ability to “generate and critically evaluate novel hypotheses” in biology, among other disciplines. (Previously, OpenAI CEO Sam Altman had told President Donald Trump, “We will see diseases get cured at an unprecedented rate” thanks to AI.) Dario Amodei, a co-founder of Anthropic, wrote last fall that he expects AI to bring about the “elimination of most cancer.”

These are all executives marketing their products, obviously, but is there even a kernel of possibility in these predictions? If generative AI could contribute in the slightest to such discoveries—as has been promised since the start of the AI boom—where would the technology and scientists using it even begin?

I’ve spent recent weeks speaking with scientists and executives at universities, major companies, and research institutions—including Pfizer, Moderna, and the Memorial Sloan Kettering Cancer Center—in an attempt to understand what the technology can (and cannot) do to advance their work. There’s certainly a lot of hyperbole coming from the AI companies: Even if, tomorrow, an OpenAI or Google model proposed a drug that appeared credibly able to cure a single type of cancer, the medicine would require years of laboratory and human trials to prove its safety and efficacy in a real-world environment, which AI programs are nowhere near able to simulate. “There are traffic signs” for drug development, “and they are there for a good reason,” Alex Zhavoronkov, the CEO of Insilico Medicine, a biotech company pioneering AI-driven drug design, told me.

Yet Insilico has also used AI to help design multiple drugs that have successfully cleared early trials. The AI models that made Hassabis a Nobel laureate, known as AlphaFold, are widely used by pharmaceutical and biomedical researchers. Generative AI, I’ve learned, has much to contribute to science, but its applications are unlikely to be as wide-ranging as its creators like to suggest—more akin to a faster engine than a self-driving car.


There are broadly two sorts of generative AI that are currently contributing to scientific and mathematical discovery. The first are essentially chatbots: tools that search, analyze, and synthesize scientific literature to produce useful reports. The dream is to eventually be able to ask such a program, in plain language, about a rare disease or unproven theorem and receive transformative insights. We’re not there, and may never be. But even the bots that exist today, such as OpenAI’s and Google’s separate “Deep Research” products, have their uses. “Scientists use the tools that are out there for information processing and summarization,” Rafael Gómez-Bombarelli, a chemist at MIT who applies AI to material design, told me. Instead of Googling for and reading 10 papers, you can ask Deep Research. “Everybody does that; that’s an established win,” he said.

Good scientists know to check the AI’s work. Andrea Califano, a computational biologist at Columbia who studies cancer, told me he sought assistance from ChatGPT and DeepSeek while working on a recent manuscript, which is now a normal practice for him. But this time, “they came up with an amazing list with references, people, authors on the paper, publications, et cetera—and not one of them existed,” Califano said. OpenAI has found that its most advanced models, o3 and o4-mini, are actually two to three times more likely to confidently assert falsehoods, or “hallucinate,” than their predecessor, o1. (This was expected for o4-mini, because it was trained on less data, but OpenAI wrote in a technical report that “more research is needed to understand” why o3 hallucinates at such a high rate.) Even when AI research agents work perfectly, their strength is summary, not novelty. “What I don’t think has worked” for these bots, Gómez-Bombarelli said, “is true, new reasoning for ideas.” These programs, in some sense, can fail doubly: Trained to synthesize existing data and ideas, they invent; asked to invent, they struggle. (The Atlantic has a corporate partnership with OpenAI.)

[Read: The man out to prove how dumb AI still is]

To help temper—and harness—the tendency to hallucinate, newer AI systems are being positioned as collaborative tools that can help judge ideas. One such system, announced by Google researchers in February, is called the “AI co-scientist”: a series of AI language models fine-tuned to research a problem, offer hypotheses, and evaluate them in a way somewhat analogous to how a team of human scientists would, Vivek Natarajan, an AI researcher at Google and a lead author on the paper presenting the AI co-scientist, told me. Similar to how chess-playing AI programs improved by playing against themselves, Natarajan said, the co-scientist comes up with hypotheses and then uses a “tournament of ideas” to rank which are of the highest quality. His hope is to give human scientists “superpowers,” or at least a tool to more rapidly ideate and experiment.

The usefulness of those rankings could require months or years to verify, and the AI co-scientist, which is still being evaluated by human scientists, is for now limited to biomedical research. But some of its outputs have already shown promise. Tiago Costa, an infectious-disease researcher at Imperial College London, told me about a recent test he ran with the AI co-scientist. Costa and his team had made a breakthrough on an unsolved question about bacterial evolution, and they had not yet published the findings—so it could not be in the AI co-scientist’s training data. He wondered whether Google’s system could arrive at the breakthrough itself. Costa and his collaborators provided the AI co-scientist with a brief summary of the issue, some relevant citations, and the central question they had sought to answer. After running for two days, the system returned five relevant and testable hypotheses—and the top-ranked one matched the human team’s key experimental results. The AI appeared to have proposed the same genuine discovery that they had made.

The system developed its top hypothesis with a simple rationale, drawing a link to another research area and coming to a conclusion the human team had taken years to arrive at. The humans had been “biased” by long-held assumptions about this particular phenomenon, José Penadés, a microbiologist at ICL who co-led the research with Costa, told me. But the AI co-scientist, without such tunnel vision, had found the idea by drawing straightforward research connections. If they’d had this tool and hypothesis five years ago, he said, the research would have proceeded significantly faster. “It’s quite frustrating for me to realize it was a very simple answer,” Penadés said. The system did not concoct a new paradigm or unheard-of notion—it just efficiently considered a large amount of information, which turned out to be good enough. With human scientists having already produced, and continuously producing, tremendous amounts of knowledge, perhaps the most useful AI will not automate that ability so much as complement it.

The second type of scientific AI aims, in a sense, to speak the language of biology. AlphaFold and similar programs are trained not on internet text but on experimental data, such as the three-dimensional structure of proteins and gene expression. These types of models quickly apply patterns drawn from more data than even a large team of human researchers could analyze in a lifetime. More traditional machine-learning algorithms have, of course, been used in this way for a long time, but generative AI could supercharge these tools, allowing scientists to find ways to repurpose an older drug for a different disease, or identify promising new receptors in the body to target with a therapy, to name two examples. These tools could substantially increase both “time efficiency and probability of success,” Sriram Krishnaswami, the head of scientific affairs at Pfizer Oncology, told me. For instance, Pfizer has used an internal AI tool to identify two such targets, currently being tested, that might help treat breast and prostate cancer.

Similarly, generative-AI tools can contribute to drug design by helping scientists more efficiently balance various molecular traits, side effects, or other factors before going to a lab or trial. The number of configurations and interactions for any possible drug is profoundly large: There are 10⁶³² sequences of mRNA that could produce the spike protein used in COVID vaccines, Wade Davis, Moderna’s head of digital for business, including the AI-product team, told me. That’s hundreds of orders of magnitude more than the number of atoms in the observable universe. Generative AI could help substantially reduce the number of sequences worth exploring.
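
That 10⁶³² figure follows from simple combinatorics: most amino acids can be encoded by several synonymous codons, so the number of mRNA sequences that yield the same protein is the product of those choices at every position. A rough back-of-the-envelope check in Python, assuming a spike protein of roughly 1,273 amino acids with a uniform composition (both simplifications of mine, not Moderna’s actual calculation), lands in the same ballpark.

```python
import math

# Degeneracy of the standard genetic code: how many codons encode each amino acid.
codon_counts = {
    "Leu": 6, "Ser": 6, "Arg": 6,
    "Ala": 4, "Gly": 4, "Pro": 4, "Thr": 4, "Val": 4,
    "Ile": 3,
    "Asn": 2, "Asp": 2, "Cys": 2, "Gln": 2, "Glu": 2,
    "His": 2, "Lys": 2, "Phe": 2, "Tyr": 2,
    "Met": 1, "Trp": 1,
}

# Simplifying assumption: residues drawn uniformly from the 20 amino acids,
# so use the average degeneracy; the real spike sequence shifts this somewhat.
avg_degeneracy = sum(codon_counts.values()) / len(codon_counts)  # about 3.05
spike_length = 1273  # approximate length of the spike protein, in amino acids

orders_of_magnitude = spike_length * math.log10(avg_degeneracy)
print(f"~10^{orders_of_magnitude:.0f} synonymous mRNA sequences")  # ~10^617
```

The exact exponent depends on the protein’s actual amino-acid composition, which is why Moderna’s number is somewhat higher, but either way the search space dwarfs anything that could be explored exhaustively.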

“Possibly there will never be a drug which is ‘discovered’ through AI,” Pratyush Tiwary, a chemical physicist at the University of Maryland who uses AI methods, told me. “There are good companies that are working on it, but what AI will do is to help reduce the search space”—to reduce the number of possibilities scientists need to investigate on their own. These AI models are to biologists like a graphing calculator and drafting software are to an engineer: You can ideate faster, but you still have to build a bridge and confirm that it won’t crumble before driving across it.


The ultimate achievement of AI, then, may just be to drastically improve scientific efficiency—not unlike chatbots already used in any number of normal office jobs. When considering “the whole drug-development life cycle, how do we compress time?” Anaeze Offodile II, the chief strategy officer at MSK, told me. AI technologies could shave years off of that life cycle, though still more years would remain. Offodile imagined a reduction “from 20 years to maybe 15 years,” and Zhavoronkov, of Insilico, said that AI could “help you cut maybe three years” off the total process and increase the probability of success.

There are, of course, substantial limitations to these biological models’ capabilities. For instance, though generative AI has been very successful in determining protein structure, similar programs frequently suggest small molecule structures that cannot actually be synthesized, Gómez-Bombarelli said. Perhaps the biggest bottleneck to using generative AI to revolutionize the life sciences—making useful predictions about not just the relatively constrained domain of how a protein will fold or bind to a specific receptor, but also the complex cascade of signals within and between cells across the body—is a scarcity of high-quality training data gathered from relevant biological experiments. “The most important thing is not to design the best algorithm,” Califano said. “The most important thing is to ask the right question.” The machines need knowledge to begin with that they cannot, at least for the foreseeable future, generate by themselves.

But perhaps they can with human collaborators. Gómez-Bombarelli is the chief science officer of materials at Lila Sciences, a start-up that has built a lab with equipment that can be directed by a combination of human scientists and generative AI, allowing models to test and refine hypotheses in a loop. Insilico has a similar robotic lab in China, and Califano is part of a global effort led by the Chan Zuckerberg Initiative to build an AI “virtual cell” that can simulate any number of human biological processes. Generating “novel” ideas is not really the main issue. “Hypotheses are cheap,” Gómez-Bombarelli said. But “evaluating hypotheses costs millions of dollars.”

[Read: A virtual cell is a “holy grail” of science. It’s getting closer.]

Throwing data into a box and shaking it has yielded incredible results in processing human language, but that won’t be enough to treat disease. Humans designing science-boosting AI models have to understand the problem, ask appropriate questions, and curate relevant data, then experimentally verify or refute any resultant AI system’s outputs. The way to build AI for science, in other words, is to do some science.

April 25, 2025  19:58:36

It’s a rare thing to shoot yourself in the foot and win a marathon. For years, Elon Musk has managed to do something like that with Tesla, achieving monumental success in spite of a series of self-inflicted disasters. There was the time he heavily promoted the company’s automated factory, only to later admit that its “crazy, complex network of conveyor belts” had thrown production of the Model 3 off track; and the time a tweet led him to be sued for fraud by the Securities and Exchange Commission; and the time he said that the Tesla team had “dug our own grave” with the massively delayed and overhyped Cybertruck. Tesla is nonetheless the most valuable car company in the world by a wide margin.

But luck runs out. Yesterday evening, Tesla reported first-quarter earnings for 2025, and they were abysmal: Profits dropped 71 percent from the same time last year. Musk sounded bitter on the call with investors that followed, blaming the company’s misfortune on protesters who have raged at Tesla dealerships around the world over his role running DOGE and his ardent support of far-right politicians. “The protests that you’ll see out there, they’re very organized. They’re paid for,” he said, without evidence.

Then he pivoted. Although Musk described DOGE as “critical work,” he said that his “time allocation” there “will drop significantly” next month, down to just one or two days a week. He’s taking a big step back from politics and returning the bulk of his attention to Tesla, as even his most enthusiastic supporters have begged him to do. (Tesla did not immediately return a request for comment.)

[Read: The Tesla revolt]

One bad quarter won’t doom Tesla, but it’s unclear how, exactly, the company can move forward from here. Arguably, its biggest and most immediate problem is that electric-vehicle fans in America, who tend to lean left politically, do not want to buy Musk’s cars anymore. The so-called Tesla Takedown protests have given people who feel helpless and angry about President Donald Trump’s policies a tangible place to direct their anger. Because Musk was also the Trump campaign’s biggest financier, those protesters saw a Tesla boycott as one of the best ways to hit back. The fact that these demonstrations were the first thing Musk brought up on the earnings call speaks volumes about how rattled he must be; Tesla purchases have been down considerably this year in the U.S., even as EV sales keep rising.

And while some people in Europe may believe they do not have much cause to care about DOGE, they do care that Musk has been promoting far-right political actors, most notably Germany’s Alternative for Germany party. That seems to be having a palpable impact on Tesla’s sales; they’ve been tanking by double digits across Europe.

Buyers are turning to other car brands for their electric-powered driving needs, and those brands are happy to take their business. Tesla may have effectively created the modern EV sector, but the competition is catching up. In the U.S. alone, several car companies now offer electric options with more range, better features, and lower prices than Tesla. A long-awaited cheaper new Tesla could bring in more buyers, but there’s been little fanfare around it, perhaps because Musk is preoccupied with autonomous taxis and self-driving cars; a new “robotaxi” service is supposedly launching in June, in Austin. Yet any self-driving-technology investment depends on Tesla’s ability to sell cars right now to finance those dreams, and that’s where Tesla is likely to continue to have trouble.

[Read: The great Tesla sell-off]

Finally, there’s the bigger problem of China. Musk’s company effectively showed that country how to make modern EVs, and although Teslas still sell well enough there, Musk is up against dozens of new Tesla-like companies that have taken his ideas and run wild. Electric cars in China can be had with more advanced features than what Tesla offers, faster charging times, and more advanced approaches to automated driving. (Case in point: I am writing this story in Shanghai, from the passenger’s seat of an EV that can swap its depleted battery for a fresh one in mere minutes.)

At most companies, it’d be long past time to show the CEO the door. But Tesla’s stock price is inextricably linked to Musk and his onetime image as Silicon Valley’s greatest living genius. Even if Musk were to move on, it’s unclear whether Tesla as a brand could recover, Robby DeGraff, an analyst at the research firm AutoPacific, told me. “I’m genuinely not convinced removing him would be enough,” DeGraff said. “I do believe the potential is there for the brand to steer itself around with exciting, quality, innovative products. But there’s a colossal amount of repair work that needs to be done behind the scenes first.”

Unfortunately for Tesla, the great disrupter of the automotive industry is beginning to feel a lot like a “legacy” car company, struggling to figure out what’s next and getting lapped by newcomers. The competition has the advantage of not being inextricably tied to a boss who’s made the brand so toxic that people would rather go to his dealerships to wave angry signs than to buy cars. If Tesla’s future rests on left-leaning EV fans forgiving Musk for backing Trump, boosting the AfD party in Germany, and gleefully putting hundreds of thousands of federal employees out of work, then Musk may find himself longing for the days when his biggest problem was building a wild-looking stainless-steel truck.

April 23, 2025  14:27:12

Updated at 10:26 a.m. ET on April 23, 2025

Everywhere I look on social media, disembodied heads float in front of legal documents, narrating them line by line. Sometimes they linger on a specific sentence. Mostly they just read and read.

One content creator, who posts videos under the username I’m Not a Lawyer But, recently made a seven-minute TikTok in which she highlighted the important sentences from Drake’s 81-page defamation complaint against Universal Music Group. Another described herself in a recent video as “literally reading through the receipts of Justin Baldoni’s 179-page lawsuit,” referring to one stage of a complicated legal battle between Baldoni and his It Ends With Us co-star, Blake Lively, which is the hot legal case of the moment. The threads of this conflict are too knotted for me to fully untangle here, but the dispute began in December with Lively accusing Baldoni of inappropriate on-set behavior and of a secret social-media campaign against her. It became chaotic—and ripe for play-by-play commentary—in February when Baldoni, who has denied Lively’s allegations, launched a website with the URL thelawsuit.info to tell his side of the story.

The creators I’m seeing have loyal, long-term audiences and sell T-shirts and water bottles emblazoned with obscure references. They go by names such as Lawyer You Know and Legal Bytes (“Explaining the law one bite at a time!”) and sometimes appeal to expertise, usually by proving that they are actual attorneys. For some, though, their bona fides are looser: “I’m not an attorney, but I was raised by attorneys,” one creator said in a recent video.

The popularity of this material—a kind of lawyerly ASMR—has surprised even some of the people who make it. “It seems odd to us,” Stewart Albertson, one half of the podcast Ask 2 Lawyers, told me. He and his co-host, Keith Davidson, in fact are lawyers, and sometimes get 100,000 views on lengthy videos in which they go through a legal motion line by line. They’ve asked the audience if they should go faster and skip over some things. The commenters say no. They love monotony and minutiae. “People talk about, ‘Oh, I could go to sleep to these guys,’” Albertson told me. These are words of affection. He and Davidson know that because the commenters have also asked them to make Ask 2 Lawyers merch (specifically, they would like coffee mugs that say “12(b)(6)” on them, in reference to a type of legal motion filed by Blake Lively).

Albertson and Davidson spent more than a decade making marketing videos explaining trust and estate law, their firm’s specialty. Now they mostly make what they call educational content in which they explain high-profile legal disputes. They started with a series on a dramatic saga involving Tom Girardi, the ex of one of the women on Bravo’s Real Housewives of Beverly Hills. (Girardi was famous for his heroism in the Erin Brockovich case; he is now infamous for having been convicted of embezzlement and wire fraud, as well as for the way his criminal activity affected the fascinating women of RHOBH.) Although the topics are salacious, the two lawyers’ videos are, with all due respect, breathtakingly boring. “You know, we’re kind of bringing calm to chaos,” Albertson said. “Maybe that’s what speaks to people.”

[Read: What the JFK file dump actually revealed]

That’s part of it, but it’s also the simple allure of stacks of papers. Markos Bitsakakis, a 25-year-old TikTok creator from Toronto who also runs an herbal-honey company, sometimes gets 1 million likes on a video in which he flips through huge dossiers the entire time he is talking. (He has so far published 12 installments of a series titled “The Downfall of Blake Lively & Ryan Reynolds.”) He’ll explain to the viewers that he’s just spent nine hours with one document, or that he’s recording at two in the morning because he has been reading for so long. His followers joke that a printer must, as the saying goes, hate to see him coming. “I have like a million files,” he told me. “Mentally, I’m 45 years old,” he added, to explain why he prefers hard copies to PDFs. The way that he dramatizes his work situates him in an online tradition of romanticizing studying and research (especially when they’re done alone). “Lucky for you guys I never travel anywhere without my files,” he said at the start of a video he recorded while on a trip.  

Celebrity lawsuits have always been followed in detail by tabloids and gossip bloggers, and our reality-TV culture has been fascinated for some time with the idea of “receipts”—proof of malfeasance, often in the form of text messages or screenshots. But this is newer. Amateur legal analysis is now a whole category of content creation, and thick, formal documents are the influencers’ bread and butter.

These creators often present a very internet-age populist message alongside their analysis—many of the videos allow for the possibility that anyone can become an expert simply by having the commitment to read and keep reading the things that they are able to access freely online. Another of their stated commitments is to the notion of transparency, which helps explain why many of the same creators have expressed an interest in the National Archives’ recent dump of files pertaining to the John F. Kennedy assassination. Of course, some of the draw is gossip. But to a significant degree, I think the draw really is files.

Files are never-ending stories—or at least they can feel that way when a case drags on, providing a new flurry of paperwork week after week. Katy Hoffman, a 32-year-old in Kansas City, follows CourtListener and PacerMonitor for updates on the Baldoni-Lively case and told me that this is effectively her unwinding ritual. Instead of watching TV or scrolling Instagram at night, she’ll read whatever is new. “I try to maintain a good balance,” she said. That last hour of the day that everyone spends doing something pointless is the one she spends on this, she told me. (She also makes her own videos sometimes, though her audience is quite small.)

Similarly, Julie Urquhart, a 49-year-old teacher from New Brunswick, Canada, told me that she spends much of her free time reading court documents and then making short TikTok videos about them. “I’ve read everything you can on this case,” she told me. “All the lawsuits, multiple times.” She loved working at a radio station in college, so this is a hobby that can satisfy the same impulse to research and then broadcast, even if to a tiny audience. As with many other creators, Urquhart has recently focused on the Baldoni lawsuit, and it has caused her some grief: She takes Lively’s side, which she says has made her videos less popular and led people to be furious with her in the comments.

Here is where content about files becomes less fun. Particularly if you look at the comment sections, you’ll see a lot of vitriol against Lively—visible in much the same way as was vitriol against Amber Heard during the Johnny Depp trial or Evan Rachel Wood during her dispute with Marilyn Manson. Most of the creators I spoke with insisted that misogyny is not a factor in the success of their videos or in their own presentation of the facts, but this is not totally convincing. In 2020, I wrote about the rise of conspiracy theories about celebrities allegedly faking their pregnancies, which were transparently the product of resentment toward famous people and other elites, women especially, and I see quite a bit of that here as well. Commenters often express that they are tired of being “manipulated” by such people.

[Read: How a fake baby is born]

This is not to say that content about the Baldoni-Lively case is inherently toxic. In fact, it’s likely that these lawsuit influencers have had success with it because it’s fairly middle-of-the-road: mysterious, but not acutely morbid or upsetting like true crime can be. As Bitsakakis put it, the topic is “dramatic and exciting and salacious, but it isn’t necessarily as serious as nuclear war.” He thinks that he’s been rewarded by the TikTok algorithm for hitting that sweet spot. There are other legal topics he’d like to “investigate,” such as the Luigi Mangione case and the Sean “Diddy” Combs allegations, but those things may be just a little too dark to be pushed out into the main feed by the powerful recommendation engine. (For the same reason, many TikTok creators reference Blake Lively’s claims by saying “SH” rather than “sexual harassment.”)

The broad interest in this case—and its many files—has made for some strange bedfellows. Recently, New York magazine published a story on self-described liberals winding up on the YouTube page of the right-wing influencer Candace Owens, who dabbles in conspiracy theories and is currently working on a YouTube series trying to prove that Brigitte Macron was born a man, because they’re impressed by her ample Baldoni-Lively coverage. Owens is explicitly anti-#MeToo and sells Anti-Feminist baseball caps in her merch store, but viewers who don’t share her politics reportedly still enjoy watching her go through lots of legal documents and show her work. “I read them myself,” she told me. “I sit down with a pen, mark things up, use stickies, little different-colored stickies if I have questions, like for my lawyer, and he’ll explain things to me.”

These videos, as well as ones in which Owens speculates about whether Ryan Reynolds is gay, get millions of views. When I told her that I found it odd that so many people were interested in what amounted to a workplace dispute, she rejected the characterization. It was bigger than that, because it represented a shift in the way that people consume information, she told me. They’re more trusting now of online content creators who will present everything—all of the documents—than they are of traditional journalists, whom they perceive as being inappropriately possessive and aloof. “I’m very excited to see that both the left and the right are agreeing, finally, that we should really be removing a lot of the authority that we gave to the mainstream media to tell us what to think about other people,” she said. “I think it’s great. I think it’s brilliant.”

This was a sentiment I heard frequently from creators and saw often in the comments on their videos—people expressed a vaguely paranoid feeling that raw information is being deliberately kept away from them by reporters who hoard or hide it so that they can maintain their own power. It’s not an accurate understanding of the current state of journalism, but it is a popular one, and it helps explain the allure of reams of court documents. Davidson told me that the audience for Ask 2 Lawyers appreciates the granular level of detail that he and Albertson provide because it indicates that they are intelligent and curious enough to understand.

“We don’t talk down to them,” he said. “We don’t try to make them feel like, We know and you don’t. We’re here to give you the information, and you make up your own mind on it.” Clearly, people are really, really into that.


This article previously referred to Tom Girardi as the ex-husband of a cast member on The Real Housewives of Beverly Hills. The two are separated but not divorced.

April 22, 2025  17:33:16

There are really two OpenAIs. One is the creator of world-bending machines—the start-up that unleashed ChatGPT and in turn the generative-AI boom, surging toward an unrecognizable future with the rest of the tech industry in tow. This is the OpenAI that promises to eventually bring about “superintelligent” programs that exceed humanity’s capabilities.

The other OpenAI is simply a business. This is the company that is reportedly working on a social network and considering an expansion into hardware; it is the company that offers user-experience updates to ChatGPT, such as an “image library” feature announced last week and the new ability to “reference” past chats to provide personalized responses. You could think of this OpenAI as yet another tech company following in the footsteps of Meta, Apple, and Google—eager not just to inspire users with new discoveries, but to keep them locked into a lineup of endlessly iterating products.

[Read: The curse of ChatGPT]

The most powerful tech companies succeed not simply by the virtues of their individual software and gadgets, but by building ecosystems of connected services. Having an iPhone and a MacBook makes it very convenient to use iCloud storage and iMessage and Apple Pay, and very annoying if a family member has a Samsung smartphone or if you ever decide to switch to a Windows PC. Google Search, Drive, Chrome, and Android devices form a similar walled garden, so much so that federal attorneys have asked a court to force the company to sell Chrome as a remedy to an antitrust violation. But compared with computers or even web browsers, chatbots are very easy to switch among—just open a new tab and type in a different URL. That makes the challenge somewhat greater for AI start-ups. Google and Apple already have product ecosystems to slide AI into; OpenAI does not.

OpenAI CEO Sam Altman recently claimed that his company’s products have some 800 million weekly users—approximately a tenth of the world’s population. But even if OpenAI had only half that number of users, that would be a lot of people to risk losing to Anthropic, Google, and the unending torrent of new AI start-ups. As other tech companies have demonstrated, collecting data from users—images, conversations, purchases, friendships—and building products around that information is a good way to keep them locked in. Even if a competing chatbot is “smarter,” the ability to draw on previous conversations could make parting ways with ChatGPT much harder. This also helps explain why OpenAI is giving college students two months of free access to a premium tier of ChatGPT, seeding the ground for long-term loyalty. (This follows a familiar pattern for tech companies: Hulu used to be free, Gmail used to regularly increase its free storage, and eons ago, YouTube didn’t serve ads.) Notably, OpenAI has recently hired executives from Meta, Twitter, Uber, and NextDoor to advance its commercial operations.

OpenAI’s two identities—groundbreaking AI lab and archetypal tech firm—do not necessarily conflict. The company has said that commercialization benefits AI development, and that offering AI models as consumer products is an important way to get people accustomed to the technology, test its limitations in the real world, and encourage deliberation over how it should and shouldn’t be used. Presenting AI in an intuitive, conversational form, rather than promoting a major leap in an algorithm’s “intelligence” or capabilities, is precisely what made ChatGPT a hit. If the idea is to make AI that “benefits all of humanity,” as OpenAI professes in its charter, then sharing these purported benefits now both makes sense and creates an economic incentive to train better and more reliable AI models. Increased revenue, in turn, can sustain the development of those future, improved models.

Then again, OpenAI has gradually transitioned from a nonprofit to a more and more profit-oriented corporate structure: Using generative-AI technology to magically discover new drugs is a nice idea, but eventually the company will need to start making money from everyday users to keep the lights on. (OpenAI lost well over $1 billion last year.) A spokesperson for OpenAI, which has a corporate partnership with The Atlantic, wrote over email that “competition is good for users and US innovation. Anyone can use ChatGPT from any browser,” and that “developers remain free to switch to competing models whenever they choose.”

[Read: The Gen Z lifestyle subsidy]

Anthropic and Meta have both taken alternative approaches to bringing their models to internet users. The former recently offered the ability to integrate its chatbot Claude into Gmail, Google Docs, and Google Calendar—gaining a foothold in an existing tech ecosystem rather than building anew. (OpenAI seemed to be testing this strategy last year by partnering with Apple to incorporate ChatGPT directly into Apple Intelligence, but this requires a bit of setup on the user’s part—and Apple’s AI efforts have been broadly perceived as disappointing.) Meta, meanwhile, has made its Llama AI models free to download and modify—angling to make Llama a standard for software engineers. Altman has said OpenAI will release a similarly open model later this year; apparently the start-up wants to both wall off its garden and make its AI models the foundation for everyone else, too.

From this vantage, generative AI appears less revolutionary and more like all the previous websites, platforms, and gadgets fighting to grab your attention and never let it go. The mountains of data collected through chatbot interactions may fuel more personalized and precisely targeted services and advertisements. Dependence on smartphones and smartwatches could breed dependence on AI, and vice versa. And there is other shared DNA. Social-media platforms relied on poorly compensated content-moderation work to screen out harmful and abusive posts, exposing workers to horrendous media in order for the products to be palatable to the widest audience possible. OpenAI and other AI companies have relied on the same type of labor to develop their training data sets. Should OpenAI really launch a social-media website or hardware device, this lineage will become explicit. That there are two OpenAIs is now clear. But it remains uncertain which is the alter ego.

April 22, 2025  03:30:32

Finals season looks different this year. Across college campuses, students are slogging their way through exams with all-nighters and lots of caffeine, just as they always have. But they’re also getting more help from AI than ever before. Through the end of May, OpenAI is offering students two months of free access to ChatGPT Plus, which normally costs $20 a month. It’s a compelling deal for students who want help cramming—or cheating—their way through finals: Rather than firing up the free version of ChatGPT to outsource essay writing or work through a practice chemistry exam, students are now able to access the company’s most advanced models, as well as its “deep research” tool, which can quickly synthesize hundreds of digital sources into analytical reports.

The OpenAI deal is just one of many such AI promotions going around campuses. In recent months, Anthropic, xAI, Google, and Perplexity have also offered students free or significantly discounted versions of their paid chatbots. Some of the campaigns aren’t exactly subtle: “Good luck with finals,” an xAI employee recently wrote alongside details about the company’s deal. Even before the current wave of promotions, college students had established themselves as AI’s power users. “More than any other use case, more than any other kind of user, college-aged young adults in the US are embracing ChatGPT,” the vice president of education at OpenAI noted in a February report. Gen Z is using the technology to help with more than schoolwork; some people are integrating AI into their lives in more fundamental ways: creating personalized workout plans, generating grocery lists, and asking chatbots for romantic advice.

AI companies’ giveaways are helping further woo these young users, who are unlikely to shell out hundreds of dollars a year to test out the most advanced AI products. Maybe all of this sounds familiar. It’s reminiscent of the 2010s, when a generation of start-ups fought to win users over by offering cheap access to their services. These companies especially targeted young, well-to-do, urban Millennials. For suspiciously low prices, you could start your day with pilates booked via ClassPass, order lunch with DoorDash, and Lyft to meet your friend for happy hour across town. (On Uber, for instance, prices nearly doubled from 2018 to 2021, according to one analysis). These companies, alongside countless others, created what came to be known as the “Millennial lifestyle subsidy.” Now something similar is playing out with AI. Call it the Gen Z lifestyle subsidy. Instead of cheap Ubers and subsidized pizza delivery, today’s college students get free SuperGrok.

AI companies are going to great lengths to chase students. Anthropic, for example, recently started a “campus ambassadors” program to help boost interest; an early promotion offered students at select schools a year’s worth of access to a premium version of Claude, Anthropic’s AI assistant, for only $1 a month. One ambassador, Josefina Albert, a current senior at the University of Washington, told me that she shared the deal with her classmates, and even reached out to professors to see if they might be willing to promote the offer in their classes. “Most were pretty hesitant,” she told me, “which is understandable.”

The current discounts come at a cost. There are roughly 20 million postsecondary students in the U.S. Say just 1 percent of them take advantage of free ChatGPT Plus for the next two months. The start-up would effectively be giving a handout to students that is worth some $8 million. In Silicon Valley, $8 million is a rounding error. But many students are likely taking advantage of multiple such deals all at once. And, more to the point, AI companies are footing the bill for more than just college students. All of the major AI companies offer free versions of their products despite the fact that the technology itself isn’t free. Every time you type a message into a chatbot, someone somewhere is paying for the cost of processing and generating a response. These costs add up: OpenAI has more than half a billion weekly users, and only a fraction of them are paid subscribers. Just last week, Sam Altman, the start-up’s CEO, suggested that his company spends tens of millions of dollars processing “please” and “thank you” messages from users. Tack on the cost of training these models, which could be as much as $1 billion for the most advanced versions, and the price tag becomes even more substantial. (The Atlantic recently entered into a corporate partnership with OpenAI.)
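For readers who want to see how that figure falls out of the numbers above, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures cited in this piece; the 1 percent uptake is the article’s own hypothetical, not actual redemption data.

```python
# Back-of-the-envelope estimate of the student giveaway, using the figures
# cited above; the 1 percent uptake is a hypothetical, not redemption data.
students = 20_000_000     # roughly 20 million U.S. postsecondary students
uptake = 0.01             # "say just 1 percent" take the offer
price_per_month = 20      # ChatGPT Plus normally costs $20 a month
months_free = 2           # the promotion covers two months

forgone_revenue = students * uptake * price_per_month * months_free
print(f"${forgone_revenue:,.0f}")  # prints $8,000,000
```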

These costs matter because, despite AI start-ups’ enormous valuations (OpenAI was just valued at $300 billion), they are wildly unprofitable. In January, Altman said that OpenAI was actually losing money on its $200-a-month “Pro” subscription. This year, the company is reportedly projected to burn nearly $7 billion; in a few years, that number could grow to as much as $20 billion. Normally, losing so much money is not a good business model. But OpenAI and its competitors are able to focus on acquiring new users because they have raised unprecedented sums from investors. As my colleague Matteo Wong explained last summer, Silicon Valley has undertaken a trillion-dollar leap of faith, on track to spend more on AI than what NASA spent on the Apollo space missions, with the hope that eventually the investments will pay off.

The Millennial lifestyle subsidy was also fueled by extreme amounts of cash. Ride-hailing businesses such as Uber and Lyft scooped up customers even as they famously bled money for years. At one point in 2015, Uber was offering carpool rides anywhere in San Francisco for just $5 while simultaneously burning $1 million a week. At times, the economics were shockingly flimsy. In 2019, the owner of a Kansas-based pizzeria noticed that his restaurant had been added to DoorDash without his knowledge. Stranger still, a pizza he sold for $24 was priced at $16 on DoorDash, yet the company was paying him the full price. In its quest for growth, the food-delivery start-up had reportedly scraped his restaurant’s menu, slapped it on its app, and was offering his pies at a heavy discount. (Naturally, the pizzeria owner started ordering his own pizzas through DoorDash—at a profit.)

These deals didn’t last forever, and neither can free AI. The Millennial lifestyle subsidy eventually came crashing down as the cheap money dried up. Investors that had for so long allowed these start-ups to offer services at artificially deflated prices wanted returns. So companies were forced to raise prices, and not all of them survived.

If they want to succeed, AI companies will also eventually have to deliver profits to their investors. Over time, the underlying technology will get cheaper: Despite companies’ growing bills, technical improvements are already increasing efficiency and driving down certain expenses. Start-ups could also raise revenue through ultra-premium enterprise offerings. OpenAI is reportedly considering selling “PhD-level research agents” at $20,000 a month. But it’s unlikely that companies such as OpenAI will allow hundreds of millions of free users to coast along indefinitely. Perhaps that’s why the start-up is currently working on both search and social media; Silicon Valley has spent the past two decades essentially perfecting the business models for both.

Today’s giveaways put OpenAI and companies like it only further in the red for now, but maybe not in the long run. After all, Millennials became accustomed to Uber and Lyft, and have stuck with ride-hailing apps even as prices have increased since the start of the pandemic. As students learn to write essays and program computers with the help of AI, they are becoming dependent on the technology. If AI companies can hook young people on their tools now, they may be able to rely on these users to pay up in the future.

Some young people are already hooked. In OpenAI’s recent report on college students’ ChatGPT adoption, the most popular category of use unrelated to school or careers was “relationship advice.” In conversations with several younger users, I heard about people who are using AI for color-matching cosmetics, generating customized grocery lists based on budget and dietary preferences, creating personalized audio meditations and half-marathon training routines, and seeking advice on plant care. When I spoke with Jaidyn-Marie Gambrell, a 22-year-old based in Atlanta, she was in the parking lot at McDonald’s and had just consulted ChatGPT on her order. “I went on ChatGPT and I’m like, ‘Hey girl,’” she said. “‘Do you think it’d be smart for me to get a McChicken?’” The chatbot, which she has programmed to remember her dietary and fitness goals, advised against it. But if she really wanted a sandwich, ChatGPT suggested, she should order the McChicken with no mayo, extra lettuce, tomatoes, and no fries. So that’s what she got.

The Gen Z lifestyle subsidy isn’t entirely like its Millennial predecessor. Uber was appealing because using an app to instantly summon a car is much easier than chasing down a cab. Ride-hailing apps were destructive for the taxi business, but for most users, they were just convenient. Today’s chatbots also sell convenience by expediting essay writing and meal planning, but the technology’s impact could be even more destabilizing. College students currently signing up for free ChatGPT Plus ahead of finals season might be taking exams intended to prepare them for jobs that the very same AI companies suggest will soon evaporate. Even the most active young users I spoke with had mixed feelings about the technology. Some people “are skating through college because of ChatGPT,” Gambrell told me. “That level of convenience, I think it can be abused.” When companies offer handouts, people tend to take them. Eventually, though, someone has to pay up.

April 17, 2025  11:39:45

Like countless others who have left their hometown to live a sinful, secular life in a fantastic American city, I no longer actively practice Christianity. But a few times a year, my upbringing whispers to me across space and time, and I have to listen. The sound is loudest at Easter, which, aside from being the most important Christian holiday, is also the most fun.

I could talk about Easter all day. The daffodils, the brunch. The color scheme, the smell of grass, the annual screening of VeggieTales: An Easter Carol, which is the same story as Charles Dickens’s A Christmas Carol, except that it’s set at Easter and all the characters are vegetables who work in a factory (the Scrooge character is a zucchini). And most of all, the Easter eggs! Of all the seasonal crafts, this one is the easiest (no carving) and the most satisfying (edible).

This year, because of shocking egg prices, people with online lifestyle brands—or people who aspire to have online lifestyle brands—have suggested numerous ways to keep the dyeing tradition alive without shelling out for eggs. For instance, you can dye jumbo-size marshmallows, or you can make peanut-butter eggs that you then coat in colored white chocolate. You can paint rocks. The story has been widely covered, by local TV and radio stations and even The New York Times. “Easter Eggs Are So Expensive Americans Are Dyeing Potatoes,” the Times reported (though most of the story was about one dairy farmer who’d replaced real eggs with plastic replicas for an annual Easter-egg hunt).

I don’t think many people are actually making Easter spuds. Like baking Goldfish or making breakfast cereal from scratch, dyeing potatoes seems mostly like a good idea for a video to post online. Many Instagram commenters reacted to the Easter potatoes by saying things such as “What in the great depression is this” and “These potatoes make me sad.” And yet, because I love Easter and am curious about the world, I decided to try it myself—just to see if it was somehow any fun.

[Read: I really can’t tell if you’re serious]

My local Brooklyn grocery store didn’t have the classic Paas egg-dyeing tablets, so I bought an “organic” kit that cost three times as much ($6.99) and expensed it to The Atlantic. I bought a dozen eggs ($6.49) and a bag of Yukon Gold potatoes that were light-colored enough to dye and small enough to display in a carton ($5.99), and expensed those to The Atlantic too. Then I looked online for advice on how to proceed; mostly, I wanted to know whether I should cook the potatoes before or after dyeing them. A popular homemaking blog called The Kitchn gave detailed instructions on how to dye Easter potatoes and “save some cash while flexing your creativity for the Easter Bunny this year.” The suggestions—which included soaking the potatoes in ground turmeric, shredded beets, or three cups of mashed blueberries—were not as cost-effective as promised. (Such a volume of fruit could cost north of $15.) But I did find out that I should decorate the potatoes and then cook them. Thank you!

Alone in my kitchen on a Saturday morning, I dyed six boiled eggs and six raw potatoes and used a teensy paintbrush to add squiggly lines, daisies, and other doodles, returning me to my youth as an observant Methodist who really knew her resurrection-specific hymns. The eggs came out in stunning shades of marigold, magenta, and cornflower blue. The potatoes came out sort of yellow, or sort of pink, or sort of purple, all of which you may recognize as colors that potatoes already have when you buy them at the store. I hated them.

When I painted HAPPY EASTER on one of the potatoes, it looked like a threat. When I baked them in my oven, their skins (naturally) crinkled and came somewhat unstuck from their insides. This had the effect of making them look shriveled and even more sinister. When I put them in the egg carton next to my beautiful half-dozen Easter eggs, I thought: Only a person who was lying would do this and say it was good. Without being too overwrought about it, the whole project felt like a symbol not of renewal but of the wan stupidity of our cultural moment.

[Read: The case for brain rot]

The average price for a carton of eggs last month was $6.23, which is, we all agree, a lot for eggs. But it’s not really a lot for a craft project that also serves as a cultural ritual and can also serve as breakfast, so long as you put your craft project and cultural ritual in the refrigerator. (Until recently, I’d assumed that all families eat their Easter eggs, but apparently some people put them on display in their house, after which you certainly can’t eat them.) Sure, if an egg is really too expensive, replacing it with a potato could be called ingenious. But the many deficiencies of this replacement are immediately obvious. For instance, dye doesn’t work as well on a brown potato as it does on a white egg. Potatoes are uninspiring objects—people evoke them when they want to suggest that something is lumpy, dumb, or useless. Eggs are lovely, smooth, elegant, and the subject of fine art. Eggs are revered. You can’t just swap one thing out for another because they are a similar size and weight.

I know I am being judgmental—decidedly not the point of Easter. But this insincere hack rudely assumes that children can’t tell the difference between a simple, nice thing and a more complicated, far inferior thing. I will concede only one point to the potato-dyers. As The Atlantic put it rather grotesquely in 1890, eggs symbolize “the bursting into life of a buried germ.” I have to admit that this is a pretty good way to describe tubers as well. It made me briefly consider burying my Easter potatoes in the backyard and waiting to see if they would grow into more Easter potatoes. Season of hope and all that.

[From the May 1890 issue: The Easter hare]

Instead, I ate all of them and then they were gone, which felt a lot better.

April 18, 2025  16:00:21

Josh Shapiro is very lucky to be alive. The Pennsylvania governor and his family escaped an arson attack in the early hours of this morning. Parts of the governor’s mansion were badly charred, including an opulent room with a piano and a chandelier where Shapiro had hosted a Passover Seder just hours earlier. Things could have been much worse. The suspect, Cody Balmer, who turned himself in, reportedly said in an affidavit that he would have beaten Shapiro with a hammer had he found him in the home.

Balmer admitted to “harboring hatred” of Shapiro, authorities said, but his precise motives are still unclear. He reportedly expressed anti-government views and made allusions to violence on social media. He reposted an image of a Molotov cocktail with the caption “Be the light you want to see in the world.” Balmer’s mother told CBS that he has a history of mental illness. But no matter how you square it, the attack is just the latest example of political violence in the United States. Last month, a Wisconsin teenager was charged with murdering his mother and stepfather as part of a plot to try to assassinate President Donald Trump—this, of course, follows two assassination attempts targeting Trump last year. Other prominent instances of ideological violence include the murder of UnitedHealthcare CEO Brian Thompson late last year, and the time when a man broke into Nancy Pelosi’s home in 2022 and attacked her husband, Paul Pelosi, with a hammer, fracturing his skull. (He was badly hurt but survived.)

Shapiro is a Democrat, but in a rare moment of bipartisan agreement, Republicans joined Democrats in condemning the attack. President Trump said in the Oval Office today that the suspect “was probably just a wack job and certainly a thing like that cannot be allowed to happen.” Vice President J. D. Vance called the violence “disgusting,” and Attorney General Pam Bondi posted on X that she was “relieved” that Shapiro and his family are safe.

These kinds of condemnations of political violence are good. They’re also meaningless—especially when taken in the broader context of Trump’s governing style. Perhaps it’s no coincidence that since Trump first ran for office, political violence has been on the rise. When it’s useful to Trump, he praises violence and makes leveraging the threat of it endemic to his style of politics. When Montana’s then–congressional candidate (and now-governor) Greg Gianforte assaulted a reporter in 2017, Trump later said, “Any guy that can do a body slam, he is my type!” After Kyle Rittenhouse shot and killed two protesters in Kenosha, Wisconsin, in the summer of 2020, Rittenhouse had a friendly meeting with Trump at Mar-a-Lago the next year. And during a presidential debate against Joe Biden that fall, when Trump was asked if he would rebuke the Proud Boys, a far-right organization with a history of inciting violence, he told the group to “stand back and stand by,” as though he were giving it orders. (This is also how the Proud Boys interpreted it.)

[Read: A brief history of Trump’s violent remarks]

Trump made his willingness to engage in political violence especially clear during the Capitol insurrection on January 6, 2021. Instead of immediately attempting to call off his rabid supporters, Trump sat on his hands as they stormed the Capitol—even as members of his own party urged him to help. Despite having lost the election, Trump appeared okay with violence if it helped him maintain the presidency.

Since retaking office, Trump has appeared to continue this tradition. When Pete Hegseth, the president’s pick for secretary of defense, faced a sexual-assault accusation ahead of his confirmation vote, violence may have been the ingredient that ensured that Trump got his way. Republican Senator Thom Tillis seemed concerned that some of the allegations against Hegseth could be credible and was on track to tank his nomination. According to Vanity Fair, the FBI warned Tillis of “credible death threats” against him, which could have played a role in his decision to back down. Tillis has not said whether the death threats influenced his Hegseth vote, but his office released recordings of the threats he has received.

Other Republicans in Congress are afraid of opposing Trump because of similar fears for their safety. Many have gone on the record in recent years and said as much. Mitt Romney told my colleague McKay Coppins that a fellow congressman confessed to him that he had wanted to vote for Trump’s second impeachment in 2021 but ultimately chose not to out of fear for his family’s safety. That same year, Republican Representative Peter Meijer told my colleague Tim Alberta that he witnessed a fellow member of Congress have a near breakdown over fear that Trump supporters would come for his family if he voted to certify the 2020 election results.

All of this is to say that when Trump condemns acts of political violence, it’s impossible to take him seriously. In this specific case, the attack on Shapiro served no clear benefit to Trump, which is why he was able to so quickly speak out against it. Compare that with how he’s talked about the Pelosi hammer attack, which he has used as fodder to mock the Pelosis. Trump’s relationship to political violence is the same as his relationship to anything and anyone else in his orbit: If something benefits him, it’s welcome. If not, he may dismiss it.

April 14, 2025  16:04:11

Nearly three months into President Donald Trump’s term, the future of American AI leadership is in jeopardy. Basically any generative-AI product you have used or heard of—ChatGPT, Claude, AlphaFold, Sora—depends on academic work or was built by university-trained researchers in the industry, and frequently both. Today’s AI boom is fueled by the use of specialized computer-graphics chips to run AI models—a technique pioneered by researchers at Stanford who received funding from the Department of Defense. All of those chatbots? They rely on a training method called “reinforcement learning,” the foundations of which were developed with National Science Foundation (NSF) grants.

“I don’t think anybody would seriously claim that these [AI breakthroughs] could have been done if the research universities in the U.S. didn’t exist at the same scale,” Rayid Ghani, a machine-learning researcher at Carnegie Mellon University, told me. But Trump and the Department of Government Efficiency have frozen, canceled, or otherwise slowed billions of dollars in grants and fired hundreds of staff from the federal agencies that have funded the nation’s pioneering academic research for decades, including the National Institutes of Health and the NSF. The administration has halted or threatened to withhold billions of dollars from premier research universities that it has accused of anti-Semitism or unwanted DEI initiatives. Graduate students are being detained by immigration agents. Universities, in turn, are issuing hiring freezes, reducing offers to graduate students, and canceling research projects.

Outwardly, Trump has positioned himself as a champion of AI. During his first week in office, he signed an executive order intended to “sustain and enhance America’s dominance in AI” and proudly announced the Stargate Project, a private venture he called “the largest AI infrastructure project, by far, in history.” He has been clear that he wants to make it as easy as possible for companies to build and deploy AI models as they wish. Trump has consulted and associated himself with leaders in the tech industry, including Elon Musk, Sam Altman, and Larry Ellison, who have in turn showered the president with praise. But generative AI is not just an industry—it is a technology that depends on a steady succession of research advances. Despite his bravado, Trump is rapidly eroding the engine of scientific innovation in America, and thus the capacity for AI to continue to advance.

In a statement, White House Assistant Press Secretary Taylor Rogers wrote that the administration’s actions are in service of building up the economy, fighting China, and combatting “divisive DEI programs” at the nation’s universities. “While Joe Biden sat back and let China make gains in the AI space, President Trump is restoring America’s global dominance by imposing tariffs on China—which has ripped us off for far too long,” Rogers wrote. (As my colleague Damon Beres wrote earlier this week, tariffs may only hurt American technology businesses.)

Despite Trump’s aims, the United States now risks losing ground to Canada, Europe, and, indeed, China in the race for AI and other technological innovation. In a Nature poll of American scientists last month, 75 percent of respondents—some 1,200 researchers—said they were considering leaving the country. New scientific and technological developments may occur elsewhere, slow down, or simply stop altogether.

Silicon Valley, despite frequently operating at odds with federal oversight, could not have come up with some of its most valuable ideas, or trained the research scientists who did, without the government’s assistance. Federally supported research conducted at American universities, and the researchers trained there, helped make possible the internet, Google Search, ChatGPT, AlphaFold, and the entire AI boom (to say nothing of vaccines, electric vehicles, and weather forecasting). This fact is not lost on two of the “godfathers” of AI, Yann LeCun and Geoffrey Hinton, both of whom have lambasted the administration’s assault on science funding.

[Read: Throw Elon Musk out of the Royal Society]

“Curiosity-driven research is what allows us to explore directions that venture capital or research labs in industry would not, and should not, explore,” Alex Dimakis, a computer scientist at UC Berkeley and a co-founder of the AI start-up Bespoke Labs, told me. For example, AlphaFold—a series of AI models that predict the 3-D structure of proteins—was designed at Google but trained on an enormous collection of protein data that, for decades, has been maintained with funding from the NIH, the NSF, and other federal agencies, as well as similar government support in Europe and Japan; AlphaFold’s creators recently won a Nobel Prize. “All of these innovations, whether it’s the transformer or GPT or something else like that, were built on top of smaller little breakthroughs that happened earlier on,” Mark Riedl, a computer scientist at the Georgia Institute of Technology, told me. Needing to show investors progress each fiscal quarter, then a source of revenue within a few years, limits what topics scientists can pursue; meanwhile, federal grants allow them to explore high-risk, long-term ideas and hypotheses that may not present obvious paths to commercialization. The largest tech companies, such as Google, can fund exploratory research but without the same breadth of subjects or tolerance for failure—and these giants are the exception, not the norm.

The AI industry has turned previous, foundational research into impressive AI breakthroughs, pushing language- and image-generating models to remarkable heights. But these companies wish to stretch beyond chatbots, and their AI labs can’t run without graduate students. “In the U.S., we don’t make Ph.D.s without federal funding,” Riedl said. From 2018 to 2022, the government supported nearly $50 billion in university projects related to AI, which at the same time received roughly $14 billion in non-federal awards, according to research led by Julia Lane, a labor economist at NYU. A substantial chunk of grant money goes toward paying faculty, graduate students, and postdoctoral researchers, who themselves are likely teaching undergraduates—who then work at or start private companies, bringing expertise and fresh ideas. As much as 49 percent of the cost of building advanced AI models, such as Gemini and GPT-4, goes to research staff.

“The way in which innovation has occurred as a result of federal investment is investments in people,” Lane told me. And perhaps as important as federal investment is federal immigration policy: The majority of top AI companies in the U.S. have at least one immigrant founder, and the majority of full-time graduate students in key AI-related fields are international, according to a 2023 analysis. Trump’s detainment and deportation of a number of immigrants, including students, have cast doubt on the ability—and desire—of foreign-born or -trained researchers to work in the United States.

If AI companies hope to bring their models to bear on scientific problems—say, in oncology or particle physics—or build “superintelligent” machines, they will need staff with bespoke scientific training that a private company simply cannot provide. Slashing funding from the NIH, the NSF, and elsewhere, or directly withdrawing money from universities, may lead to less innovation, fewer U.S.-trained AI researchers, and, ultimately, a less successful American industry. Meanwhile, multiple Chinese AI companies—notably DeepSeek, Alibaba, and Manus AI—are rapidly catching up, and Canada and Europe have sizable AI-research operations (and healthier government science funding) as well. Those competitors will simply race ahead, and companies could even relocate some of their American operations elsewhere, as many financial institutions did after Brexit.

If the pool of talented AI researchers shrinks, only the true AI behemoths will be able to pay them; as the pool of federal science grants dwindles, those same firms will likely further steer research in the directions that are most profitable to them. Without open academic research, the AI oligopoly will only further cement itself.

That may not be good for consumers, nor for AI as a scientific endeavor. “Part of what has built the United States into a real juggernaut of research and innovation is the fact that people have shared research,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as the acting director of the White House Office of Science and Technology Policy, told me. OpenAI, Anthropic, and Google share limited research, code, or training data sets, and almost nothing about their most advanced models—making it difficult to check products against executives’ grandiose claims. More troublingly, progress in AI—and really any technology or science—depends on collaboration among people and pollination of ideas. These firms could plow ahead with the same massive, expensive, and energy-intensive models that may not be able to do what they promise. Fewer and fewer start-ups and academics will be able to challenge them or propose alternative approaches; these firms will benefit from fewer and fewer graduate students with outside perspectives and expertise to spark new breakthroughs.

President Trump may not care much for these scientists. But there is one he holds in high esteem who might have had something to say about all this. The president’s late uncle, John G. Trump, was a physicist at MIT who did pioneering work in clinical and military uses of radiation. The president has called Uncle John a “super genius.” John Trump received the National Medal of Science from the NSF, and his work was supported by at least hundreds of thousands of dollars in grants from the agency—more than $4 million today—in addition to funding from the NIH, according to his papers in the MIT archives and government reports. Those NSF grants supported at least six doctoral, 20 master’s, and 13 undergraduate theses in Trump’s lab—and that was one 14-year period in the elder Trump’s decades-long career.

As I did research for this article, I found the scientist’s final research report to the NSF upon the conclusion of those 14 years, written in 1966.

Image of John G. Trump’s letter and signature (Courtesy of MIT Libraries)

John G. Trump took care to note his team’s “tremendouse [sic] appreciation for the financial support of the National Science Foundation” and its “admiration for the thoughtful and considerate manner in which the project was administered and evaluated by NSF personnel.” The foundation’s support, Trump said, had been an “invaluable influence on the educational and research operation” of his lab. Almost 60 years later, education and research no longer seem to be among the nation’s priorities.

April 10, 2025  17:02:38

The madness started, as baseball madness tends to start, with the New York Yankees: At the end of March, during the opening weekend of the new season, the team’s first three batters hit home runs on the first three pitches thrown their way. The final score, 20–9, was almost too good to be true. And then, everybody noticed the bats.

A handful of Yankees had used unconventional instruments to hit their home runs: Their bats bulged out a little near the end, such that they were shaped more like bowling pins than clubs. It turned out they’d been designed by an MIT-trained physicist and were tailored to each player’s swing, with the bulge positioned at the place on the bat where that player tends to hit the ball. Yes, after at least a century’s worth of baseball bats that all looked more or less the same—“it must be made of wood, and may be of any length to suit the striker,” reads a set of rules from 1861—the art of making striker’s wood had at last produced a major innovation. After the Yankees hit a franchise-record nine home runs in that one game, media coverage of torpedo bats exploded, and manufacturers are struggling to meet demand from other teams. Even fantasy baseball leagues have cottoned to the trend. “This is torpedomania,” said the CEO of a major bat maker.

At first glance, the craze appears to be the culmination of the data-driven tweaks that have overhauled the modern game. A pursuit of minute statistical advantages characterizes nearly every aspect of baseball today: Pitchers maximize effectiveness by throwing the ball as hard as possible, and rarely spend more than five innings in a game; managers eschew traditional—and suboptimal—strategies such as bunting and stealing bases; fans obsess over esoteric performance metrics with names such as “wRC” and “xFIP.” Now the data revolution is reimagining one of the game’s most fundamental tools: the bat.

The idea of the bowling-pin shape is actually a few years old and has been explored by multiple teams. Aaron Leanhardt, the aforementioned MIT physicist, began designing the Yankees’ torpedo bats in 2022 as a minor-league hitting coach for the team, and some major leaguers were using them last year. His premise was straightforward: Standard bats are widest and heaviest at the tip, but players prefer to make contact with a pitch closer to the midpoint. That’s in part because a bat’s “sweet spot”—the portion of the wood that transfers the most energy on contact—is also a few inches down the barrel from the end. To address this inefficiency, torpedo bats are made with more wood in the sweet spot and less wood elsewhere—thus, the bulge. The idea was to “put it where you’re trying to hit the ball,” Leanhardt told The Athletic.

But that premise may be suspect. Despite their Moneyball makeover, torpedo bats remain, for now, a blunt instrument, largely superstition with a patina of data. Though it seems like common sense that adding heft to the part of the bat where a player hits the ball would be advantageous, several physicists who study baseball bats told me that’s not necessarily true. Because a bat has a thick barrel and rotates when swung, its motion and power depend on the distribution of weight across the entire shaft, not just in one spot. In other words, the physics aren’t cut-and-dried: A bulging sweet spot may provide more space for making contact with the ball, but it likely won’t provide more power. (The Miami Marlins, for whom Leanhardt now works as a field coordinator, declined a request for comment.)

[Read: Why aren’t women allowed to play baseball?]

All of the mass along a bat’s barrel, not just at the point of contact, contributes to the impact. As a result, shifting some wood from the end of the barrel to the sweet spot will not make the bat more powerful, Lloyd Smith, a mechanical engineer who studies ball-bat collisions at Washington State University, told me. Brian Hillerich, the director of professional bat production at Hillerich & Bradsby Co., which makes bats for Louisville Slugger, said that even if torpedo bats are not more powerful, they still promote more consistent contact at the sweet spot, which would tend to help a player’s performance. Smith and other physicists said this is possible, but remains unproved.

In any case, by redistributing some mass closer to the handle, the bowling-pin design could actually make a bat feel lighter when swung—it could lower the “moment of inertia,” in physics parlance. That would allow a player to increase his bat speed, but it would also shrink the force he can apply upon contact. These two factors may well cancel out, Dan Russell, a physicist at Penn State who studies baseball-bat vibrations, told me. (Imagine swinging a hammer while gripping its head instead of its handle: It might move faster, but it wouldn’t do a better job of pounding nails.) A torpedo bat could also be constructed by adding extra wood to make the bulge instead of merely shifting it from other places on the barrel. This would keep the “moment of inertia” constant—the bat would be heavier on a scale but feel the same when swung. Baseball bats used to have more heft as a rule; Babe Ruth swung clubs perhaps 50 percent heavier than today’s. But the net effects remain unclear, and would depend on each particular player’s strength and swing.
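To make that trade-off concrete, here is a rough, hypothetical sketch of the swing-weight idea the physicists describe, with a bat modeled as a handful of lumped masses. Every mass and distance below is invented for illustration; the only point is that shifting a few ounces from the tip toward the sweet spot lowers the moment of inertia about the hands even though the bat’s weight on a scale is unchanged.

```python
# Hypothetical swing-weight comparison: a bat treated as lumped masses along
# its length, with moment of inertia taken about the knob. Numbers are invented.
OZ_TO_KG = 0.0283495
IN_TO_M = 0.0254

def moment_of_inertia(segments):
    """segments: (mass_kg, distance_from_knob_m) pairs; returns sum of m * r**2."""
    return sum(m * r ** 2 for m, r in segments)

# A notional 31-ounce bat with its wood lumped at four points.
standard = [
    (6 * OZ_TO_KG, 8 * IN_TO_M),    # handle
    (10 * OZ_TO_KG, 20 * IN_TO_M),  # mid-barrel
    (9 * OZ_TO_KG, 27 * IN_TO_M),   # sweet spot
    (6 * OZ_TO_KG, 33 * IN_TO_M),   # tip
]

# "Torpedo" version: 3 ounces moved from the tip to the sweet spot; same total weight.
torpedo = [
    (6 * OZ_TO_KG, 8 * IN_TO_M),
    (10 * OZ_TO_KG, 20 * IN_TO_M),
    (12 * OZ_TO_KG, 27 * IN_TO_M),
    (3 * OZ_TO_KG, 33 * IN_TO_M),
]

i_standard = moment_of_inertia(standard)
i_torpedo = moment_of_inertia(torpedo)
print(f"standard bat: {i_standard:.3f} kg*m^2")
print(f"torpedo bat:  {i_torpedo:.3f} kg*m^2")
print(f"swing weight drops about {100 * (1 - i_torpedo / i_standard):.0f} percent")
```

Run with invented numbers like these, the swing weight drops by several percent while the scale weight stays at 31 ounces, which is the sense in which the bat “feels lighter”; building the bulge out of added wood instead, rather than shifting it from the tip, would push the swing weight back up toward the original.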

A faster swing could still be useful even if it doesn’t give a hitter greater power: “You simply have better bat control, can wait a little longer on the pitch before deciding to swing, make adjustments once you’ve started,” Alan Nathan, who studies the physics of baseball at the University of Illinois, told me. That won’t be the case for everyone—athletes who have spent years honing their swings and timing could be thrown off by the new shape, and several players using torpedo bats have had terrible starts to this season. Hillerich told me that his company designs torpedo bats with this in mind, trying to make them feel as similar to a player’s original bat as possible. It might all be a matter of preference and confidence—and others may not care that much either way. The new shape feels the same in his hands, Jazz Chisholm, a torpedo-wielding Yankee, recently said. “I don’t know the science of it. I’m just playing baseball.”

That the Yankees had a historically great game, and that some players were using funny-looking bats, “is more coincidence than destiny or science,” Smith told me. After all, nobody noticed the new shape last season, and for good reason—there’s simply not enough information, either from MLB games or physics labs, to definitively say what these bats offer, and to which players. Smith said he suspects that “the number of athletes this torpedo bat benefits is going to be fairly narrow.”

Indeed, the current buzz about the bats is pretty much the opposite of being data-driven. In an interview last week with The Athletic, Brett Laxton, the lead bat maker at Marucci Sports, pointed to the fact that Giancarlo Stanton, the Yankees’ designated hitter, had hit three home runs in his first game using a torpedo bat last year. That was “a good eye test” of the technology, he said, invoking just the kind of baseball intuition that statistics-driven analysts would sneer at. Yes, the bat felt and looked good in Stanton’s hands; no, this is not sabermetrics. Meanwhile, other “eye tests” have yielded more ambiguous results. Elly de la Cruz, of the Cincinnati Reds, hit two home runs in his first game using the torpedo bat, for instance, then went 0–4 the next day. Max Muncy, of the Los Angeles Dodgers, tried using a torpedo bat and recorded three outs in a row, then switched back to his old wood and hit a game-tying double.

If anything, the torpedo bats hark back to an era before Moneyball, computers, or even the official formation of Major League Baseball. The late 1800s were a time of “great experimentation” in bat design, John Thorn, MLB’s official historian, told me: four-sided bats and flat bats, bats with slits for springs and sliding weights. All of that tinkering has long been left behind, however, and the modern, non-torpedo bat now seems like a simple fact of the game. Perhaps the biggest change to bat manufacturing in recent decades happened in the 1990s, when Barry Bonds started swinging bats made from maple instead of ash, and the rest of the league followed. That, too, had an element of superstition: As it turns out, a bat made from maple wood transfers a little bit less energy to a ball than one made from ash. Bonds, who hit more home runs than any MLB player in history, “could have hit the ball just a bit further if he had stayed with ash,” Smith told me.

Thorn takes issue with the whole discussion. “The whole idea that the magic is in the bat rather than in the batter is fraud,” Thorn said. “It’s calumny.” Of course, baseball players and fans have always been in pursuit of magic. They once used less pretentious tricks—eating chicken before each game, wearing a gold thong to emerge from a funk—but these have now been funneled through the optimization craze; instead of mismatched socks, there are “literal genius”–designed bats. In an era when baseball teams will squeeze any source of data for tiny statistical advantages, torpedomania pretends to be yet another nerd-ish secret weapon. Perhaps, for some subset of players, the new design really is miraculous. More likely, though, when the stats have all been counted and compared, we’ll discover that the torpedo bat is no different from any other talisman in baseball: a ridiculous distraction; a delightful waste of time.