The New Brutality of OpenAI

Matteo Wong
Last updated: November 11, 2025 1:13 am

On September 12, Jay Edelson received what he expected to be a standard legal document. Edelson is a lawyer representing the parents of Adam Raine; they are suing OpenAI, alleging that their 16-year-old son took his life at the encouragement of ChatGPT. OpenAI’s lawyers had some inquiries for the opposing counsel, which is normal. For instance, they requested information about therapy Raine may have received, and Edelson complied.

But some of the asks began to feel invasive, he told me. OpenAI wanted the family to send any videos taken at memorial services for Raine, according to documents I have reviewed. It wanted a list of people who attended or were invited to any memorial services. And it wanted the names of anyone who had cared for or supervised Raine over the past five years, including friends, teachers, school-bus drivers, coaches, and “car pool divers [sic].”

“Going after grieving parents, it is despicable,” Edelson told me, and he objected to the requests. OpenAI did not respond to multiple inquiries from me about discovery in the Raine case, nor did Mayer Brown, the law firm representing the company. (OpenAI has announced that it would work on a number of algorithmic and design changes, including the addition of new parental controls, following the Raine lawsuit.) According to Edelson, OpenAI also has not provided any documents in response to his own discovery requests in preparation for trial.

Companies play hardball in legal disputes all the time. But until recently, OpenAI didn’t seem to be taking that approach. Many lawsuits have been filed against the firm, in particular by publishers and authors alleging that OpenAI infringed on their intellectual-property rights by training ChatGPT on their books and articles without permission. In those cases, OpenAI has appeared to stick to legal arguments and strike a somewhat conciliatory posture, while also entering licensing partnerships with a number of other media organizations, including The Atlantic, presumably as a way to avoid further lawsuits.

Now, however, OpenAI is going on the offensive. Gone are the days of a nonprofit research lab publicly sharing its top AI model’s code, unsure that it would ever have a product or revenue. Today, ChatGPT and OpenAI CEO Sam Altman are the faces of potentially historic technological upheaval, and OpenAI is worth $500 billion, making it the most valuable private company in the world. Altman and other company executives have used aggressive social-media posts and interviews to rebuke critics and antagonize competitors; over the summer, at a live New York Times event, Altman interrupted to ask, “Are you going to talk about where you sue us because you don’t like user privacy?” (The Times is suing OpenAI over copyright infringement, which OpenAI denies.) Recently, Altman bristled at questions from the investor Brad Gerstner over how OpenAI will meet its $1.4 trillion spending commitments, given its far smaller annual revenues: “If you want to sell your shares, I’ll find you a buyer. I just—enough.”

As it continues to grow, OpenAI will almost certainly be sued many more times. At the end of last week, seven new lawsuits were filed against the company in California, all of them alleging that ChatGPT pushed someone toward suicide or severe psychological distress.

Situations like Edelson’s have been playing out in another of OpenAI’s high-profile legal entanglements. In August, Nathan Calvin opened his door to a sheriff’s deputy, who had come to serve a subpoena from OpenAI. Calvin is general counsel at Encode, an AI-policy nonprofit with three full-time employees. Encode has been critical of OpenAI, joining a coalition of other organizations rallying against the start-up’s attempt to restructure from nonprofit governance into a more traditional for-profit business, which they fear would come at the expense of AI safety.

In December, Encode filed a brief in support of part of a lawsuit by Elon Musk, in which he asked the court to block OpenAI’s restructure (his request was denied). The subpoena sought documents and communications that would show if Encode had received funding or otherwise coordinated with Musk, which Calvin denied. But as with the legal requests of the Raine family, this one asked Encode to produce information about far-flung topics, including documents that Encode might have had about potential changes to OpenAI’s structure and a major California AI regulation that Encode co-sponsored.

Over the past several months, OpenAI has subpoenaed at least seven nonprofit organizations in relation to Musk’s lawsuit, typically asking for any ties to Musk in addition to a broader set of queries. The other six have not submitted briefs in the Musk litigation. Beyond the encumbrance—paying lawyers is expensive, and producing documents is very time-consuming—some of the targeted groups have said the subpoenas have already had a punishing effect. Tyler Johnston, the founder and one of two employees at the Midas Project, a small AI-industry watchdog, told me he has been trying to get an insurance policy that would protect Midas in the event that it’s sued over media it publishes—a standard practice—but every insurer has turned him down. Multiple insurance companies pointed to the OpenAI subpoena as the reason, according to Johnston. Being subpoenaed “makes people less likely to want to talk with you during a really critical period,” Calvin said—the late stages of getting that AI regulation passed—“and does create just some sense of, ‘Oh, maybe you have done something wrong.’”

In response to an inquiry about its subpoenas related to the Musk litigation, an OpenAI spokesperson pointed me to a series of social-media posts by Jason Kwon, the firm’s chief strategy officer. Kwon wrote that the subpoenas were a standard part of the legal process, and he’s right. “To target nonprofits is really oppressive, but I can’t say that it’s so unusual,” David Zarfes, a University of Chicago law professor who is not involved with the litigation between OpenAI and Musk, told me. Indeed, “broad” and even “aggressive” discovery requests are advised by law firms that represent corporations.

Kwon also wrote that OpenAI had “transparency questions” about the funding and control of several organizations that launched or joined campaigns critical of OpenAI shortly after Musk sued. It is true that Musk is an external adviser and has donated to at least one of the subpoenaed groups, the Future of Life Institute, and FLI has itself given money to Encode. But FLI has not received any funding from Musk since 2021, according to a spokesperson. Some of the subpoenaed nonprofits, including FLI, Ekō, and Legal Advocates for Safe Science and Technology, have also been publicly critical of Musk and xAI for, among other things, neglecting or abandoning their commitments to AI safety.

Whatever the motivations, this legal strategy represents the new normal for OpenAI: an outwardly aggressive approach. OpenAI’s determination to shift from the nonprofit model was apparently motivated in part by the desire to fundraise. The Japanese investment group SoftBank, for instance, had conditioned $22.5 billion on OpenAI making such a change. (OpenAI completed its transition to a more traditional for-profit model last week. The actual structure is a bit more complicated than it initially seemed, and a nonprofit board still technically retains control of the business side. But nothing about OpenAI’s recent actions or the board’s makeup—Altman is himself a member—suggests any changes to the company’s commercial ambitions.)

And over the past year, the company has morphed into today’s version of the famous 1904 political cartoon depicting Standard Oil as an octopus wrapping its tentacles around the globe. OpenAI has launched or revealed plans for a social-media app, a web browser, shopping inside ChatGPT, a personal device. There is the commercial showing ChatGPT suggesting a recipe for a date night; Altman’s appearances on Theo Von’s and Tucker Carlson’s podcasts; all of the lobbying documents and influence OpenAI appears to have had on Donald Trump’s AI policy. Building artificial general intelligence that “benefits all of humanity”—the company’s original mission—seems less the focus than the inverse: shaping human civilization and the planet to the benefit of building AGI.

The OpenAI of today resembles Meta or Google far more than a research lab or nonprofit. In a recent post on X, Altman wrote that the “first part” of OpenAI consisted of developing very powerful AI models, what “i believe is the most important scientific work of this generation.” Meanwhile, “this current part” of OpenAI’s evolution is about trying to “make a dent in the universe”—which largely consists, it would seem, of getting his products into the world. First was research; now comes business.

Article originally published at The Atlantic
