Twitter’s Trust and Safety Head Ditches Protocol for Musk Whims

Ella Irwin is Musk’s most faithful supporter, even when his impulses buck convention

(Bloomberg) — On a day hundreds of Twitter Inc. employees were debating whether to resign, Ella Irwin showed up with a pep talk. Elon Musk had offered her a battlefield promotion to manage trust and safety, the division in charge of limiting harmful posts on the network.

It was half past noon on Nov. 17. Twitter employees had a little over an hour to decide whether to click “yes” on a form Musk had sent out, asking that they declare they were willing to do their jobs in “hardcore” mode, or quit. Irwin encouraged the team to stay and find a way to work with Musk, according to three people who attended the meeting at the San Francisco headquarters. Though his methods were unconventional, she said, employees needed to adjust and support how he wanted to lead the company.

Some of the staff had a major problem with her pitch: The way Musk wanted to lead ran directly counter to the policies and procedures Twitter had spent years refining, an effort to build trust with the public and turn around its reputation for exposing users to toxic experiences. Already, Musk’s antics had spooked major advertisers and led to online harassment of former Twitter executives. Much of the remaining trust and safety staff hoped primarily to change his mind or protect Twitter from him, according to people familiar with the matter — but not Irwin.

Irwin has become the chief executor of Musk’s whims, even when they run counter to the established protocols for social media content work that Twitter and its peers have refined over the past decade, according to more than a dozen current and former employees. Musk has rewarded Irwin’s loyalty, trusting her to explain Twitter’s moves to the public in tweets and news articles.

Since Musk took the helm of Twitter, Irwin has helped him break with convention in how Twitter manages its policies on user accounts. Twitter rolled out — then abruptly revoked — a policy banning the promotion of accounts on other social networks. The company temporarily suspended journalists who cover Musk and Twitter, removed key misinformation policies, and banned leftist activists because Musk wanted to do so. And access to internal documents and tools has been granted to outside writers handpicked by Musk to support a narrative — disputed by former staffers — that before he stepped in, Twitter was servile to US intelligence and federal health agencies.

“Nearly all the people who know how to build safety systems at Twitter have left the company, and those who are still there appear to be unwilling or unable to tell their boss that the things he is asking them to do are dangerous or violate Twitter’s legal commitments,” said Laura Edelson, a computer scientist at New York University who studies online political communication.

In an emailed response to an interview request, Irwin, 47, said she could not speak for others, but she believes there are “many folks at Twitter who understand how to build safety systems and work on these systems daily.”

Being the head of trust and safety at Twitter has long been a crucial and closely scrutinized job, given the potential impact that person has over speech on one of the world’s most influential platforms. Previous leaders have been tasked with making complicated and controversial decisions, including whether to ban accounts that cross a line, be it by jeopardizing public health in the midst of a pandemic or by threatening the safety of democratic elections around the world. Twitter’s decisions are often later probed by politicians and regulators, so they are typically made with careful documentation pointing to specific policy justifications for the action, the current and former employees say.

But now, internal documentation shows a decision-making process that amounts to little more than unilateral directives issued by Twitter’s new owner. In late November, an account belonging to the leftist activist Chad Loder was banned from the platform. In Twitter’s internal system, a note read, “Suspension: direct request from Elon Musk,” according to a screenshot viewed by Bloomberg. On Dec. 11, Jack Sweeney, the creator of a bot tracking Musk’s private plane, posted a screenshot showing Irwin had sent a Slack message directing employees to restrict the visibility of Sweeney’s bot account, @elonjet. On Dec. 15, when Twitter suspended prominent journalists covering Twitter and Musk, the action was accompanied by an internal note: “direction of Ella.”

Twitter used to have a group called the Global Escalations Team that could be a check on power at the top of the company, overruling executives based on existing policy. Employees say that group has folded, and Irwin and Musk can no longer be challenged through a formal process. In her emailed response, Irwin said that was “not accurate at all,” declining to elaborate.

Still, this month Irwin confirmed more cuts to teams handling global content moderation, hate speech, misinformation policy, global appeals and state media. Nine days later, two Taliban officials briefly gained access to blue checkmarks through Twitter Blue, the platform’s paid subscription tier. Twitter’s moderation research consortium, introduced in late 2021, is now effectively defunct, with no program managers left to oversee the work. Dozens of as-yet unpublished — but completed — reports detailing information operations on the platform will likely never become public, according to four former staffers who worked on the studies. (In an email, Irwin said she did not know of the reports and could not comment on them.)

“It’s like Musk is taking all of the content moderation best practice norms the trust and safety community has built up over the past decade and is trying to set them on fire,” said Evelyn Douek, an assistant professor at Stanford Law School. “The entire trend has been towards giving users more transparency, predictability and due process. What Musk is doing is like the antithesis of this.”

Yoel Roth, who led the company’s trust and safety team when Musk took over in late October, was initially optimistic about the new owner’s plans, according to four people familiar with the matter. Just days into his tenure as CEO, Musk met with the leaders of several civil rights groups and said he wanted to form a Twitter content moderation council to think through complicated decisions, like whether to bring back former President Donald Trump’s account. Musk also leaned on Roth, who had been at Twitter for more than seven years, for his institutional knowledge, and started holding him up publicly as the top executive dealing with Twitter’s policy decisions. While other departments at Twitter were cut dramatically, trust and safety under Roth lost less than a quarter of its employees in the first round of layoffs.

But within days, it became clear to Roth that Musk would be making decisions unilaterally about Twitter’s rules and whose accounts would get reinstated and banned. He resigned, saying later in a New York Times op-ed that “a Twitter whose policies are defined by edict has little need for a trust and safety function dedicated to its principled development.”

Some Twitter critics celebrated Roth’s departure, given his role at the company during controversial decisions like Trump’s ban in 2021. But internally, employees were concerned that Musk would now be able to make decisions without any pushback. 

What little they knew of Irwin, who had joined a few months prior to Musk’s takeover, didn’t inspire confidence. Her background was not in content policy. Rather, she’d overseen Twitter’s division handling issues like abuse and spam. Before that, she held senior roles at Twilio Inc. and Amazon.com Inc., where she focused on preventing hacks and marketplace abuse, and much less on user speech. Irwin earned a bachelor’s degree in business management from California Lutheran University in 2000, then a master’s degree in the same field from Golden Gate University in 2005, according to her LinkedIn profile.

Yusupha Jow, an engineering manager who worked for Irwin at Twilio, said that she was highly organized and relentless at work, traits she picked up during her time at Amazon. “Everything she did required a certain standard of excellence,” he said. But he acknowledged her management style wouldn’t suit everyone. “If you are overly sensitive you probably want to recalibrate.”

She joined at a tough time for Twitter’s health and policy teams. Musk had spent months publicly criticizing Twitter executives, arguing that the company was misleading the public about how many bot and spam accounts were included in its count of total users. While Twitter defended itself in legal filings in a court fight with Musk, the team Irwin inherited grappled with a narrative that it was incompetent at policing bots and spam, said a former employee, who declined to be named discussing internal matters.

Irwin sent her new team a multi-page document advising them on how best to work with her, including details about her personality and preferred operating style. She asked employees to defend their ongoing projects using a template she devised, killing projects she deemed not worth the resources. While it’s standard for new managers to take stock of their team’s strategy, Irwin’s abrupt approach alienated some, employees said. The process also slowed things down as teams were forced to wait for her approval to continue working.

Irwin, in her emailed statement, said that she killed initiatives because “there were fewer people than there were desired projects to complete.”

Irwin and Roth also butted heads directly in the months before he left the company, according to people familiar with the matter. As part of the review of unnecessary projects, she ordered a pause of work Roth oversaw that scanned the social network for spammy actors or people who wished to inject disinformation into the platform, such as those who spread falsehoods favorable to the Chinese Communist Party, according to four former employees. Roth, who was a peer of Irwin’s, bristled at what he saw as her overreach into crucial processes run by his team, the people said. Roth overruled her, saying it was essential work, they said. Irwin said she could not remember “any specific conflict” that she had with Roth and that the two “worked very well together.”

The friction with Roth made Irwin an unlikely successor. Her colleagues believed she had left the company soon after Musk’s takeover, when he was slashing non-essential workers. But after Roth resigned, Musk asked Irwin back. Her first day back in the office was the day of her pep talk. “I encouraged the team to embrace change and keep an open mind,” she said. “I was never fired or unemployed from Twitter.”

Employees thought that while she was a strong operator, she didn’t have the background in content work to push back on Musk decisions that might reverberate through society and shape what information users would see on Twitter, according to multiple people familiar with the situation. Musk’s Twitter Files project — which involved leaking internal emails and documentation to external journalists — was her first test.

On Dec. 8, the writer Bari Weiss posted a Twitter thread that purported to show that company employees had covertly blacklisted accounts and tweets; in reality, the documents she shared showed workers earnestly debating the spirit of their content moderation policies. Weiss posted images of an internal tool that only select employees can access and that can be used to view private details of a user’s profile. The images contained the identifier “eirwin4903ZWlyd21u863” — revealing that Irwin was the company source for the material.

The images were photos taken of a computer screen, suggesting Irwin could have been sitting side by side with Weiss while viewing Twitter users’ accounts, people familiar with the systems said. “From a security standpoint, it’s horrific,” said one ex-employee. Though Irwin said she never gave Weiss access to people’s direct messages, two people said the images showed Irwin had access to those messages while producing the screenshots for Weiss.

Multiple employees said that because Irwin wouldn’t have used the tool in her prior role, she may not have known her identifier would be publicly visible. Irwin says that’s not true. “I was very aware (but unconcerned) of it being there,” she wrote in an email.

It is “bizarre” for a head of trust and safety to share an internal tool with outsiders, said Steve Weis, an engineer at the software company Databricks who has worked in trust and safety teams at social networks. “Employees misusing internal access like this undermines the very trust that a head of trust and safety is supposed to be building,” he said. 

There used to be an entire information security team dedicated to enforcing access procedures, auditing what staffers were using the tool for and proactively looking for unusual access patterns, said a former high-level Twitter employee — a team that no longer exists because its members either quit or were fired.

Other documents in the Twitter Files displayed the email addresses and names of junior workers involved in high-profile decisions, exposing them to public attacks and threats. Roth had to flee his home after Musk attacked him online, according to people familiar with the matter.

In December, Irwin continued to back up Musk in criticizing Twitter’s work from before he took over. Musk said Twitter “refused to take action on child exploitation for years,” a statement Twitter co-founder Jack Dorsey disputed. Irwin tweeted: “I wish this was false but my experience this year supports this.”

Two employees who worked on child safety at Twitter said the team had been hit hard by attrition by the time Irwin joined, but to say that it refused to take action on the problem was false. Twitter used to maintain a world map with pushpins marking the locations of the dozens of child predators who were arrested as a result of cyber tips the company submitted.

The National Center for Missing and Exploited Children, a federally designated clearinghouse for online child sexual abuse imagery that works with law enforcement agencies, also rejected the idea that Twitter had not taken action on child exploitation before Musk’s takeover. “It’s been disheartening to see that rhetoric because we had relationships with people that really, truly cared about the issues,” said Gavin Portnoy, a spokesman.

In her email, Irwin said she never claimed no work was being done on the issue — just that it was understaffed. 

As Irwin prioritizes supporting Musk, a former company executive said they believed the platform is on a path to primarily serving the interests of the already powerful and of those whose ideologies align with Musk’s.

“Twitter’s policies and practices in the trust and safety space were built around defending the rights of users around the world, especially the most vulnerable and marginalized communities,” the former executive said. “Since the acquisition, the company’s only actions have been to silence critics of Elon, to expose journalists and others to harm, and to violate basic ethical standards and privacy laws.”
