Can ethical AI surveillance exist? Data scientist Rumman Chowdhury doesn’t think so

Rumman Chowdhury, the former director of machine learning ethics, transparency and accountability at Twitter, said at a recent talk that she does not believe ethical artificial intelligence surveillance can exist. 

“We cannot put lipstick on a pig,” the data scientist noted at New York University’s School of Social Sciences. “I do not think ethical surveillance can exist.”

In an interview published Monday in The Guardian – which spotlights that statement – Chowdhury warned that she finds the rise of surveillance capitalism hugely concerning.

She asserted that it is a use of technology that, at its core, is unequivocally racist and, as such, should not be entertained. 

‘GODFATHER OF AI’ SAYS THERE’S A ‘SERIOUS DANGER’ TECH WILL GET SMARTER THAN HUMANS FAIRLY SOON


Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, poses for a photograph at her home Monday, May 8, 2023, in Katy, Texas. (AP Photo/David J. Phillip)

In a recent op-ed for Wired referenced in the piece, Chowdhury also said that only an external board of people can be trusted to govern AI. 

“We’re getting all this media attention,” she told The Guardian, “and everybody is kind of like, ‘Who’s in charge?’ And then we all kind of look at each other and we’re like, ‘Um. Everyone?’”

In the interview, she lamented what she calls “moral outsourcing” – shifting responsibility for what is built from the people who build it onto the products themselves.

Her approach to regulation is that “mechanisms of accountability” should exist, and she says such accountability is currently lacking.


Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, works at her computer Monday, May 8, 2023, in Katy, Texas. (AP Photo/David J. Phillip)

“There is simply risk and then your willingness to take that risk,” she explained, adding that when the risk of failure becomes too great, decisions move into an arena where the rules are bent in a specific direction.

OPENAI CEO ALTMAN BACKTRACKS AFTER THREATENING TO EXIT EUROPE OVER OVERREGULATION CONCERNS

“There are very few fundamentally good or bad actors in the world,” she continued. “People just operate on incentive structures.” 

The Harvard University Responsible AI fellow said she aimed to bridge the gap in understanding between technologists who “don’t always understand people, and people [who] don’t always understand technology.”

“At the core of technology is this idea that, like, humanity is flawed and that technology can save us,” she said.


Sam Altman, CEO and co-founder of OpenAI, speaks during an event at the Microsoft headquarters in Redmond, Washington, on Tuesday, Feb. 7, 2023. (Chona Kasinger/Bloomberg via Getty Images)

Notably, Chowdhury is working on a red-teaming event – during which hackers and programmers are encouraged to try to circumvent safeguards and push the technology to do bad things – organized by the hacker community AI Village at Def Con, the hacker convention. The “hackathon” is supported by industry leaders – including OpenAI, Google and Microsoft – and the Biden administration.


She said she believes it is only through such collective efforts that proper regulation and enforcement can occur, though she cautioned that overregulation could lead models to overcorrect.

Chowdhury added that it is not easy to define what is toxic or hateful, the outlet reported.

“It’s a journey that will never end,” she said. “But I’m fine with that.”
