Among the many voices clamoring for urgent regulation of artificial intelligence is Timnit Gebru.
Gebru has all the hallmarks of a Big Tech star: a master's and Ph.D. from Stanford, and engineering and research roles at Apple and Microsoft before joining Google as an A.I. expert.
But in 2020 her time co-leading the ethical A.I. team at the Alphabet-owned company came to an end, a decision triggered by a paper she wrote warning of the bias being embedded into artificial intelligence.
Bias is a topic that experts in the field have raised for many years.
In 2015 Google apologized and said it was appalled by its Photos app, powered by A.I., labeling a photograph of a black couple as gorillas.
Warnings about A.I. bias are now becoming higher profile: earlier this year the World Health Organization said that although it welcomed improved access to health information, the datasets used to train such models may have biases already built in.
Such cautions, Gebru argued, are exactly why the public needs to remember it has agency over what happens with artificial intelligence.
In an interview with The Guardian, the 40-year-old said: "It feels like a gold rush. In fact, it is a gold rush.
"And a lot of the people who are making money are not the people actually in the midst of it. But it's humans who decide whether all this should be done or not. We should remember that we have the agency to do that."
Gebru also pushed for clarification on what regulation would entail, after thousands of tech bosses (including Tesla's Elon Musk, Apple co-founder Steve Wozniak, and OpenAI's Sam Altman) said some guardrails need to be put on the industry.
But leaving it to tech bosses to regulate themselves wouldn't work, Gebru continued: "Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive."
It's humans, not robots
The founder and director of the Distributed AI Research Institute (DAIR), an independent A.I. research unit, also had a powerful reminder about the hypothetical threat the technology poses to humanity.
Fears range from a Terminator-like apocalypse (if you ask Musk) to the technology being used as a weapon of war, with others suggesting that it already thinks of mankind as scum.
Gebru isn't sold.
"A.I. is not magic," she said. "There are a lot of people involved: humans."
She said theories that services like large language models could one day think for themselves ascribe agency to a tool rather than to the humans building the tool.
"That means you can abdicate responsibility: 'It's not me that's the problem. It's the tool. It's super-powerful. We don't know what it's going to do.' Well, no, it's you that's the problem," Gebru continued.
"You're building something with certain characteristics for your profit. That's extremely distracting and takes the attention away from real harms and things we need to do. Right now."
However, Gebru remained optimistic: "Maybe, if enough people do small things and get organized, things will change. That's my hope."