A woman who signed up to help test a new version of a robotic vacuum cleaner did not expect pictures of her taken on the toilet to end up on social media. But through a third-party leak, that's what happened.

The trial in 2020 went sideways after iRobot, which produces Roomba autonomous robotic vacuum cleaners, asked employees and paid volunteers to help the company gather data to improve a new model of the machines by using them in their homes. iRobot said it made participants aware of how the data would be used and even affixed the models with "recording in process" tabs.

But through a leak at an outside partner, which iRobot has since cut ties with and is investigating, private pictures ended up on social media.

The machines are not the same as the production models now in consumers' homes, the company was quick to add, saying it takes data privacy and security "very seriously," not only with its customers but in every aspect of its business, including research and development.

Growing mistrust

As A.I. continues to grow in both the professional and private sectors, mistrust of the technology has also increased because of security breaches and a lack of understanding.

A 2022 study by the World Economic Forum showed that just half of the people interviewed trusted companies that use A.I. as much as they trust companies that don't.

There is, however, a direct correlation between people who trust A.I. and those who believe they understand the technology.

This is the key to improving users' experience and safety in the future, said Mhairi Aitken, an ethics fellow at the Alan Turing Institute, the U.K.'s national establishment for data science and artificial intelligence.

"When people think of A.I. they think of robots and The Terminator; they think of technology with consciousness and sentience," Aitken said.

"A.I. doesn't have that. It is programmed to do a job and that's all that it does; sometimes it's a very niche task. A lot of the time when we talk about A.I. we use the toddler example: that A.I. needs to be taught everything by a human. It does, but A.I. only does what you tell it to do. Unlike a human, it doesn't throw tantrums and decide what it wants to try instead."

A.I. is used widely in the public's day-to-day life, from deciding which emails should go into your spam folder to your phone answering a question with its built-in personal assistant.

Yet it's the entertainment products, like smart speakers, that people often don't realize use artificial intelligence, Aitken said, and these could intrude on your privacy.

Aitken added: "It's not like your speakers are listening; they're not. What they might do is pick up on word patterns and then feed this back to a developer in a faraway place who is working on a new product or service for launch.

"Some people don't care about that; some people do. And if you're one of those people, it's important to be aware of where you have these products in your home. Maybe you don't want one in your bathroom or bedroom. It's not down to whether you trust A.I., it's about whether you trust the people behind it."

Does A.I. need to be regulated?

Writing in the Financial Times, Marietje Schaake, the international policy director at Stanford University's Cyber Policy Center, said that in the U.S. hopes of regulating A.I. seem "a mission impossible," adding that the tech landscape "will look remarkably similar by the end of 2023."

The outlook is slightly more optimistic for Europe, after the European Union announced last year that it would create a broad standard for regulating or banning certain uses of A.I.

Issues like the Roomba breach are an example of why legislation needs to be proactive, not reactive, Aitken added: "At the moment we're waiting for things to happen and then acting from there. We need to get ahead of it and know where A.I. is going to be in five years' time."

This would require the buy-in of tech competitors across the globe, however. Aitken says the best way to achieve it is to attract skilled people into public regulation jobs, people who will have the knowledge to analyze what is happening down the line.

She added that awareness around A.I. is not just down to consumers: "We know that Ts&Cs aren't written in an accessible way. Most people don't even read them, and that's intentional. They need to be presented in a way in which people can understand, so they know what they're signing up for."
