Microsoft's chief economist has a retort to people who worry about artificial intelligence being misused for crime or other nefarious purposes: Cars can be unsafe too. The key is putting safeguards in place. 

"We do have to worry a lot about [the] safety of this technology, just like with any other technology," Michael Schwarz said about A.I. during a World Economic Forum panel on Wednesday. 

Vehicles, for example, get people to where they want to go. But they're also a danger because of accidents and pollution. 

"I hope that A.I. will never, ever become as deadly as [an] internal combustion engine is," Schwarz said. 

A.I.'s dangers can be hard to dismiss, even for the companies working on it, and Schwarz acknowledges that the technology can cause harm in the wrong hands. 

"I am quite confident that A.I. will be used by bad actors, and yes it will cause real damage," Schwarz said. "It can do a lot of damage in the hands of spammers, people who want to manipulate elections, and so on." 

But some of that can be avoided, he said. "Once we see real harm, we have to ask ourselves the simple question: Can we regulate that in a way where the good things that will be prevented by this regulation are less important?" Schwarz said. "The principle should be that the benefits from the regulation to our society should be greater than the cost to our society." 

Microsoft is focused on the good that A.I. can bring society and is working to develop A.I. to help people achieve more by making them more efficient, Schwarz said. 

"We are optimistic about the future of A.I., and we think A.I. advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly," a Microsoft spokesperson told Fortune. 

Microsoft has been a key player in the recent surge of generative A.I. technology. The company has built a chatbot using technology from OpenAI and has incorporated it into a number of products. Microsoft also plans to invest $10 billion in OpenAI over several years, after having already pumped money into the startup in 2019 and 2021. 

Schwarz's warning about A.I. echoed, to a point, recent remarks by Geoffrey Hinton, a former Google vice president and engineering fellow who helped create some of the key technologies powering today's widely used A.I. tools and who is referred to as the "Godfather of A.I." He warned that it may be tough to stop A.I. from being used for fraud. 

"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton told the New York Times in an interview published Monday. 

"I console myself with the normal excuse: If I hadn't done it, somebody else would have."

One of Hinton's concerns is the availability of A.I. tools that can create images and aggregate information in a matter of seconds. They could lead to the spread of fake content that an average person would have trouble distinguishing from accurate information. 

While Schwarz and Hinton worry about how bad actors may misuse A.I., the two experts diverge in how they think A.I. may impact certain jobs.

During the WEF panel, Schwarz said people are paranoid about their work being replaced by A.I. and that they shouldn't be too worried about it. But Hinton, who worked at Google from 2013 until recently, said that there is a real risk to jobs in an A.I.-dominated work environment. 

"It takes away the drudge work," Hinton said. "It might take away more than that." 

Calls to halt advanced A.I. development

In March, over 25,000 tech experts, from academics to former executives, signed an open letter asking for a six-month pause in the development of advanced A.I. systems so that their impact could be better understood and regulated by governments. The letter argued that some systems, such as OpenAI's GPT-4, introduced earlier that month, are becoming human-competitive at general tasks, threatening to help generate misinformation and potentially automate jobs at a large scale. 

Executives at tech giants like Google and Microsoft have said that a six-month pause will not solve the problem. Among them is Alphabet and Google CEO Sundar Pichai. 

"I think in the actual specifics of it, it's not fully clear to me how you would do something like that today," he said during a podcast interview in March, referring to the six-month moratorium. "To me, at least, there is no way to do this effectively without getting governments involved. So I think there's a lot more thought that needs to go into it."

Microsoft's chief scientific officer told Fortune in an interview in April that there are alternatives to pausing A.I. development. 

"To me, I would prefer to see more knowledge, and even an acceleration of research and development, rather than a pause for six months, which I am not sure would even be feasible," said Microsoft's Eric Horvitz. "In a larger sense, six months doesn't really mean very much for a pause. We need to really just invest more in understanding and guiding and even regulating this technology: jump in, as opposed to pause."

