A technologist's career delivers only a few moments when they can look up and honestly say: Wow, things just changed. The web browser was one. So was the iPhone. Now it's a new wave of artificial intelligence.

Technologies like the just-released GPT-4 are swiftly being folded into search engines, apps, and other systems that reach billions of people every day. We are clearly at an inflection point. The question is: Can we trust what we're building?

Of course, there is a great deal to be excited about. We're already seeing software developers, artists, and designers create entirely new kinds of things on top of these new tools and APIs. Yet, in a rush to experiment, play, and build, we are racing past the critical questions raised over the last decade about how A.I. can impact people and society. Is it biased? Will it hurt us? Will its ability to quickly produce convincing lies be weaponized?

As we race forward, we risk increasing the harms from the last era of tech: monopolies, misinformation rabbit holes, and regional imbalances of economic power.

Meanwhile, established tech brands are using this new wave of A.I. to further consolidate their control over cloud computing and other key building blocks of the modern internet. These platforms have provided the computing power (and much of the funding) needed to build most of the new A.I. models you've heard about.

Tech giants are quickly rolling out cloud services designed to lock up the market for the infrastructure that startups and companies will need to use these new tools. If they're successful, changing the game will be nearly impossible, and the problems we have with the internet today are likely to get worse.

The good news? Hundreds of thousands of scientists, artists, developers, policymakers, startups, activists, and everyday people have already spent the last few years discussing and experimenting with a different approach to A.I. and tech. The job before us is to turn this loose alliance into a force that can build a truly diverse and trustworthy A.I. ecosystem.

So what would this next wave of A.I. look like if we succeed in changing the status quo? Imagine a world where the technologies we use every day to read the news, connect with our friends, order food, or just check the time are designed with our mental and physical well-being in mind. They would gently let us know things we need to know when we need to know them, and automate the tasks that we don't want to do ourselves. Pause for a moment to picture the barrage of unhelpful notifications and offers that surround us today, and then imagine that going away.

Imagine a world where A.I. truly worked in each of our interests. It would talk to social networks, online shopping services, the government, and the long list of people who have tasks they want you to do. Where possible, it would take care of mundane things, and it would summarize things it needed our input on. Most importantly, it would be owned by each of us and run by trusted third parties, and not by any of the companies or governments we've asked it to negotiate with.

We could imagine a world of digital infrastructure (or, thinking a bit smaller, a highly powerful cloud hosting platform) that is decentralized, low cost, and has a low carbon footprint. Maybe it's owned by a network of small companies spread across every continent. Or, it could be a platform controlled by a cooperative of app developers, designers, and others who are making a living on top of this infrastructure. Whatever it is, it is an alternative to the dominant U.S. and Chinese platforms that control the marketplace today.

How do we make all this a reality? By collectively creating an open-source toolkit that makes it easy for developers, artists, companies, and everyone else to pull A.I. that is trustworthy by default into whatever they are building.

Startups and builders in the A.I. space have a big role to play here, especially those already working on trustworthy or open-source A.I. (or both). We've met engineers who left comfortable jobs at big platforms to explore the idea of responsible recommendation engines. Ultimately, the open-source tools they are building could be taken off the shelf and rolled into apps and services, building content feeds optimized for user input and control, not simply engagement.

Independent researchers across disciplines from computer science to mental health to economics also need to play a role, helping us solve some of the bigger and newer problems we're seeing emerge with generative A.I. For example, we've started working with a team of researchers developing algorithms to help people figure out when A.I. is hallucinating or being deceitful. If they're successful, their tools could be built into apps alongside things like ChatGPT, offering an additional safety layer as A.I. pervades more areas of our digital lives.

Finally, Western governments have already signaled their interest in creating publicly funded research clouds. That could also be a key part of the solution. They could stand up infrastructure that makes it possible for researchers, startups, and nonprofits to work on projects on par with what is currently being built on the cloud platforms of the dominant tech players. This in turn would help decentralize and diversify innovation in A.I.

We hope that Mozilla can also be a part of the solution. That's why we're creating Mozilla.ai, a startup (and a community) that aims to help build out an open-source, trustworthy A.I. stack. We'll start by building tools that add a trust layer on top of the large language models that drive generative A.I.

The dominant players shaping the A.I. landscape today should also be part of the solution, but only a small part.

Better A.I. doesn't have to be science fiction. A.I. technologists have already shown that we can build tools that make it easy to create amazing things. We need to add to that toolkit so it's just as easy to bake trust, safety, and human well-being into the fantastic things we create. And we need to do this urgently.

Mark Surman is the president of the Mozilla Foundation and the chair of the Mozilla.ai Board. Moez Draief is a computer scientist and the incoming managing director of Mozilla.ai.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
