On this week's episode of Fortune's Leadership Next podcast, co-hosts Alan Murray and Michal Lev-Ram talk with Reid Hoffman, cofounder of LinkedIn and partner at Greylock. They discuss the pros and cons of generative A.I.; why Hoffman thinks the A in A.I. should stand for amplification instead of artificial; and the cloned voice he put to work for the audiobook version of his new book, Impromptu: Amplifying Our Humanity Through AI.

Listen to the episode or read the full transcript below. 


Transcript

Reid Hoffman: The reason I like light bulb jokes is I feel that they're a form of cultural haiku. How many Californians does it take to change a light bulb? Five. One to do it. Four to share the experience.

Alan Murray: Leadership Next is powered by the folks at Deloitte, who, like me, are exploring the changing rules of business leadership and how CEOs are navigating this change.

Welcome to Leadership Next, the podcast about the changing rules of business leadership. I'm Alan Murray.

Michal Lev-Ram: And Im Michal Lev-Ram.

Murray: Michal, you want to tell us why Reid Hoffman was telling light bulb jokes at the beginning of this episode?

Lev-Ram: Because they're funny. But really, the reason is that this has become kind of his own little Turing test for ChatGPT. And what he realized with GPT-4 is that this technology, A.I., is finally at a stage where it can tell some pretty funny and, you know, compelling and human-like light bulb jokes. And he's tried this before with other iterations, other technologies, and that hasn't been the case. So this was kind of a little bit of an a-ha moment for him: this is ready for primetime.

Murray: There is no question that A.I. is the topic of the moment. Every conversation I have with a business leader these days, sooner rather than later, gets into A.I. ChatGPT, I think, has captured everyone's imagination.

Lev-Ram: Yeah, and that's why we thought it would be a good topic to discuss on Leadership Next. Reid knows a lot about generative A.I., and A.I. more broadly, for many other reasons. He is the cofounder of LinkedIn; perhaps he's best known for that. But he's also a partner at the VC firm Greylock, and he was an early investor in OpenAI, the company that, of course, developed ChatGPT. And he's written a book called Impromptu: Amplifying Our Humanity Through AI.

Murray: It's a great book, but what's distinctive about it is he actually used GPT-4 to help him write the book.

Lev-Ram: I think Reid is out to prove that the singularity is here. He is well on his way. And he had a lot of really interesting things to say, you know, about how this technology is being applied today and what it can do in the future. And, you know, he's also, kind of like the light bulb joke, fun to talk to.

Murray: He is fun to talk to. Let's go to it. Here's our conversation with the real Reid Hoffman. This is not a clone. This is not ChatGPT. This is Reid Hoffman himself.

Lev-Ram: Reid, welcome to Leadership Next.

Reid Hoffman: Great to be here.

Lev-Ram: All right. I'm going to start with your book, since that came out recently. I received a personalized copy just a little while back. And I'm curious, you know, obviously you've been really early on in all things generative A.I., but what prompted you to write this book, and especially to go about it the way you did? I'd love to hear.

Hoffman: Well, I love the fact that you use "prompted," given Impromptu is the title of the book. I'm sure, with a general literary wit, that was a deliberate word. Exactly. It started with a kind of realization. When I got access to GPT-4, July, August last year, I realized that this was going to be the watershed moment that I had been predicting was upon us. And I wanted to kind of demonstrate some of my thinking on it. And my thinking, you know, reflected in the book, is that artificial intelligence is more amplification intelligence than artificial intelligence. And I said, well, how do I show that? Not just tell it, but show it myself? And I was like, well, I could write a book. And I could write a book using GPT-4 as my co-author, the first book about A.I. with A.I. as a co-author. And then I said, okay, well, what should it be? And I was like, well, maybe a travelogue through the different areas of human concern and experience. And obviously, you can't get them all, but to select a set of the important ones. And then the personalized copy that you got: as I was starting to work on that, I realized that among the many transformations that A.I. as a personal assistant, a personal intelligence helping you, brings to you, is you can do this mass kind of book where you also are doing one-to-one. So you can do prompts that are specific to a person, have the book be specific to a person. And it was like, okay, well, let me do that, too.

Murray: Of course. 

Lev-Ram: I want to hear also about kind of the a-ha moment for you, not just for the book, but for the technology. And what's your deal with light bulb jokes? Has this been like a long-time thing for you?

Hoffman: Well, that's in the personalized content thing. And the reason I like light bulb jokes is I feel that they're a form of cultural haiku. Right? You know, how many surrealists does it take to change a light bulb? You know, fish. How many Californians does it take to change the light bulb? Five. One to do it. Four to share the experience. You know, things that kind of encapsulate, in this little haiku moment, you know, it might be a bad stereotype, but kind of a stereotype lens that fits within our kind of cultural experience. And one of the things that was amazing about GPT-4 is that it has a sense of humor. And it can at least do kind of dad jokes, of which, you know, light bulb jokes can also be a version. And then for me, the a-ha moment started years back; it was part of helping stand up OpenAI.

Lev-Ram: And you're one of the earliest investors in this company, we should say.

Hoffman: Yes, exactly. I helped Sam and Elon and others set it up, and then joined the board. And then in February, you know, I felt that there would be potential conflicts between all the startups asking for special access and all the rest, where, until I left the board, I was like, I can't help you. And you know, it's like, well, given my Greylock job as an investor, it's always a call; it's not usually the answer I want to have. And so we talked about it, and Sam said, look, you can continue to help the company very well not being on the board. And I'll continue to do that, and kind of fit my fiduciary and board responsibilities as such. And it's like, we're realizing finally the benefit of the transistor. Or, if you want to look at it through a different lens, Steve Jobs said the computer is a bicycle for the mind. And now we have a steam engine for the mind. And we're having a cognitive industrial revolution. And I knew that that would come, and exactly which year and exactly which shape, I thought it would come with the launch of GPT-4. But actually, they launched ChatGPT with 3.5 as the backdrop, and everyone could suddenly start using it. It was actually ChatGPT that kicked off the, oh my gosh, you know, this important moment is here now.

Murray: And it was with a light bulb joke.

Hoffman: Well, it wasn't with the light bulb joke. That was part of my general exposure. I mean, I was also doing things like, I did this miniseries on Greymatter of fireside chatbots, podcast-interviewing ChatGPT, so 3.5. And one of the things that I asked, because I've been using GPT-4 to do this, was how would you apply Wittgenstein's theory of following a rule and language games to large language models. And the fact that it gave me coherent, interesting responses was stunning, because it already means we have an A.I. that has superpowers, because most human beings on the planet cannot answer that question coherently. And so the fact that it could was just, like, you know, mind-blowing and awesome.

Murray: Reid, I've been talking to a lot of CEOs of large companies since generative A.I. popped into my consciousness, which was much later than it popped into yours, but I'd say last November. So I've had many of these conversations. To a person, they agree with you that this is transformative technology. But I have to say, most of them, maybe even the vast majority of them, don't quite know how. They're still not quite sure: what the hell do I do with this? Can you provide some guidance? What the hell do they do with this?

Hoffman: So, three lenses. The first lens is one that I published last fall with my partner Saam Motamedi from Greylock, which is: every professional activity. Obviously, there's a wide range, from, you know, journalism, law, medicine, you know, engineering, research analysis, investing. Each of these activities will have essentially a personal A.I. assistant or a copilot within two to five years. And that means that that assistant will be between useful and essential. That itself gives you industry transformation, because if you think about it, every industry has a bunch of professional activities, and that amplification and that change will change it. You know, I wrote an essay last year, when DALL-E came out, saying, look, this is like having Photoshop. Like, if you're a graphic designer and you don't know how to use this image generation, it's like just saying, well, I'm not a graphic designer, just like I didn't know how to use Photoshop. It's kind of a similar kind of amplification. So that's lens one. The second lens is there's going to be a shift in capabilities, kind of in the more general sense, which is, kind of think of it as research assistants. So what these things are is like a research assistant that gives you an immediate answer. Now, the immediacy is amazing and important. But it will also be, although, you know, OpenAI and Microsoft and others are working on this, occasionally quite wrong.

Murray: I'm glad you said wrong, and not hallucinations or some sort of word that sort of fuzzes it over. It's incorrect.

Hoffman: Yes. Exactly. It's incorrect, and it's incorrect with seeming vigor and strength and deep articulation.

Murray: We have some journalists like that.

Hoffman: It's not an unhuman characteristic. And then the third is how products and services will actually, in fact, be changed. For example, let's think of one of the areas where I think there will be substantial job impact, which is customer service, because it's a cost center, and for anything that's a pure cost center, people will try to figure out, well, if you 10x every person, and you 10x every customer service rep, well, then we'll have 10% of them. But say, instead, you're looking at that function and you're going, well, what if we could now make this function not just a what's-the-cheapest-way-we-can-get-you-off-the-phone thing, but we could make it a relationship-building moment, a brand-building moment? Or we could help you, and kind of interact and give things to you from our particular brand perspective, and build our relationship with you? Well, that's now available as kind of a new product.

Murray: And, Michal, if I could follow up on that, because that's great; those three frames give you a great sense of what it can do. Can you just hit a few more notes about what it can't do? Obviously, it can't fact-check. We've established that. But it also can't really reason, or, you know, somebody said it doesn't do math. Can you talk a little bit about the limitations?

Hoffman: Yes.

Lev-Ram: And by the way, Reid, in your book you mentioned, I think, asking GPT-4 for the fifth line of the Gettysburg Address, and how challenging that is for the technology, which I found interesting.

Murray: Counting.

Hoffman: Yes, exactly. One easy way to screw these large language models up is to ask them about prime numbers, things that human beings can understand pretty well. And it's very easy to get them to be equally insistent about something that's wrong on prime numbers. So one cautionary note, and I will express limitations, is that the technology is evolving a lot. So, for example, both OpenAI and Microsoft are working to have current information. They're working to reduce hallucination. They're working to have, you know, kind of sources of information, or ways that its error rate, kind of, in general, more approaches a human being's error rate on these things.

Because remember, you know, our standard is a human being, right? And that is not error-free. And so, you know, math: usually when you ask, because it's trained to try to be super interesting in its response to you. Like, what would be really compelling and interesting to you. And if you ask it a question about something it doesn't really know very much about, like I said, maybe it would know, Alan, your biography, but maybe it doesn't. And if I ask a question in a way that presumes a yes, did Alan create a really interesting journalism video game? It'll go, oh, shit, maybe he did, and I don't know about it. And then it'll create this Wikipedia page about how you created a video game about journalism. That's interesting. So that's the kind of thing. And that's also, if you say, give me citations, it goes, okay, well, he really wants citations, so we'll make some citations. And those citations are incorrect. Like, you couldn't look up the citations. And here's the most funny one. One of the personalized books that I sent out, which I didn't cross-check. Some of the prompts I cross-checked, because I wanted to see. But other prompts I didn't cross-check, because I thought, oh, it'll just get this right, it's fine. So: create a music list for you. One of the music lists it created had three fictional songs. Like, those songs don't exist. And you're like, oh, I wasn't cross-checking that, because I didn't think it would get that wrong.

Murray: Can't we do better than that?

Lev-Ram: But why does it do that? I mean, is it just aiming to please humans? Like, why? Why does it make up stuff?

Hoffman: Fundamentally, that. Because it's trained to be generative and creative and interesting. And obviously, through, like, algorithmic and human feedback, we're trying to train it to be true as well, in which case it frequently is true. And it's safer when it's not factual stuff. It's safer when it's, like, principles. Like, you know, what would be the questions one would ask in due diligence of a technology company of type X? You know, hardware, software. It'll be pretty good about, like, the general class of questions and so forth. Because, as opposed to being factual about it and going, oh, I don't know, so I'm going to invent because I'm creative, it'll go, okay, here's the stuff, and it'll be very, very good. And so that's the reason why, as a copilot, as a personal assistant, it's really good. Now, I think these are solvable problems. I think the math stuff is a solvable problem. I don't think these will always be this way. But it's kind of a snapshot in time about how to use them as a system, how to use them as a transformation of work. And that's, again, one of the reasons why if I say, well, I'll just have, you know, GPT-4 do my marketing, well, that could be a bad idea.

[Music starts]

Murray: Jason Girzadas, the CEO-elect of Deloitte US, is the sponsor of this podcast and joins me today. Welcome, Jason. 

Jason Girzadas: Thank you, Alan. It's great to be here.

Murray: I have a sense, Jason, from conversations on Leadership Next and elsewhere, that business leaders today better understand the benefits of having a diverse set of voices at the management table. But what are some of the lessons you've learned through Deloitte's own DEI journey?

Girzadas: Yeah, lots of lessons learned. I think we've certainly made progress. We feel like that's a function of a couple of things. Deloitte is very proud to have published, twice, a transparency report that sets forth long-term expectations for the diversity of our workforce and how we hold ourselves accountable. That is meant to be, and, I think, has served to be, a role-model stance for us to take, and one that we encourage all businesses to replicate. The second is to get specific. In addition to transparency, there are specific objectives around gender diversity, around Black and Hispanic, Latinx, as well as other cohorts that we have really established not only recruitment and retention but also advancement goals for. And finally, adding to the mix, how we intend to hold ourselves accountable for supplier diversity, as well as longer-term ambitions for us in this space. So our experience is somewhat emblematic of what a lot of large organizations go through. But for us, the commitment and transparency, as well as the specificity around cohorts, has made a difference. And we've seen positive results in the last two years that we're hoping to build upon. Do we declare success? Absolutely not. But it's made all the difference for us.

Murray: Jason, thanks for your perspective and thanks for sponsoring Leadership Next.

Girzadas: Thank you. 

[Music ends]

Lev-Ram: I think it's fascinating that you kind of have your own little personal Turing test for this technology, which is the light bulb joke. And clearly, one of the reasons I think that, you know, this has exploded into kind of the mainstream consciousness is because it's so creative and so fun to interact with. But there's a lot of concern, there's a lot of fears, and disruption to the labor market, and call it amplification, artificial, or whatever you want to call it. How should CEOs be talking about this to their employee base? We're seeing IBM's CEO has already come out and said that, you know, this will impact 30% of jobs in a certain category. But there's a lot of fears; there's, you know, the writers' strike in Hollywood, like. That's one of the fears, that they're going to be replaced. I mean, Alan and I could be replaced, you know.

Hoffman: Not anytime soon.

Lev-Ram: Well, maybe next time we'll have your co-writer on.

Murray: Thank you for that, Reid. We'll take that to the bank.

Lev-Ram: But really, like, how are CEOs thinking about this? How should they be thinking about this? What's your advice to them? And, you know, I'm also curious to hear, these are a lot of questions, sorry, but are tech CEOs looking at it differently than non-tech CEOs, do you think?

Hoffman: Tech CEOs are probably a little bit more familiar and a little bit ahead of the curve, but probably it's similar as a group, as a tribe. So one lens into this is to think, so you said, okay, well, these assistants, these copilots, give everyone 10x superpowers. Look through a company and say, well, you've got salespeople; are you going to have fewer because you have 10x superpowers? No, no, we like sales, even 10x sales or whatever else. Now, the jobs will be different. Like, so, for example: oh, we hire these people to be running our digital ad campaigns, and it's a lot of form-filling and all the rest. Well, all the form-filling stuff is going to be, you know, really amplified. We don't need as many people doing that. We'll need more people doing things like thinking about, well, what are the other ways to think about it and what to do. And if you walk through most of the areas, product, engineering, operations, finance, even legal, by the way, for other things people are very hopeful that legal bills will go down. But you go through the whole thing and you go, well, actually, in fact, it doesn't necessarily; it changes, it transforms the nature of the human job, but doesn't necessarily go, okay, now we can slash and burn. Now, the IBM CEO's comments, I think that was a little bit of a kind of, let me justify, in a difficult market, the fact that I'm kind of doing layoffs and freezes and so forth, and let me blame A.I. And I think we'll see a lot of that. It's way too early to be saying 30% of this job function is going away. The tools aren't there yet. They might get there. And if you think you have no upside in your business, and you only have cutting costs and downside, well, then that will be a natural thing of how you increase profits. I'm not saying it's all clear sailing, blue skies, you know, etc., etc. These transformational moments will be real. There will be job transformation.
There will be some jobs that go away because of this, and navigating all that is really important, both as CEOs and as societies. Now, one of the things I love about A.I. as a technology, and again, part of the reason why I did Impromptu, was to say, well, A.I. can be part of the solution. Like, say, take customer service. You say, well, all right, a bunch of customer service people are not going to have jobs. All right, well, how do you reskill them? How do you help match them to other jobs? How do you give them superpowers to do other jobs? Well, A.I. is an answer on all three of those things. And so when you say, well, what should we be doing as leaders? What should we be doing as government people? What should we do? Well, let's help people. Let's use the technology to help do the transition to being in the full swing of the cognitive industrial revolution.

Murray: Reid, you're an optimist, and Michal and I are optimists, and, and I think

Lev-Ram: Wait, why did you lump me in with the optimists here? 

Murray: Okay, Michal is sometimes an optimist. And I think there's historical experience to support that optimism. But I want to take you down a dark hole here for a minute. I mean, I've been a journalist since I was nine years old. Michal has been a journalist her whole life. We were raised on a great respect for facts. We believe in facts. We think facts actually exist. That there is, you know, in some areas, there is, you know, discernible truth, and we were trained on techniques to find it. That's obviously deteriorated in recent years. Social media certainly has something to do with that. The fact that everybody is in our business now has something to do with that. There are lots of other reasons that you can cite. But I'm really worried about this, that this was loosed upon the world with zero respect for facts, and what is the effect going to be on our society as we continue to devalue and undercut the factual basis of our interactions?

Hoffman: Well, as a philosopher by training, I am also a great believer in truth with a capital T and facts with a capital F. I wouldn't say it was loosed with zero respect. There was a lot of effort to try to get factual information. And so it doesn't mean it's perfect; its error rate is higher than we'd like, for sure. Also, by the way, there are easy ways to do this. There's this whole stack of how the tech is going, which is, like, there's going to be this area of meta-prompting. And if you put meta-prompts in that say, well, this is a fact, and use this as part of your response, it will then conform to that fact. So I don't think the zero regard for facts, I think it's a nice slogan, but not true. Speaking of facts. But on the other hand, I completely concur: oh my gosh, have we been having a degradation of civil discourse, of the importance of truth-seeking, of discerning facts, and we need to be there. And then we need to figure out how we get there as, kind of, human beings. And by the way, A.I. can help with that. So, for example, one of the things that I most liked during the election, you know, this is Twitter pre-Elon, was one of the things that Twitter was doing, which is to say, hey, if something seemed very off expert consensus, open a little box around it and say, look here to get the facts. Right? It wasn't saying, this is wrong; you can't say the moon is made out of blue cheese, or 2020 was an unfair election, or whatever. But you could say, hey, if you're saying that, we're going to put this little box around it to direct people to say, over here is where you can find facts. And that kind of thing is the kind of thing that A.I. can help with a lot. And so I think it's more of a human problem to solve, the problem that you're talking about, Alan, and I want to solve it, and I think we should.
I think it's necessary because, what we should be doing, and it's one of the things I love about good media, of which, you know, part of the reason I'm on this podcast is I agree with you guys on this stuff, is to say, we should be collectively learning. Like, there is such a thing as facts, there is such a thing as truth, and we should be learning toward that together, and it's an infinite journey, but that's a good thing to do. And so I'm strongly bullish on that.

Murray: That's good. And if I did overstate my question, it's because I asked ChatGPT to write my short biography, and it made me 10 years older than I actually am.

Lev-Ram: I think you are so personally offended 

Murray: I was. 

Lev-Ram: I think, you know, the hope is obviously not only that the people who are leading the charge here are going to be thoughtful about it, but also that the regulatory forces that be actually make some smart decisions here. We'll see. But in the meantime, it's just moving so fast, which makes it so much more difficult, right, to do all that good stuff in conjunction. As an investor, though, putting your investor hat on: huge opportunity, you know. OpenAI aside, I feel like as a journalist, every other pitch I get, actually all pitches I get, have some generative A.I. slant, at least. Like, what happens next? Are you seeing, you know, boundless opportunity? Is there some shakeout? What percentage of it is kind of B.S.? Like, who's really utilizing generative A.I.?

Hoffman: Well, it's just like any of these major tech waves, even though I think A.I. is the most major of my lifetime, in part because it's a crescendo. It builds on the internet, it builds on mobile, it builds on cloud, and it's an amplifier across all of it.

Lev-Ram: Is it more major than any one of those individually? 

Hoffman: Yes, because it's an amplifier, right? It amplifies on top of that. But remember, like, internet: we had all kinds of crazy stuff. Mobile: we had all kinds of crazy stuff. And so it'll be a bunch of crazy stuff, too. It'll be, like, there'll be, you know, "It's not really A.I." Overstated claims. It doesn't really do what it claims to do. We'll have all that stuff. That's human entrepreneurship when everyone's running toward the gold rush. You will also, of course, have many, many amazing things. And so those are, like, super important things for us to kind of move forward on. And of course, then the investing theory, you know, is, well, across all this, now, I had the fortune of position to have seen this early. So we at Greylock started investing years ago on this stuff, which, you know, is like Adept and Inflection and Cresta and Snorkel and all these companies, and all of our portfolio companies started pivoting toward kind of generative A.I., increasing their features, you know, like Tome and Coda and everything else, well before the public market realized it, because, you know, that's one of the benefits of having a lucky venture firm along with you. And I think there's a ton of stuff that's still available. It's not just like, oh, the really good investment was two years ago or three years ago. I think there's a bunch. You have to be discerning about a lot of the principles that still apply within business, like, what's your go-to-market? What's your competitive differentiation? You know, why is it that this will be, for example, a good startup product versus a good product from a larger company? Because there are, you know, some places here beyond the usual set of customers and kind of in-depth enterprise relationships.
Some other advantages that the large companies have is, well, if you're going to be doing a training run on a multibillion-dollar computer, you know, large companies do multibillion-dollar computers much better than startups. So you have to kind of sort through all that as an investor. But, you know, I think there is just, what is it? There's gold in those hills.

Murray: So there's one other issue that I think we need to address. And that is, what does this do to intellectual property? If somebody can take this podcast and create the Hoffman voice, how do you stop that? Or if somebody is painting pictures in the style of, how does the artist stop that? I was talking to somebody who's pretty deep into the technology, who said the first big challenge to the Supreme Court on this will be a copyright challenge. So what's the answer to that?

Hoffman: Well, I think we're going to have to work out new law for it. I think the old law won't apply exactly right. Because, by the way, if I created a painting, you know, me painting in the style of X, that's allowed. If I, you know, said, hey, I'm going to, I'm totally incompetent at this so I couldn't do it, but if I were going to take either of you and try to, like, voice-impersonate you, that's allowed. You know, I can't say that I'm you, but I could say that it's in the style of, and that's an allowed thing. So now I have this tool that I'm doing it with, that suddenly gives me the superpowers to do what was previously limited. All right, am I allowed to do it in those ways now, because I have a tool? So the law is going to have to be careful on this, and we want to navigate it. Now, my suggestion would be, and this is early, so I could easily mod this suggestion in a couple of months as I see it through, because it's kind of the human dynamics of, you know, protecting the intellectual work of human beings to be able to have the incentive to do it. It's part of the reason why we have those laws in the first place. [Inaudible] I would tend to say that you have to kind of disclose that you're using the tools. You have to be clear that it's in the style of when you produce it, just like robots.txt for can you put it in a search engine or not. You have to say, can you use this for a training run or not? And, you know, contact me if you want to use it. You know, that kind of stuff, I think, is part of what elements of the future probably look like.

Lev-Ram: This is maybe one place where I'm not overly optimistic. Is the law catching up in time? We haven't seen good examples of that. But maybe we'll be surprised this time around. Okay, and perfect segue to the audiobook, because clearly you are embracing this. So tell us who's going to be narrating the audio version of the book.

Hoffman: So one of the internal products that Microsoft has is an incredibly good voice cloning product, which I think they are unlikely to release, because they want to be good to all the creators and so forth, and they don't want to have people voice cloning other people. But I went to them and I said, look, this product is really amazing. I've just done this book. Can I use this product to voice clone myself? Because it's me voice cloning me to do this. So that's what we're going to do. And I think it'll be out pretty soon. We are cross-checking it; you know, we're the product alpha test, because it's like, oh, the pronunciation of this unusual name? Not quite right. And there'll be some of those errors anyway. But, you know, hopefully it will amaze and delight.

Murray: By the way, that voice cloning thing, I hate to take a stab at the dark side again, but that is one of the spooky things. I heard Nikesh Arora say that, you know, some people are doing that voice cloning to trick people into moving money around, you know, to do cyberattacks.

Hoffman: A hundred percent. But, by the way, again, this is where you say, well, A.I. is part of the solution: you have an A.I. assistant running on your phone that says, wait a minute, are you sure about this? This could be a phishing attack. And that's part of the reason why I think the good actors should be moving faster to build up the defenses. That is definitely, you know, one of the, like, a whole bunch of cyber hacking is among the things, human amplification, amplification of bad humans and bad activity, is precisely one of the things that we should be most worried about when we're talking about the risks.

Lev-Ram: So what you're saying is that we're setting the stage for just this massive battle between good A.I. and bad A.I. That's what's going to happen, basically.

Hoffman: Or A.I. in the hands of good humans and A.I. in the hands of bad humans.

Lev-Ram: Amplification. It all goes back to that. Reid, thank you so much. I feel like we could go on and on; we all have so many questions about this. This is the big question for all of us, you know, and I think not only for the business world but beyond. So thank you for shedding some light on amplification intelligence. That's what we're supposed to call it, right?

Lev-Ram: And I can't wait for the audiobook. I'm going to spend some time with that Hoffman clone.

Hoffman: Yes. And I look forward to your feedback.

Lev-Ram: We'll tell you which one we like better.

Hoffman: Uh-oh. I might be scared to hear that. But I'd be delighted.

Lev-Ram: Thank you, Reid.

Hoffman: Thank you.

Lev-Ram: Leadership Next is produced by Alexis Haut and edited by Nicole Vergara. Our theme is by Jason Snell. Our executive producer is Megan Arnold. Leadership Next is a production of Fortune Media.

Murray: For even more Fortune content, use the promo code LN25. That'll get you 25% off an annual subscription at Fortune.com/subscribe.

Leadership Next episodes are produced by Fortunes editorial team. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte or its personnel. Nor does Deloitte advocate or endorse any individuals or entities featured on the episodes.

