The Tim Ferriss Show
#541: Eric Schmidt: The Promises and Perils of AI, The Future of Warfare, Profound Revolutions on the Horizon, and Exploring the Meaning of Life

Eric Schmidt, Tim Ferriss
Oct 26, 2021

Episode Transcript
0:00
This episode is brought to you by Pique Tea. That's P-I-Q-U-E. I have had so much tea in my life. I've been to China, I've lived in China and Japan, I've done tea tours. I drink a lot of tea, and 10-plus years of physical experimentation and tracking have shown me many things, chief among them that gut health is critical to just about everything, and you'll see where tea is going to tie into this. It affects immune function, weight management, mental performance, emotional health, you name it. I've been drinking fermented pu'er
0:30
tea specifically pretty much every day for years now. Pu'er tea delivers more polyphenols and probiotics than you can shake a stick at. It's like providing the optimal fertilizer to your microbiome. The problem with good pu'er is that it's hard to source. It's hard to find real pu'er that hasn't been exposed to pesticides and other nasties, which is super common. That's why Pique's fermented pu'er tea crystals have become my daily go-to. It's so simple. They have so many benefits that I'm going to get into, and I
1:00
learned about them through my friends Dr. Peter Attia and Kevin Rose. Pique's crystals are cold-extracted, using only wild-harvested leaves from 250-year-old tea trees. I often kick-start my mornings with their pu'er green tea and their pu'er black tea, and I alternate between the two. The rich, earthy flavor of the black specifically is amazing; it's like a delicious barnyard, very peaty, if you like whiskey and stuff like that. They triple toxin screen all of their products for heavy metals, pesticides,
1:29
and toxic mold contaminants commonly found in tea. There's also zero prep or brewing required, as the crystals dissolve in seconds, so you can just drop them into your hot tea, or I also make iced tea, and that saves a ton of time and hassle. So Pique is offering 15% off their pu'er teas for the very first time, exclusive to you, my listeners. This is a sweet offer. Simply visit piquetea.com/Tim. That's P-I-Q-U-E-T-E-A dot com, forward slash Tim. This promotion is
2:00
only available to listeners of this podcast at piquetea.com/Tim. The discount is automatically applied when you use that URL. You also have a 30-day satisfaction guarantee, so your purchase is risk-free. One more time, check it out: piquetea.com, that's P-I-Q-U-E-T-E-A dot com, slash Tim.
2:21
This episode is brought to you by ButcherBox. ButcherBox makes it easy for you to get high-quality, humanely raised meat that you can trust. They deliver delicious, 100% grass-fed, grass-finished beef, free-range organic chicken, heritage-breed pork, and wild-caught seafood directly to your door. For me, in the past few weeks, I've cooked a ton of their salmon, as well as two delicious barbecue rib racks in the oven, super simple. They were the most delicious pork ribs I've ever prepared, and my freezer is full of ButcherBox.
2:51
When you become a member, you're joining a community focused on doing what's better for all. That means caring about the lives of animals, the livelihoods of farmers, treating our planet with respect, and enjoying better meals together. ButcherBox partners with folks, small farmers included, who believe in going above and beyond when it comes to caring for animals, the environment, and sustainability, and none of their meat is ever given antibiotics or added hormones. So, how does it work? It's pretty simple. You choose your box and your delivery frequency. They offer five curated
3:21
box options, as well as the popular custom box, so you get exactly what you and/or your family love. Box options and delivery frequencies can be customized to fit your needs. You can cancel at any time with no penalty. ButcherBox ships your order frozen for freshness and packed in an eco-friendly, 100% recyclable box. It's easy, it's fast, it's convenient. I really, really enjoy it. And best of all, looking at the average cost, it works out to be less than $6 per meal. Skip the lines for your Thanksgiving turkey this year.
3:51
ButcherBox is proud to give new members a free turkey. Just go to butcherbox.com/Tim to sign up. That's butcherbox.com/Tim to receive a free 10-to-14-pound turkey in your first box. "At this altitude, I can run flat out for a half mile before my hands start shaking." "Can I ask you a personal question?" "I'm a cybernetic organism: living tissue over metal endoskeleton."
4:29
Hello, boys and girls, ladies and germs. This is Tim Ferriss. Welcome to another episode of The Tim Ferriss Show. My guest today is Eric Schmidt. That's S-C-H-M-I-D-T, on Twitter @ericschmidt. Eric is a technologist, entrepreneur, and philanthropist. He joined Google in 2001, helping the company grow from a Silicon Valley startup to a global technology leader. He served as chief executive officer and chairman from 2001 to 2011, and as executive chairman and technical advisor thereafter. Under his leadership, Google dramatically
4:59
scaled its infrastructure and diversified its product offerings while maintaining a culture of innovation. In 2017, he co-founded Schmidt Futures, a philanthropic initiative that bets early on exceptional people making the world better. He serves as chair of the Broad Institute and formerly served as chair of the National Security Commission on Artificial Intelligence. He is the host of Reimagine with Eric Schmidt, a podcast exploring how society can build a brighter future after the COVID-19 pandemic. With co-authors Henry A. Kissinger and Daniel Huttenlocher,
5:29
Eric has a new book out titled The Age of AI: And Our Human Future. You can find him, again, on Twitter @ericschmidt and at ericschmidt.com. Eric, welcome back to the
5:38
show. Thank you. I really look forward to this conversation.
5:43
I have been looking forward to this, and I want to confess first and foremost that I have tremendous insecurity around my lack of clarity on AI. So I am really looking forward to digging into many facets, but before we get to that,
5:59
I want to pick out Henry Kissinger. How did you come to collaborate with Henry Kissinger?
6:07
About 12 years ago, I met him at a conference called Bilderberg. My father worked for the Nixon administration when I was very young, and my father had Henry as a hero. He said he was the most brilliant and hardest-working, and he was enigmatic because he has both a very strong
6:29
reputation but also a very controversial reputation. So we chatted, and he said, "The only problem I have with Google is I think that you're going to destroy the world." And I thought, well, that's a challenge from Henry Kissinger. So we invited him to Google, where he gave a speech, and he confronted the employees directly on the manipulation that Google was doing, in his view, of the public discourse. And his criticisms were so apt,
6:59
and the people enjoyed it so much, that we struck up a friendship. And although we were politically very different, with very different backgrounds, I've come to learn that working with someone this brilliant at any age is phenomenal. But when your co-author is 98, it's a special treat.
7:17
I looked up the age; I'm glad you mentioned 98. To what do you attribute him remaining cognitively sharp into his late 90s? Is that just
7:29
good hardware out of the box? Is there more to it?
7:32
He works harder than a 40-year-old, I can tell you that. He gets up in the morning and he works all day. He has dinner with his wife and his family, and he works at night, and he keeps up that pace seven days a week at 98. I am convinced that the secret to longevity is being a workaholic. And the reason I say that is that
7:59
Henry Kissinger, at the age of 90, knew nothing about the digital world,
8:06
although he had a lot of opinions about it. But he has mastered the digital world and artificial intelligence with the alacrity and the speed of people who are just getting into it now.
8:19
That's unique to him. That's a gift, and that's why his analysis of our world is so incredibly interesting to
8:27
me. All right, we're going to spend a little more time on Henry because I must scratch this itch. So you are very good at systematizing thinking from first principles, whether it's systematizing innovation or hiring, thinking of things at scale. There's a method to, not the madness, but the outcomes.
8:49
Was Henry's ability to learn so quickly based on some approach that he has to first principles, or a framework of any type that you've seen exhibited in him that he applies to new domains? There have been some
9:05
studies about the age at which you are your most productive professionally, and as you know, in math and science, brilliance tends to show up young. In their 20s, they tend to get early awards.
9:18
Historians, however, seem to get better with age. Maybe it's the accumulation of perspective and the accumulation of reasoning and the depth of wisdom that is represented by increasing age. And so Dr. Kissinger has both the benefit of being a brilliant historian and also having changed history and lived in the moment. And today he spends a great deal of his time with people talking to him
9:48
about current affairs, and judging them with his historical principles in mind. It is from that basis of insight that when he looked at AI, he said this is a very much bigger thing than people think it is. And I said to him, "Why?" I honestly didn't know. And he said, "Because we're discussing artificial intelligence as though it's a technology.
10:17
This is like the beginning of the Renaissance." And I said, "Tell me about the Renaissance." What else do you say to a historian who's famous? And we started talking about the Renaissance, and he said that the Renaissance is really about the Age of Reason. It's about individuals being able to think through their systems. It's about society allowing experts to criticize other people. Before the Renaissance, decision-making was essentially hierarchical, and from a king
10:47
or religious leader.
10:49
That change allowed us to develop intellectual thought. He is arguing that we're entering a new epoch similar to the Renaissance, this age of artificial intelligence, because humanity has never had a competitive intelligence similar to itself, but not human.
11:10
I'd love to hear you elaborate just a bit, and then we're going to dig into a whole slew of questions that I have in front of me: how
11:17
you thought about the composition of the co-authors on this book. I certainly would love to learn more about Daniel; I know a little bit about Daniel Huttenlocher, but I'd love to hear more. And we could start with Dr. Kissinger. I realize calling him Henry is probably going to give me bad karma, so I'll start with Dr. Kissinger. Is it his broad familiarity with history, as well as his knowledge of geopolitics and statecraft,
11:47
that you were hoping would augment everything else in the book? Maybe you could just speak to how you think about what each party brings to this project.
11:57
In our collaboration, the book suggestion was actually from Dr. Kissinger, and what he basically said is that we have an opportunity to architect the questions that need to get answered in the next 10, 20, 30 years. I would parenthetically offer that
12:17
we didn't understand, when we invented social media 15 years ago or so, the extraordinary and compound benefits and costs of social media to our society, in particular our political discourse.
12:32
So armed with that knowledge, that the tech industry had invented a tool that outstripped the governance of it, at least initially and maybe for a long time, Dr. Kissinger said this is an opportunity to ask the right questions. We recruited Dan Huttenlocher, who is the dean of computing at MIT, partly because he's a good friend and partly because he's such a good scientist. He will make sure that our claims are accurate and
13:01
worked very hard to get the path of AI correct. Many of the books about artificial intelligence are speculative, but I'm not aware of any book that has both the geopolitical, social, and historical context but also has the technology
13:19
correct.
13:21
Just to look at the opposite of speculative, from a first-hand perspective, could you tell a story or two about what you've seen or experienced firsthand with
13:34
AI?
13:36
Well, you use AI today whenever you use Google search, Google Ads, spelling correction, Google Translate. The story on AI that's not really well understood is that in the 1960s and '70s, when I started, AI was going to happen within a decade, and my friends who were AI-obsessed got their PhDs in this area.
14:02
And then everything stopped. It stopped working, and there was a period of about 20 years, which is known as the AI winter, where the systems didn't work. And then a series of mathematicians in the '80s and '90s invented what is today known as deep learning. I'll spare you the technical details. The important thing about this deep learning is it allows the manipulation of patterns at scale, and that allowed these algorithms to work.
14:30
And the big breakthrough was in 2011, with a process called ImageNet, where there was a contest to see if computers could see better than humans. And today computers can see better than humans. Their vision is literally better, and I didn't realize at the time how important sight was for everything.
14:54
Cars should be driven by a computer. The doctor should use an AI system to examine you and then give him or her recommendations on your care. I'd much rather have the computer look at my skin rash or my retina in my eye, because we now know from many, many tests that humans make observational mistakes, even the best, but computers, when properly trained, don't. So it was from this insight that you could do vision at
15:24
scale that you began to be able to do prediction at scale.
15:29
And all of a sudden, now we're beginning to see systems that can predict the next thing, and computers have gotten very good at predicting what will happen next. There were sort of three events in the last three years that really are the index points. The first is that, and Dr. Kissinger wrote an article after AlphaGo
15:52
called, basically, the end of history, inspired by this, Go was a game that humans had played for 2,500 years. It was thought to be incomputable.
16:04
Not only did a computer solve the game, but it beat the top humans, both in Korea and China. I know because I was physically there, computer against human, and it was great fun. But in that process, the computer invented some new moves and strategies that had not been known to humans for 2,500 years. Now, that's a big deal.
16:27
The next thing that happened was that at MIT, a set of synthetic biologists and computer scientists did a very complicated trick involving going through a hundred million different compounds and figuring out which compounds would create a reaction for antibiotic use. And they came up with, using this technique, a new drug that could not have been foreseen. It's called halicin, and it appears to be the next broad-scale
16:57
antibiotic; we haven't had one in roughly 40 years. The third thing that happened was that a group called OpenAI built what are now called universal models, where they read everything they could find on the web into something called GPT-3.
17:17
This is basically called a transformer, and these generative models can generate things. So all of a sudden, we have a computer that can speak what it knows.
17:30
And these models are interesting because you train them and you don't know what they know. And furthermore, they can't tell you. You have to ask them. So it's like a teenager.
17:41
And many people think that these universal models are going to profoundly change our understanding of language and thought, because they only get better with scale. So we've now got four or five companies, a couple of big ones, a couple of startups, that are building what are called trillion-parameter models. These trillion-parameter models cost a hundred million dollars or so to make.
18:07
That's how exciting this new area is. So you've got strategy in the form of Go. You've got medicine and drug discovery, really scientific discovery, in the form of halicin. And now you've got language models and learning models. We believe collectively that in the next 10 years this is going to come together and transform everything.
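For readers who want to see what a generative language model looks like in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers library, with the small GPT-2 model standing in for GPT-3 (which is only available as a hosted service); the prompt text is just an example.

```python
# A minimal sketch of the "generative model" idea described above: a model trained on
# web text that, given a prompt, keeps predicting the next words. GPT-3 is a hosted
# service, so the freely downloadable GPT-2 stands in here purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change society because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with whatever it absorbed during training; as noted
# in the conversation, you only find out what it "knows" by asking it.
print(result[0]["generated_text"])
```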
18:29
I'm just taking a breath to let that settle in. GPT-3 really got my attention, in part, and I want you to disabuse me of any misunderstandings, but I saw interviews generated between people who are no longer living. So I saw Marcus Aurelius interviewing, let's just call it Mark Twain, Mark Twain interviewing Abraham Lincoln. And I thought to myself, once we have enough audio and video online,
18:59
this will really simplify my guest recruitment for the podcast, because I won't actually need to reach out to the living human. It'll be great. I won't have to keep banging on the door of Oprah or anyone else. I'll just be able to generate an interview. It'll save me recording time, too.
19:14
You say that in jest, but we're busy building that, right? And so let me just be clear how the Tim Ferriss Show would work. You have an enormous amount of video and audio data, so there's enough to learn to build a Tim
19:29
Ferriss question-and-answering thing, where you would have not only your tone and the fun that you represent, but it would also represent the sum of your insight. So we would say, "I was speaking with so-and-so and he or she said this. What do you think?" Now, in this particular scenario, this is long after your own passing, and all your guests' passing, too.
19:54
There's every reason to think that we'll be able to mimic your intelligence and charm and wit, and that of your family, your parents, your grandparents, historical figures.
20:10
That will be fun.
20:12
The question is, will it really change society?
20:15
Or how will it change society? I recall being in a small, private meeting, and the content I'm going to mention is public, and the discussion centered on the legal cases that will be forthcoming related to a lot of this technology, and there was at the time something related to... Now, I should be clear, I want to take a step back in a minute and define artificial intelligence and what that is, so that it doesn't get conflated with other things
20:42
in terms of developing technology: deepfakes, copyright, defamation. There's been a lot in the news, at least in the last few years, related to deepfakes of Taylor Swift and lawsuits. And we are all going to have to contend with entirely new breeds of legal cases, I would imagine. So that's another facet of this that is going to be incredibly fascinating, terrifying, and complex, possibly. I would love for you, if you don't mind, to
21:13
just define for me, quite frankly, artificial intelligence, because as I'm sure you have seen and can imagine, there are a lot of terms that kind of get copied and pasted into startup decks whenever they're hot; that would be deep learning, machine learning, and AI. What is AI, and what is it not?
21:30
The simplest explanation for AI is it's a system that gets better through learning, that it's busy learning something.
21:37
And that's probably the easiest and current definition of it. When I say AI to my non-technical friends, they typically think of a movie that they saw, and the movie always has a robot, and the robot goes awry, and that robot is slain by a female scientist who triumphs.
22:01
And I propose a variant of that movie where the computers all conspire against humans and start killing humans. The humans notice this, the computers run away from the humans, and a female scientist figures out how to unplug the computers one by one by one. That's not what AI is. That's movie fiction, and it'll be a very long time, hopefully never, that we'll have to deal with that.
22:29
What AI really is is a system of knowledge that is implemented inside the cloud, so it's around you all the time, and it's very good at looking at large data sets and predicting things. It's very good at looking at and finding patterns that humans can't see. I mentioned earlier that the computers were very good at vision; that was the first breakthrough. But what they're now doing is looking for patterns of correlation that humans can't
22:59
see. So let's imagine that on Tuesday you do something, and on Friday someone else does something, and it turns out that you're magically linked in the universe, such that Tuesday causes Friday, but it's not a pattern that's apparent.
23:12
Computers can discover that pattern, that needle in a haystack, particularly well. When you operate this stuff at scale, you end up with systems that look human-like, because they can aggregate data, and they can think what you think; they can generate solutions. There's a technology called GANs, generative adversarial networks, where the computer can generate candidates and another network says not good enough, not good enough, not good enough, until they get it right.
23:42
And GANs have been used, for example, to build pictures of actresses and actors that are fake but look so real you're sure you know them.
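To make the adversarial setup concrete, here is a minimal toy sketch, assuming PyTorch: instead of faces, the generator learns to imitate a simple one-dimensional Gaussian, but the generate-and-reject loop is the same idea in miniature. Network sizes, learning rates, and the target distribution are arbitrary illustrative choices.

```python
# A toy GAN: one network (the generator) proposes candidates, a second network
# (the discriminator) keeps saying "not good enough" until the fakes become hard
# to tell apart from the real data. Real face-generating GANs work on images at
# far larger scale; this version only learns to imitate a 1-D Gaussian.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))        # the generator's candidates

    # Discriminator: learn to label real data 1 and generated candidates 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: improve until the discriminator can no longer tell the difference.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The generated samples' mean should drift toward the real mean of about 3.0.
print("generated sample mean:", generator(torch.randn(1000, 8)).mean().item())
```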
23:52
So one of the real concerns about AI is that it's going to be very difficult to tell the difference between information and misinformation. And I'll give you an example. Let's assume that all of your readers and listeners are obviously human and that the standard rules of humans apply to all of us. There's something called anchoring bias; there's something called recency bias. You know, people aren't completely rational computers, nor should they be; we don't want them to be, we love humans.
24:22
Let's imagine the computer gets enough of such people, and it figures out that if I say the following outrageous thing first, you'll always believe me.
24:32
And the computer discovers this, and so all of a sudden, everything it does, boom, boom, boom. I've thought a lot about the way politicians speak. If you watch carefully, they take a set of phrases and they repeat them over and over again. They're simple phrases. That's anchoring. They're trying to get the audience, their voters, to start with this fact and then judge past it. Well, computers will be incredibly good at exploiting that.
25:00
That means that the world, our social world around us, within the next decade will become impossibly confusing, because there are so many actors that will want to misinform us: businesses, politicians, our opponents, or for fun, God knows. We don't know how to manage that.
25:17
When you mentioned things coming together in some fashion in the next 10 years,
25:23
could you speak to if and how generalized or general artificial intelligence fits into that? Because I know that's a big question for a lot of people, including those who don't really understand the technology: when generalized artificial intelligence will arrive, so to speak.
25:41
I think it's important to establish some of these terms. AGI stands for artificial general intelligence, and AGI refers to computers that are human-like in their strategy
25:53
and capability. Today's algorithms are extraordinarily good at generating content, misinformation, guidance, helping you with science and so forth, but they're not self-determinative. They don't actually have the notion of "Who am I, and what do I do next?" If you ask GPT-3 who it is, GPT-3 will say, "I'm a computer."
26:17
But it won't give you a philosophical basis for its existence, its purpose, and how it's determining where it goes. That is the distinction between AI and AGI, in my view. Now, within the community, the AGI optimists think that within 10 years we will have such computers; the pessimists think that it will be far longer, if ever. So let's say 15 years from now, it may be
26:47
possible to have computers that have a sense of self-determination, that they know roughly what they're trying to achieve against a broader objective. Many people think that this will become an enormous competition with humans, because remember that those computers will be faster learners than humans. They'll have access to more data than humans. Other people believe that they'll be fundamentally flawed because they won't have the nuance. They won't have
27:17
the background; they won't have the cultural background of the countries that we all grew up in, the cultures we grow in. We don't really know. My own view is that this will occur, but that the amount of computation required to do this will be so great that there will only be a few of them. And furthermore, they'll become so important that they'll be like nuclear weapons.
27:43
And let me tell you why. Let's imagine you have a truly evil person who encounters this AGI. What's the first question they will ask it? "Tell me how to kill a million people."
27:54
Obviously a terrible thing to ask and answer, and the AGI, because it doesn't have morals, could actually answer.
28:02
It could actually say, "Well, do this, this, this, and this," and furthermore, it could articulate something that's not generally known, because it knows everything.
28:11
So I think a fair reading of these AGIs is that in the most extreme form, they're going to be so powerful that they're going to have to be protected. We don't want them broadly used; they're going to have to be used in specialized scenarios. Now, such an intelligence will be enormously valuable for drug discovery, material science, climate change, making the world more efficient, making people more educated. Imagine with an AGI, you could
28:41
say, "Tell me how to teach a million children English better," and it could figure that out, because it could do its own thinking; it could look at all the patterns and figure out how to do it. And you could actually ask it to write the program.
28:57
The beginning of this is today. There are companies that are offering tools which will help you write code. They don't know what code you're trying to write, but once you start, they can fill in a bunch of it for you. That's the beginning of this phenomenon. In the extreme case, there are scientists, and basically science fiction, around the future that discuss something called the Singularity, which is the point where the computation
29:27
is so much faster than humans'. Now, that's speculation. When you talk to people in my world, most people think the Singularity will occur 20 years from whenever you ask them that question.
29:40
So then we'd say 2041. Where do you fall, if I may ask your personal opinion, assuming, and this is a big assumption, that there are people on the techno-optimist and pessimist sides who are very technically sophisticated?
29:57
What is your current best guess or intuitive feel, however you want to answer it, on the arrival of something we would consider AGI?
30:06
The top scientists, the greatest inventors in this area, collectively believe that we need one or two more breakthroughs
30:16
to get to volition, to get to consciousness in the way that humans do it. When you wake up in the morning, you have so many choices of how to spend your time. How do you choose? How do you handle unplanned situations and all of that? Most people believe that the computers that I'm talking about will become enormously valuable partners. My physicist friend, I said to him, "What would you like the computer to do?" And he says, "It's really simple.
30:45
There is so much physics being written now that I don't have time to read it all, and I want the physics assistant to go and read everything, figure out what disagrees with what, and suggest things for me to think about while I'm sleeping." The biologists have similar questions. People who are philosophers want new insights: read everything, help me think through this.
31:12
I believe that in the next decade that will be the primary achievement in AI, and that is extraordinary, because it means, for example, that we can answer questions in physics, math, medicine, and so forth that have been unanswerable. We've not been able to understand the behavior of subatomic particles. We can do it now.
31:34
Those are enormous breakthroughs, but that's not the same thing as real human intelligence. I think it's going to take another breakthrough or two.
31:43
I know this is maybe trying to look into the crystal ball with too much intensity, but could you elaborate on, if those breakthroughs are known, what they are? Like, what are the problems that need to be solved? Are they known
31:55
problems? There are people who are working really hard on the question of goals.
32:01
And the computer systems have knowledge they cannot explain, which we can think of as intuition. So the system gets trained and it knows things, but if you ask it why it knows it, it can't tell you; that's intuition. So the question here is, how much more computational power and scenario planning do you have to do to look at all the different choices that you have every day? There are people who believe that it's perhaps
32:31
a hundred times more computing power, and computing power doubles every two to three years right now. And so that gives you a sense that within 15 years, it's possible to imagine the amount of computing power required to do this. And the reason we use computing power is, one, we don't have a better metric, but also because the computer doesn't think the way humans do. The computer looks at scenarios and eliminates. It says, "Well, what if this?
33:01
And what if this? And what if this?" And the computer itself is eliminating this choice, this choice, this choice, this choice, and finally settles on a good choice. So many people believe that the compounding of knowledge will take, as we say, a couple more orders of magnitude, which is why these computers are likely, at least initially, to be few, extremely expensive, and very, very large.
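A quick back-of-the-envelope check on that timeline, using the rough figures from the conversation (roughly 100 times more computing power, with compute doubling every two to three years); these are illustrative numbers, not a forecast.

```python
# Rough arithmetic behind the "within 15 years" estimate: how long does it take to
# reach about 100x today's computing power if compute doubles every 2 or 3 years?
import math

target_multiple = 100
doublings_needed = math.log2(target_multiple)   # about 6.6 doublings to reach 100x

for years_per_doubling in (2, 3):
    years = doublings_needed * years_per_doubling
    print(f"Doubling every {years_per_doubling} years: about {years:.0f} years to reach 100x")

# Prints roughly 13 years (2-year doublings) and 20 years (3-year doublings),
# which brackets the 15-year figure mentioned above.
```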
33:28
Does the 15-year hypothetical time
33:31
horizon factor in, and this is me getting into very slippery territory since I'm non-technical, but does that factor in the potential applications of quantum
33:42
computing? For at least 20 years, people have been focusing on this notion of a different kind of computer, called a quantum computer, and quantum computers are hard to explain, but basically think of them as doing all the computations at the same time. So instead of going kachunk, kachunk, kachunk, all the chunks go at the same time.
34:02
And the term quantum supremacy means that everything occurs at once, rather than taking the computer time to go through each of the mathematical calculations. The reason this is of such great interest is that one of the core aspects of security in the computer age is the difference between multiplication and division, and with quantum computers you could break all of the codes and all of the secrets and all of the keys and all of that, that we all use every
34:31
day.
34:33
So a lot of people think that national security groups are busy working hard to do that. The consensus in the industry is that we are eight to 10 years away from that. And the way they get there is that the current quantum computers have an error rate, because it's not perfect; when they operate, they're essentially mimicking a natural process digitally.
34:56
And Google showed last year that they could do, in roughly 10 seconds, a computation that would have taken a million years on a regular computer. So we know it's possible. The question now is, how do you actually build a real one that's useful? If you had a real one, you could, instead of simulating a physical process, compute it directly. The typical one is called annealing, where two metals basically merge, and this is crucial for high-strength steel and titanium
35:26
and things like this, and is of great business interest. But the same thing would apply to AI, if it worked, because the algorithm, instead of looking at all these different scenarios, would examine them all at the same time and pick the best one; technically, the one with the lowest energy state. The current quantum learning is not proceeding very quickly. It's a very hard problem. My assumption is that quantum learning will occur in our lifetimes, but not very soon.
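As a classical point of comparison for the "lowest energy state" idea, here is a simulated annealing sketch in Python: it wanders a toy energy landscape and gradually cools until it settles into a low-energy state. The energy function, step size, and cooling rate are illustrative choices only; a quantum annealer explores configurations very differently, this just shows what "pick the one with the lowest energy state" means for a classical search.

```python
# Classical simulated annealing: accept worse moves early (high "temperature"),
# then cool down so the search settles into a low-energy configuration.
import math
import random

random.seed(0)

def energy(x):
    # Toy landscape with several local minima; the global minimum sits a bit above x = 2.
    return (x - 2) ** 2 + 1.5 * math.sin(5 * x)

x = random.uniform(-4, 4)
temperature = 2.0
for step in range(20000):
    candidate = x + random.gauss(0, 0.1)
    delta = energy(candidate) - energy(x)
    # Always accept improvements; accept worse states with a probability that
    # shrinks as the temperature falls, which lets the search escape local minima
    # early on and settle into a low-energy state later.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.9995

print(f"settled near x = {x:.2f}, energy = {energy(x):.2f}")
```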
35:54
Thank you. And for people listening who
35:56
want to do a real deep dive on quantum learning and quantum computing, I have a conversation with Steve Jurvetson that goes very deeply into this subject matter, and it really does start to be a mind-bender when you get into the details. It is quite something.
36:16
Just a quick thanks to one of our sponsors, and we'll be right back to the show. This episode is brought to you by ShipStation. The holiday season is fast approaching, and we know that people will be buying more stuff online than ever before. All of these trends to e-commerce have been accelerated due to COVID and much more. If you're an e-commerce seller, are you ready to meet the demands of a record-breaking online shopping season? Be ready with ShipStation. ShipStation.com is the fastest, easiest, and most affordable way to manage and ship your orders. In
36:46
a few clicks, you're managing orders, printing out discounted shipping labels, and getting your products out fast. Happier holidays for you and your customers. ShipStation takes the hassle out of holiday shipping. No matter where you're selling, on Amazon, Etsy, your website via Shopify, or other platforms, ShipStation brings all of your orders into one simple interface, and ShipStation works with all of the major carriers: USPS, FedEx, UPS, even international. You can compare and choose the best shipping solution every time.
37:16
And you can access the same postage discounts that are usually reserved for large Fortune 500 companies. It's no wonder that ShipStation is the number one choice of online sellers. And right now, my listeners, that's you guys, can try ShipStation free for 60 days when you use offer code TIM. Just go to the homepage, ShipStation.com, click on the microphone at the top of the homepage, and type in TIM, T-I-M. That's it. Go to ShipStation.com, then enter offer code TIM. ShipStation.com, make ship
37:46
happen.
37:49
I want to ask you, you mentioned what the philosophers might ask for in terms of augmentation or help; you mentioned the biologists, physicists. I am going to ask you, I'm just planting a seed, what Eric Schmidt might use AI technology for in the future. But before I get there, I want to just look at a snapshot of the current day. What are
38:15
some of the coolest or most impressive things that you've seen AI figure out on its
38:20
own? I mentioned this new drug, and the new drug and drug discovery will accelerate, a combination of the mRNA achievements plus the ability now to essentially replace the way the drug lab works. My sort of stereotype is the chemists come in in the morning and say, "Let's try the following seven compounds." They try the seven,
38:45
none of them work, and at five o'clock they go home to have dinner and think and watch television, and the next morning they think of another seven. Well, the computer can do, as I mentioned, a hundred million in a day. That's a huge accelerant in what they're doing. I'm very interested in the development of humans
39:07
together with AI systems. And the example I would offer is you have a two-year-old, and the two-year-old gets a plush toy. It happens to have AI inside of it, and by agreement, as this child ages, every year they get a better toy. And of course, the toy gets smarter.
39:27
We don't know at all
39:30
what happens when a child's best friend is not a human or a dog.
39:37
We don't know what that does to the child's bonding to other children, to their parents, when, you know, the parents are frustrated and the kid is busy, and they give them a computer and say do whatever you want. But imagine if that computer is learning, talking, thinking, educating at the same time. It's a godsend, right?
40:01
But what is it teaching? What are its norms? What are its values? Will such a child end up being very stilted with real humans and really comfortable with digital ones? We honestly just don't know. We know that people get attached to inanimate objects. There are many religions where inanimate objects have what we would think of as a bit of a soul; we mentioned this in the book.
40:28
But we don't know. I'll give you another example, with the elderly. A lot of studies indicate that the elderly are very lonely, just sort of sad, and imagine if their best friend is a digital friend. What does that do? Does that extend their lives? Does it make them crankier? Does it make their loneliness more perverse? I don't think we know yet.
40:51
Yeah, or is that digital friend an emulation of a
40:55
relative? That's right. So now we take
40:58
our grandmother, as an example. We recreate her husband. And what does it do when she can chat with her husband, who has now passed away? We honestly don't know. When I go through and I look at these technologies, there are really four qualities in AI that are different. The first is it's imprecise.
41:20
It's imprecise in that it can't tell you exactly what it's doing, and it makes mistakes. Don't use it for life-critical decisions. You want a human who can make mistakes consulting a computer; you don't want the computer flying the airplane, maybe not ever, but certainly not for a long time. It's dynamic in the sense that it's changing; it can learn, it can assess the situation around it and change its behavior. This is the AI of now.
41:49
It has emergent behavior. Emergent means things that come out that we don't expect, strange things. Now, we've seen this a bit with social media. I don't think anyone expected the government interference in the elections in 2016. That was an emergent behavior, unexpected by humanity, using these tools, and those were not AI-powered. The final point is, it's capable of learning.
42:16
So you've got a system, using the child scenario: you've got a child, it's got a teddy bear. It looks like a teddy bear, but it's imprecise, dynamic, emergent, and capable of learning. What's the teddy bear learning? So the teddy bear is watching TV too, and the teddy bear says, "Look, I think this show sucks." And the kid says, "I agree with
42:35
you."
42:38
Again, we don't understand the implications, especially on formative
42:43
behaviors. There's a whole other set of arguments about national security and how governments will work and how dominance games will play out and who will be winners and losers from this technology that we don't really understand. And we talked about that. Well, let's
42:57
actually touch on that for a second, and this is something on the minds of a lot of folks. Certainly, I know a lot of investors who are trying to understand AI in the capacity of acting as an investor. I'm not saying
43:12
we go there, but you have spoken to Congress about, say, China's announced ambitions with AI. Ten to 15 years isn't that far away; it's really close, and certain things move very slowly. Certain facets of, say, government move quite slowly, regulation. What types of corners are most important to look around? And I'd love to hear you speak a little bit more about the geopolitical components.
43:42
I was fortunate to be the chairman of an AI commission for Congress. We just released our report, very proud of it, and we studied a lot about where the world is in AI, not so much on these AGI questions, but the more tactical things that I've been discussing. We concluded that the United States is slightly ahead of China in these areas, that China has a national program to focus on this. They're pouring literally billions and billions of dollar equivalents into this. They generate four
44:12
times more engineers than we do, just because of population size, and they're extremely focused on dominance of AI by 2030, which is soon. In our report, we speak a great deal about what the government needs to do to help. It starts with more research, access to more data, making sure our values are represented, so we don't end up with systems that have prejudice and so forth and violate both our laws as well as our morals. We talked about partnerships and all of this kind of stuff. The reason this is
44:42
so important is that pretty much every national security issue will be tainted or controlled by AI in the future. If you think about cyberattacks, the most obvious war scenario in the future is the following: North Korea decides to attack the United States. It begins its attack. China decides this is a bad idea and blocks the attack. America's defenses are awake, alert. America announces a counter-
45:12
attack, which stops this. The entire war
45:17
that I just described took about 10 milliseconds. I'll give you another example. In the military, there's a presumption of human control, which is very important. I was part of a team that wrote some of the AI ethics that are now used by the US military, which I obviously strongly support, and they say we want the principle of human control, human authorization.
45:43
So let me give you an example. You're on a ship, and a new kind of missile, let's say a very fast hypersonic missile, which countries are developing, is coming in, and it's coming in so fast that the ship can't see it. The humans can't see it, but the AI has figured it out. So the AI says to the commander,
46:05
"In 15 seconds, a missile is going to be showing up, and it's going to destroy you and your entire crew. I recommend that you press this button to launch the counterattack." And the human says, "14, 13, 12, 10, what do I do?" And at three seconds, that human is going to press that button.
46:28
So the compression of time, the amount of data, and the potential for error
46:35
create a whole set of problems around military doctrine.
46:41
Now, everything that we live with today, deterrence, all of the other things that all of us are familiar with, great power competition, are concepts that were invented over the last hundred years. And Dr. Kissinger actually invented much of the containment strategy in the '50s and '60s, before he was in government,
47:01
as part of a team that sort of invented all this work. So I asked him, "What does this look like now?
47:08
How would you address this now?" And his answer is, let's get an equivalent team of people and try to figure these things out. Containment, for example, doesn't work. Containment is about keeping another country from getting something, but these algorithms and this software are pervasive.
47:28
It leaks, and it leaks through ideas and through discovery, not just from criminal leaking. It's not like Los Alamos, where you could keep a secret.
47:38
So the way to maintain competitive dominance, that is, national security, is to invest in these areas, both in terms of data and algorithms, and to be excellent, but also to begin some kind of dialogue about what the limitations of automatic war are.
47:57
So another example, and I'll make one up now. Take the scenario I described of the 10-millisecond war.
48:06
Let's say that China, in this case, develops an AGI ahead of everyone else.
48:13
And this AGI is thought to be so powerful that it could defeat any of our defenses. Then the logic on the American side would be to do a pre-emptive attack to prevent that possibility, and that's destabilizing of great power competition. So you want to avoid an arms race.
48:35
An arms race in AI could look a lot like the arms race that we went through in the '50s and '60s, and people have forgotten how much of our military-industrial complex, how much money and so forth, was devoted to building more than 30,000 nuclear weapons, all of which could destroy the world many, many times over. Many, many rounds of negotiation got those numbers down to 3,000 or 4,000 such weapons, which are still plenty to destroy everything. It's an example of overreach.
49:05
So I worry that because we don't have an agreement on even what the rules are, what the landscape of limitations is, we don't have diplomats who can have the conversations, and no single national security group, no single country, is going to self-limit and say, "Oh, we're not going to do that." This is not Costa Rica, which doesn't have a military.
49:29
So the natural course of logic will be the development of these incredibly fast and potentially destabilizing weapons for which we don't have a language.
49:40
I think I'll just act as a stand-in for listeners: that sounds very terrifying. It does sound not just terrifying but challenging to address, and I may come back to that and questions of policy and geopolitics. Let me
49:58
ask you the personal question that I alluded to earlier, before I let that go, which is: what could you achieve, or what would you want to achieve or ask of AI in the future, that would allow you to do things you cannot do today?
50:14
Almost every hard problem in our society is based on either the computers can't figure it out, we don't have enough data,
50:25
or we honestly don't know how to solve it. I'd like to get some breakthroughs. So I'd like, for example, to get better climate models. In my philanthropy, I founded a group at Caltech which figured out a way to predict climate better using an AI system. It would model clouds. I didn't know this, but it turns out clouds are really complicated, and so they used an AI system to approximate how clouds would behave, which allowed
50:55
them to solve the prediction problem. It's called CliMA, C-L-I-M-A. Over and over again, I would like the AI system to educate me and entertain me and keep me curious about the dynamism of the world. Speaking personally, when I look at the news feeds, we're obsessed about politics and President Trump and so forth, to the point where you get the impression that there's nothing else going on.
51:24
But one of the great things about humanity and our world is it's incredibly dynamic. We never hear what those people are doing. I'd like to see if we could figure out a way to advance against these really hard problems. Let's get some solutions around mental health. Let's get some solutions around drug abuse. Let's get some solutions around the rise of inequality.
51:47
Let's have our computers help us with those solutions.
51:50
And I'd like to talk about something you know a lot more about than I do: how programming may be able or unable to address morals and ethics. And I'll give an example, probably a bad example and not technically accurate, but it seems like we're already at a point with, say, the development of autonomous vehicles where
52:16
questions that might have been presented in a freshman philosophy class, like the trolley problem, actually need to be programmed on some level, in the sense that cars need to make decisions if there is an accident, and they need to, say, choose between going on the sidewalk at high speed and hitting three elderly people, or swerving right and hitting one child. Let's just say, not that we would ever want to be in that type of situation,
52:46
but these things happen. How do you foresee computer scientists attempting, if this is even going to be an objective, to instill ethics and morals into AI? And does it just require a level of self-awareness that perhaps we don't have? I'm curious how you might think about
53:07
that. There are many, many computer scientists working on this. Every major computer science program I'm aware of has a computer science and ethics research project.
53:16
There are a lot of problems. One of the first problems is that computer science and AI are based on learning data, literally learning, and therefore the information that they're learning has biases. So whatever prejudice and bias and religious problems that society has around it, the computer will absorb. So the research is how to identify those biases, because remember, in many cases these systems cannot tell you what they know,
53:45
and how to mitigate them. It's pretty clear to me that there's going to end up being the system that knows everything, and then there's going to be a supervisory system that limits it. So two different systems: one will be the knowledge system, and another one will keep it within some guardrails. So if the question that was asked is not a permissible question, don't send it to the system.
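A minimal sketch of that two-system design, in Python. The knowledge system here is a stub and the blocked-topic check is a crude keyword list, both purely illustrative; a real supervisory system would be a trained classifier with policies chosen by people, not hard-coded strings.

```python
# Two-system design: a "knowledge system" that answers questions, wrapped by a
# supervisory system that refuses to forward impermissible questions at all.
BLOCKED_TOPICS = ("build a weapon", "harm a person", "kill")

def knowledge_system(question: str) -> str:
    # Stand-in for the large model that "knows everything".
    return f"[model's answer to: {question}]"

def supervised_ask(question: str) -> str:
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # The supervisory layer keeps the question from ever reaching the model.
        return "This question is not permissible."
    return knowledge_system(question)

print(supervised_ask("How do clouds affect climate models?"))
print(supervised_ask("Tell me how to harm a person."))
```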
54:10
And you're going to have all sorts of problems. My favorite self-driving car problem is: here in New York, eventually all the cars are self-driving, because it's such a crowded grid, and traffic moves perfectly. The engineers at Google and everywhere else have figured out exactly how to optimize, under a set of assumptions, the aggregate and average delivery time of a human from place to place. It's a perfect computer science solution. So now we've got a woman
54:40
who's going into labor who needs to get there faster. Is there a button in the car that says, "I'm pregnant, I'm about to give birth, and I need to get there faster"? Okay, so she presses the button and somehow it works. How do we make sure that a gentleman who's lying cannot press the same button?
55:03
And the trolley problem: all of these are of the form, how do you choose one life or the other?
55:10
The correct answer is don't kill anything, and so we're going to have to find ways to avoid being in the situation where you have to choose between three old people and one child. These are the great debates of philosophers, how do we value life and so forth, but I think we should start with the premise that the system should be designed such that it optimizes overall happiness and overall wealth, in the societal sense of wealth. As an example, we tolerate double parking.
55:39
We tolerate people speeding; we don't track every car and give them an automatic ticket every time they go above the 30-mile-an-hour speed limit by one mile, but technically, that would be easy. If you want to eliminate the vast majority of crime in our society, put cameras in every public space with face recognition. It's, by the way, illegal to do so, but if you really care about crime to the exclusion of personal freedom and the
56:09
freedom of not being surveilled, we could do it. So the technology gives you these choices, and those are not choices that computer scientists should make; those are choices for society. I use the surveillance cameras as an example to say that in Britain they're very widely accepted: when you're in Britain on a street, you're on camera. In the United States, it's partial; sometimes you are, sometimes you're not. But in Germany, with the long history of the Stasi, they are very violently opposed to such surveillance.
56:39
These are three different democracies that have made the choices in different ways. So I don't think that computer scientists and the tech industry should make these decisions, but I do think what we should tell people is: our tools give you this range of choices. If you look at leadership in China, China is a couple of generations ahead in surveillance. They can actually spot you by your gait now, the way you walk, not just your face recognition. This is not something we want to lead in.
57:06
I'm just letting that all sink in for a second.
57:09
This is one of those conversations, for me, it's great; there's so much to chew on. And I'm going to take a 90-degree turn back to the physicist and the biologist and others who will be able to consult AI in the way I think it was presented: to help assimilate new information, new developments, get up to speed. They can't possibly, even with physical constraints,
57:39
keep up or digest the amount they would like to digest. Now, many people, Dr. Kissinger certainly, over decades, and many, many others, spend a lot of their time ingesting information, and that could be in the form of reading; it could be in the form of conversations like this. I suppose there are two questions. One is, how do you think
58:05
thought workers, or humans, will spend their time once that ingestion is dramatically reduced or removed? And along with that, if we get to the point where that is possible, there's a question of how it will be conveyed to the human.
58:22
How far are we from there simply being a direct brain-computer interface, where that information is somehow seamlessly integrated into our consciousness without being spoken or imparted to us externally?
58:40
To answer the last part first: we are, in our industry, working on direct brain connections of one kind or another. They're all in startups. They're all speculative at this point.
58:52
They've shown some gains.
58:55
If you think about 200 years from now,
58:59
to pick a random number, it's probable that we'll know the complete details of the human brain, how it's wired, how it works.
59:08
So if you think far enough ahead, well past our lifetimes, it's probable that we'll know a great deal about how to manipulate the brain externally to the brain. We'll understand how it works, we'll understand the wiring; maybe there'll be some attachment and so forth. Most people think that that will occur; no one has any idea when. When you go back to the gains you want to get, the basis of what the next 20 years look like, I think it's fair to say we're going to be awash in information
59:38
and misinformation at a scale that is overwhelming, and it's already overwhelming.
59:45
So the most likely scenario is that each of us will have the equivalent of an AI communications assistant
59:54
that will watch; like, while you and I are speaking, my AI assistant is watching what's going on, and it knows my preferences, it knows what I care about, and it has a good sense of judgment.
1:00:06
And when we're finished speaking, it'll say, "By the way, over here in Arkansas, the following thing happened that you might want to check out." More importantly, this AI assistant will battle with the misinformation assistants, and it'll say, "Prove to me that you're a human before I expose you to my human."
1:00:28
And so you can imagine a scenario where the solution to the misinformation problem, and the solution to this information-space problem, is that each of us has our own AI assistant, think of it as the equivalent of a phone, although in practice it's a supercomputer that's accessed through your phone or equivalent, that sort of keeps you sane. If you go back to my earlier comments about children, we have no idea what the rules should be for that assistant.
1:00:57
So let me give you an example. We've learned, unfortunately, in the last few years that there is still horrific racial prejudice, horribly hateful people. Well, they get an assistant, too.
1:01:11
Is their assistant going to pattern their racism?
1:01:16
Or their misogyny?
1:01:18
Or their violence against children, all these sorts of hateful behaviors, because these are part of humanity too? Or will there be some regulation that says that your assistant has to be politically correct? It has to ask, are you a he, a she, or, you know... In other words, how will society resolve that? My prediction is that each government and each culture will adopt different rules about them, but they're not going to be unregulated.
1:01:45
What we've learned, and I've learned really the hard way, is that the technology that I helped work on, and that is being invented now, is no longer optional. Fifteen years ago, I used to give this speech saying, "Look, you hate the internet? Turn your phone off.
1:02:02
Have dinner with your family. Do whatever you're going to do, but get off."
1:02:08
And that's not a practical answer today.
1:02:12
The internet is no longer optional. It's essential, partly because of the pandemic, but also because of e-commerce and business and knowledge and so forth. So it's going to get regulated, and the question is how, and under what
1:02:25
terms. I believe I've read you use the analogy of the telephone, maybe I'm making that up, but in the sense that just because there are bad actors who can use the telephone for wrongdoing, for crime, et
1:02:42
cetera, it doesn't mean we eliminate the telephone. It means that we have means of regulation and enforcement and so on. I guess the telephone, by comparison, seems to be such a clean, although increasingly maybe not discrete, system. What are the most important next steps from your perspective, with respect to even thinking about regulation, the best questions to ask, or just concrete steps?
1:03:12
I'd
1:03:12
like to use self-driving cars because everyone can relate to this. So in California, there's a concept called a rolling stop where
1:03:21
you have the California roll. Yes,
1:03:25
And when you come to the stop sign, you sort of forget to fully stop, you roll through. So the policeman comes over and says, you did the rolling stop, and I say to the policeman, sir, I did not. And then he says, who did? And then the car says, I did, sir, and the policeman says, why?
1:03:42
And the car says, I don't know
1:03:45
now very frustrating for the police officer. Yeah,
1:03:48
So, so who gets the ticket? Your choices are the human, the car itself, the manufacturer of the car, or the data that the car was trained with. So that's the debate to have. To give you another example: I feel very strongly in favor of free speech for humans, but I don't feel very strongly about amplification
1:04:12
by computers. And so what we're seeing now is humans who have wacky, false, conspiratorial ideas, what have you, get picked up and amplified in these systems.
1:04:25
That drives everyone crazy, because we can't tell the difference between a genuine social movement and concern versus a single crazy person who has an amplified idea that seems plausible but is basically false. We have no way of falsifying such things. That's something that's got to get sorted out. We can't live in an information space
1:04:47
that is so full of misinformation and manipulation that we can't get through the day. When we go back in the book to talk about the Renaissance, we say this is a new epoch, because in addition to this misinformation that we've been speaking about, we also have never had a situation in our human experience where there was an intelligence that was similar to ours but not the same, that was non-human.
1:05:14
And so imagine a situation where these intelligences exist and they can be consulted. Well, who gets to consult them? What happens to their answers? So for example, if it's an oracle, do you have a rule that every answer from the oracle is published to the world, because it's presumably beneficial? But what if it's a private question, or a secret? What if it's being used in national security? Again, these are questions that we have not resolved. So what we say is, these are really hard questions. What happens when
1:05:44
the AI perceives aspects of reality that humans do not, that it sees a connection? There are physicists who talk about new universes and new worlds, and you can imagine a situation where there's something that we as humans cannot conceive of because of our limited intelligence, but the computer detects it. And then we say, oh my God. In math, there's a long story about an object
1:06:14
that lives in a two-dimensional world, and the third dimension appears. But if you're in a two-dimensional world, you never see the third dimension, so you're always confused. So what if this AI can find a dimensionality of the world that we as humans, not a single one of us, can understand, and we're now confused? We don't know how to handle that. What happens? Another example: Dr. Kissinger believes,
1:06:41
and I'll paraphrase,
1:06:44
When something is not understandable by humans, they will perceive it as an act of God.
1:06:52
Or they will resist it. And you can imagine a situation where you end up with nativists who decide that this world that I'm describing is so impossibly unpleasant that they turn it all off and go to the equivalent of the woods of Pennsylvania, and they say, leave me alone. And you have other people who learn to drive it and manipulate it, for good and evil.
1:07:21
You'll have a choice. And then one final scenario to think about.
1:07:26
Goes like this. Let's say you're older, kids are grown, and it's time for you to take a break from the world. So you put on headphones and VR glasses, and in those VR glasses and headphones you have a life of you as a much younger person, a much more beautiful, handsome, wealthier, stronger, whatever.
1:07:49
With the friends that you remember of the time recreated, even though they may have passed away. That might be a more fun life for you every day than the life that you have. In this scenario, what happens when we lose people? I call that crossing to the other side. When they wake up in the morning, they just want to be over there. That will happen too.
1:08:11
It strikes me that
1:08:14
science fiction can be prescient in some instances, and it doesn't make it future fiction, it just makes it current-day fiction. But there is, I do want to say, predictive power. I think of really good science fiction, of course subjective, but Snow Crash, the description of the metaverse, or William Gibson's work. I'm just curious: do you, or have you, read science fiction much yourself?
1:08:40
Neuromancer is particularly good. And if you look at
1:08:44
Neal Stephenson's book Seveneves, it's an extraordinary composition of the importance of humanity surviving the destruction of the Earth. In Seveneves, toward the end,
1:08:56
in their science fiction, they have assistants
1:09:01
that have a funny name, that serve this function that I'm describing.
1:09:06
So, as usual, science fiction anticipates many of the things in technology. But if you think about it,
1:09:14
we believe today, from the Reformation,
1:09:18
That we have the sole power of understanding reality.
1:09:23
But at some point, that's not going to be true.
1:09:28
When that happens, what will our self-conception look like?
1:09:33
How will we conceive of ourselves? How will we organize ourselves?
1:09:38
How will we value ourselves? Let me ask you some follow-up questions on this word, reality, and perception, because this conversation makes me think of, and maybe I'm getting the full name wrong, so I'll correct it in the show notes if I do, but I believe
1:09:57
his name is Donald Hoffman, who is a cognitive scientist, also a computer scientist, at UC Irvine, and he has a book that's called, I believe, The Case Against Reality, and also a TED Talk. And the general exploration focuses on how easy it is to prove and demonstrate that we are optimized for very few things, reproduction principal among them, and that what we perceive is not some
1:10:27
objective reality, and that, if you are a mantis shrimp, for instance, you would see things very differently. They have these incredible optic systems and so on and so forth. What does the word reality mean to you? And how do you think, personally, that might change once we are deeper into the era of well-developed AI? Fifty years
1:10:49
ago, reality was television and normal life.
1:10:54
Today, reality is this online world that is constantly demanding attention, constantly full of stressors of one kind or another, constantly demanding your engagement, because that's the way it's built. The engagement is around monetization, spreading information and misinformation, and so forth. Everyone I know is being driven crazy by the explosion of that. I think that in the next decade the explosion will continue,
1:11:24
And the solutions will need to be developed because more and more the digital world becomes the real world.
1:11:34
An example that I use: you can see it in the movie Ready Player One, where people cross over and spend most of their time in a virtual world. That is true for many, many teenagers, and there's every reason to believe that in the next 10 or 20 years those worlds will become extraordinarily sophisticated. So I'll give you a simple example. At Google, because there was so much going on, we developed a pattern where people would be in meetings but would also be doing their email at the same time.
1:12:05
And we became very good at doing two things at once, and this was a cultural norm, but I had to put in a rule that when you had outsiders in the room, you couldn't do this, because it was seen as so incredibly rude, which it is. But it was a cultural norm that we adopted and invented, and it worked well.
1:12:27
So you can see the addiction cycle, and our world is largely now locked into addiction cycles, whether it's drug-related or technology-related or attention-related. I mean, again, humans are like this. I am very, very concerned that we will lose perspective on what reality really is. Humans are not built for the kind of stressors that occur every day; we just aren't organized for it. We are organized around the campfire
1:12:57
and the lion and that sort of thing, which didn't occur at warp speed. And so I'm assuming that this pressure on human development, the sort of craziness, this constant anxiety, will lead to more neuroses, more paranoia, not less.
1:13:15
And that people will then have to figure out a way, just in the same way that parents are trying to keep their kids offline some of the time, because kids are online from the moment they wake up to the moment they go to sleep. Kids, we want them to go play ball for an hour: I'm taking your phone away. We're going to have to do the same thing for
1:13:32
adults. What rules or
1:13:36
Constraints, do you have for yourself around social media or other types of digital stimuli?
1:13:45
One of my friends said that
1:13:46
he stopped watching
1:13:47
television because he reads all his news online, and I thought, okay, well, that's a reasonable time trade-off. Of course, you're changing one stressor for another, but that says: I only have time for one. I've chosen the online world.
1:14:03
And so I think the first question is there's limited time and then the second thing is, you've got to have some rules. So for me, I'm online all the time, but I try to turn it off during dinner.
1:14:15
I can usually get it done for about an hour. So the question is, when you go without it for a while... To give another example: in 2012, a group of us went to North Korea. We went from Beijing, and this was at a time when it was legal to do so as an American, and we left our phones in the Beijing airport with a trusted person, and we sat in the lounge as the plane was about to take us to Pyongyang
1:14:43
without our phones, and it's the strangest feeling. By the time we got there and got settled in the hotel, it's a group of maybe 10, we started talking to each other, which we would never have done. And within three days we were best friends. The moment we got back to Beijing and got our phones, we were on them.
1:15:04
We lost all context of what we were doing and we were back in the soup.
1:15:09
So, in the same sense that people go on spa vacations, there's going to have to be some equivalent of a spa vacation from your
1:15:19
devices.
1:15:21
Yeah, I'm wondering what type of agency humans will be able to harness to position themselves competitively. The world of AI brings up so many questions, such as, with the AI assistant, will there be tiered assistants, and will the well-off be able to afford better assistants? I mean, certainly the better-off have different types of access now that help them to separate signal from noise better than the
1:15:51
rest already, so that will probably continue to be the case. Do you think, and there's no right answer here, by the way, do you think there's anything uniquely human that, ultimately, we will be able to find value in, in a world of AGI? For
1:16:08
instance, a lot of people are speculating
1:16:12
that the eventual future is a much richer world, where people are largely idle, and the units of production are so efficient, this is food production, buildings, and so forth, that everyone can live like a millionaire, which means they don't have to work. And a lot of people would prefer not to work; many of the jobs they find not that interesting, and they have hobbies that are more interesting to them, and so forth.
1:16:38
It's possible that will be true. It's also possible that that world is dystopian, because humans need meaning. So I think the answer to your question has a lot to do with whether the systems produce more meaning for humans or less meaning. If the computer replaces me, that's less meaning. If the computer augments me, it's more meaning. This is true at every level of society. So
1:17:08
there will be a small number of countries, the US will be one, China will be another, there will be others, which will be the leaders in these technologies. And because of the way network platforms work, those leaders will get far ahead of the other countries. So there is going to be a division between the AI-enabled powerhouse countries and the following countries, who are using it but not inventing it. That divide will lead to
1:17:38
enormous societal and economic changes, which we don't fully understand. Put another way: if you don't have a leading university in your country that is doing this kind of stuff, you're going to be left out. So at the human level, some people will be comfortable in this new, very curiosity-driven, very interesting age. But an awful lot of people will feel displaced.
1:18:03
Will they be less motivated or more motivated? Another example: we should be able, using these tools, to build learning systems that teach each and every person in the most efficient way possible. We know people learn differently, but the old rear-ends-in-seats, 30-people-in-front-of-a-chalkboard model is an outmoded concept. So there will be a completely different way for kids to learn and grow up, which has got to be good, because it'll maximize their potential.
1:18:32
So you've got a person who has maximized their potential; will the economic system give them the opportunity to be maximally potentialized? To give another example, I use science examples, but let's imagine that you really want to be a painter or a musician. Well, in the future, you'll say to the AI system: I'm imagining a song about a woman and a riverboat in New Orleans, and it's a sad song, but it has willows in the background,
1:18:57
or something like that. And the computer will generate a song for you, which won't be as good as what you can do, but you'll listen to it and you'll say, oh, and that then stimulates your idea. Or: I want to invent a form of cubism that's different from Picasso. So the computer will then go through a series of scenarios of cubism that doesn't look like Picasso, and I'll say, I can do better, and then I'll be stimulated. So the people who can engage in that, I used musicians and painters and
1:19:27
thinkers and writers, are going to be the economic winners. But what about everyone else?
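As an aside, the creative back-and-forth described above, where the system proposes a song or a new style of cubism, the human reacts, and the system tries again, is essentially a generate-and-critique loop. The Python below is only a hypothetical outline of that loop; generate_variation is a placeholder for a real generative model, and no real API is being shown.

```python
# Hypothetical outline of the iterative human-AI creative loop described above.
# generate_variation() is a placeholder for a real generative model (music,
# image, text); nothing here is a real API.
import random


def generate_variation(brief, avoid):
    # A real system would condition a generative model on the brief while
    # steering away from the styles listed in `avoid`.
    return f"draft of '{brief}', avoiding {avoid} (#{random.randint(1, 999)})"


def creative_session(brief, rounds=3):
    avoid = ["Picasso"]  # e.g. "a form of cubism that's different from Picasso"
    for i in range(1, rounds + 1):
        proposal = generate_variation(brief, avoid)
        print(f"round {i}: {proposal}")
        # The human reacts ("I can do better") and that reaction becomes the
        # steering signal for the next proposal.
        avoid.append(f"direction rejected in round {i}")


creative_session("a sad song about a woman and a riverboat in New Orleans")
```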
1:19:32
Yeah, it's a really great question. I also immediately start thinking, it just shows you how boring I am, about questions of copyright and intellectual property. You know, will you have some type of blockchain-identified
1:19:47
and verified assistant that is kind of one-to-one correlated with your identity and social security number, and therefore you can copyright and own anything the AI helps you to generate, or generates itself for that matter? It's going to raise all sorts of
1:20:05
questions. I think we should build that company immediately, you and I. Let's found it right
1:20:09
now.
1:20:11
The use of blockchain to do authentication is going to be central in this,
1:20:17
because otherwise, how do you know the source? Who invented this? How did it actually happen?
1:20:22
Yeah. Well, we've talked about it.
1:20:26
We're going to need source authentication very, very much in this.
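To make "source authentication" slightly more concrete, here is a minimal sketch of content provenance: hash the work and record who registered it first in an append-only registry, so anyone can later check where a piece of content came from. This uses only the Python standard library as an illustration; the blockchain approach being gestured at here would anchor these records in a shared, tamper-evident ledger and typically add digital signatures, neither of which is shown.

```python
# Minimal sketch of content provenance / source authentication, assuming a
# trusted append-only registry. A real system might use digital signatures
# and a blockchain-style ledger instead of this in-memory dict.
import hashlib
import time


class ProvenanceRegistry:
    def __init__(self):
        self._records = {}  # content hash -> (author, unix timestamp)

    @staticmethod
    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register(self, author: str, content: bytes) -> str:
        digest = self.fingerprint(content)
        # Append-only: the first registration wins; later claims are ignored.
        if digest not in self._records:
            self._records[digest] = (author, time.time())
        return digest

    def who_made(self, content: bytes):
        return self._records.get(self.fingerprint(content))


registry = ProvenanceRegistry()
work = b"lyrics my AI assistant helped me generate"
registry.register("tim", work)
print(registry.who_made(work))          # ('tim', <timestamp>): authenticated source
print(registry.who_made(b"a forgery"))  # None: unknown provenance
```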
1:20:30
I was looking back at my notes from our last conversation. We covered a lot of ground. I recommend everyone also listen to the last conversation, because we went so deeply into your background, your history, the trajectory, different mentors. It was a great conversation, I really enjoyed it, and we spoke a lot about
1:20:46
Bill Joy, and in the notes at least, if I'm remembering correctly, once he became a venture capitalist, he would read research papers, figure out who the best authors or participants were, and then call them and ask: what's the most interesting thing in your field? And I would love to hear any examples of startups, bigger companies, academics, particular teams at universities, anyone or any groups that stand out to you as doing very, very interesting
1:21:17
things in the sphere of AI.
1:21:20
There are two well-known examples in AGI. One is called DeepMind, which is in the UK; it's owned by Google, and I've been heavily involved with them. And another one is OpenAI, which is the inventor of GPT-3 and has a big partnership with Microsoft. Those two are probably the largest focuses in these new areas, so-called reinforcement learning, so-called generative learning.
1:21:46
And there are a couple of other labs, and there's a series of university projects. I've been funding AI applied science in the leading universities, and the AI science goes something like this: the physicists know how something works, but our computers can't calculate it. It's too complicated a calculation, but they can make an approximation. So often in science, an AI system is used to approximate some
1:22:15
thing well enough that we can understand how the system works. The most obvious area where this will play out will be in biology. Another friend of mine said that math is to physics what AI will be to biology. You needed math to understand how physics worked; you'll need AI to understand how biology works, because it's so incredibly complicated. We still don't understand how to model a cell. We still don't understand how the brain is organized. There are so many basic things that you
1:22:45
would think as humans we would want to know, that have not been calculable for us. And that's where all the great discoveries will be.
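The pattern described here, an AI approximating a calculation that is too expensive to run exactly, is usually called a surrogate (or emulator) model: run the expensive simulation a limited number of times, fit a learned model to those runs, then query the cheap model everywhere else. Below is a minimal sketch under that assumption, with expensive_simulation standing in for a real physics or biology code and scikit-learn's MLPRegressor as the learner; it is an illustration of the idea, not the method of any particular lab.

```python
# Minimal surrogate-model sketch: learn a cheap approximation to an
# "expensive" calculation from a limited number of true evaluations.
# expensive_simulation() is a stand-in for a real physics/biology code.
import numpy as np
from sklearn.neural_network import MLPRegressor


def expensive_simulation(x):
    # Pretend each call takes hours on a supercomputer.
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)


rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))   # the few runs we can afford
y_train = expensive_simulation(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# The cheap surrogate can now be queried densely to explore the system.
X_new = rng.uniform(-1, 1, size=(5, 2))
print("surrogate:", np.round(surrogate.predict(X_new), 3))
print("truth:    ", np.round(expensive_simulation(X_new), 3))
```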
1:22:52
You know, I have to share something, just because it's on this topic in a sense. I have a set of papers from the late Richard Feynman on which he drew the Krebs cycle, and I just found that crossover, so to speak, so fascinating. And we don't have to go deeply into Feynman.
1:23:15
He's certainly one of the people I've paid a lot of attention to. Biology is incredibly complex, and I was thinking of the examples you've given and thinking of, say, protein folding, or trying to identify receptors and the shapes of receptors, or even to structure and modify the shapes of receptors. The process, as you've already said, is so incredibly labor-intensive, and we've tried all sorts of things to try to pick up the slack, using idle computers in a distributed fashion, but once
1:23:45
you have AI as a player on the field, I mean, that could change things so fundamentally, as you mentioned with that one example, I think halicin was the example you mentioned. It's a big, big deal. I wonder how AI may augment natural prospecting, since oftentimes the molecules we identify in nature are just so beyond the wildest imaginations of anyone who would start from scratch with, say, a ground-up synthetic approach. It raises so many
1:24:15
interesting questions. It's worth noting that there was a competition between two groups, one a group called Rosetta at the University of Washington, and another one at DeepMind, to develop protein-folding algorithms, and this year both have reported their results in the open source, and they published how the most common proteins that are part of biology fold. The reason this is important is that the way they fold determines the way they interact
1:24:45
with other cells, and it is the basis, again, of drug discovery, medicine, how our bodies work, genetic expression, and so forth. These proteins are super important. These are the kinds of discoveries that, if they were done by humans, would merit Nobel Prizes, and this is happening right now.
1:25:06
The way they do them is the same as I described, which is they generate many, many different candidates, and they evaluate them using AI, and then they choose the one that has the best fit, the most optimal outcome. And then they release it, and then that technology will be used for the next ones, and so forth.
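The workflow described here, generate many candidates, score each with a learned evaluator, keep the best, is a generic generate-and-rank loop. The sketch below only illustrates that loop with made-up generation and scoring functions; it is not how DeepMind's AlphaFold or Rosetta work internally.

```python
# Generic generate-and-rank loop of the kind described above: propose many
# candidates, score them with a (here, fake) learned evaluator, keep the best.
# Purely illustrative; not AlphaFold's or Rosetta's actual method.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


def generate_candidate(length=20):
    # Stand-in for a structure/sequence proposal step.
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))


def learned_score(candidate):
    # Stand-in for an AI model estimating how well a candidate "fits".
    return (sum(ord(c) for c in candidate) % 100) / 100.0


def best_of(n_candidates=1000):
    scored = ((c, learned_score(c)) for c in (generate_candidate() for _ in range(n_candidates)))
    return max(scored, key=lambda pair: pair[1])


winner, score = best_of()
print(f"best candidate: {winner} (score {score:.2f})")
```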
1:25:25
What do you hope the impact of this book will be? It's so all-encompassing in some respects, this topic. What is the hope
1:25:36
With this book in terms of impact or what people will do or how they will change their assumptions or beliefs after being exposed
1:25:45
to it.
1:25:46
Twenty years ago, when we started the internet as you currently know it, the social media world, many of the other tools, no one debated what the impact on society would be. We were way too busy building these systems, to great success, without understanding the impact.
1:26:07
Artificial intelligence is much more powerful than any of the technologies up till now because, as I mentioned, it's imprecise, it's emergent, it's learning, and it's insightful. We need to understand how we're going to deal with these things ahead of their development. Over and over again, technologists build these technologies without understanding how they'll be used and misused.
1:26:32
The goal of the book is to lay out the fundamental questions that society has to decide around these emergent technologies, which will happen, and they will happen over the next five to ten years. If our book turns out to be the index case, where, after reading this book and after its publication, people say, holy crap, I've got to get ahead of this, I've got to figure out a philosophy
1:27:03
around this, I have to figure out a way where humans can coexist with these new systems, one that doesn't drive humans crazy and makes the world better, not worse. That's a great outcome from our book,
1:27:15
The Age of AI. And what a powerhouse of
1:27:20
a trio the co-authors are. It's tremendously important, and I'm thrilled that we've had a chance to deep-dive into so many facets of this subject, which has been of great interest to me but has come along with great insecurity, because I come at it from a lay perspective. Before we go, I must ask: how has it been to start your own
1:27:43
podcast.
1:27:45
I enjoyed it. It turns out to be harder than I thought, because I actually had to prepare and I had to get context and so forth, but it has had enough of an impact that I will continue.
1:27:55
Reimagine with Eric Schmidt. You're an excellent conversationalist. I'm continually impressed; I don't need to do much. That's the key to good interviewing: liking your subject.
1:28:14
Of what you and I do.
1:28:17
You're a fan of Richard Feynman. When will there be a computer as smart as Richard Feynman?
1:28:22
And the answer is: a very long time from now. And so we always like to focus on the lawyers who will lose their jobs and the politicians who will lose their jobs, but that's not how it actually works. What really happens is, AI is going to be used to eliminate repetitive jobs, jobs which are boring and so forth. I mentioned vision; most of the military's activities are watching things.
1:28:47
I'd much rather have the computer watching, and then, when there's an exception, say, hey, something happened, and alert a human to take a look at it. That's a better use of both the human and the computer. And so the good news is that what you do and what I do is not going to be replaced by computers soon.
1:29:06
I'll take it. I will absolutely take it. And I encourage people to read this book, The Age of AI. Pick it up. You can find Eric on Twitter
1:29:16
at @ericschmidt, and then certainly at ericschmidt.com. We will link to everything, including the book and all resources, companies, and technologies mentioned, in the show notes at tim.blog/podcast. Eric, thank you so much. This has been extremely, extremely enjoyable and educational for me, so I appreciate you taking the time. Thank you, Tim.
1:29:39
Hey guys, this is Tim again. Just one more thing before you take off, and that is Five-Bullet Friday. Would you enjoy getting a short email from me every Friday that provides a little fun before the weekend? Between one and a half and two million people subscribe to my free newsletter, my super-short newsletter called Five-Bullet Friday. Easy to sign up, easy to cancel. It is basically a half page that I send out every Friday to share the coolest things I've found or discovered, or have started exploring over that week. Kind of
1:30:08
like my diary of cool things. It often includes articles I'm reading, books I'm reading, albums perhaps, gadgets, gizmos, all sorts of tech tricks and so on that get sent to me by my friends, including a lot of podcast guests, and these strange, esoteric things end up in my field, and then I test them, and then I share them with you. So if that sounds fun, again, it's very short, a little tiny bite of goodness before you head off for the weekend, something to think about. If you'd like to try it out,
1:30:38
just go to tim.blog/friday. Type that into your browser, tim.blog/friday, drop in your email, and you'll get the very next one. Thanks for listening.