Lex Fridman Podcast
#383 Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp

Mark Zuckerberg, Lex Fridman
Jun 8, 2023
0:00
The following is a conversation with Mark Zuckerberg, his second time on this podcast. He's the CEO of Meta, which owns Facebook, Instagram, and WhatsApp, all services used by billions of people to connect with each other. We talked about his vision for the future of Meta and the future of AI in our human world.
0:22
And now, a quick few-second mention of each sponsor. Check them out in the description; it's the best way to support this podcast. We've got Numerai for the world's hardest data science tournament, Shopify for e-commerce, and BetterHelp for mental health. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always
0:42
hiring. Go to lexfridman.com/hiring. And now on to the full ad reads. As always, no ads in the middle; I find those annoying. But these here I do
0:52
try to make interesting, though you may skip them if you must, my friends. But please do check out the sponsors; they help this podcast out. I enjoy their stuff; maybe you will too.
1:04
The show is brought to you by Numerai, a hedge fund that uses AI and machine learning to make investment decisions. I'm a huge fan of real-world data sets and real-world machine learning competitions to figure out what works. This is not ImageNet. This is not an artificial toy data set for the development of toy systems that illustrate toy concepts. Those are the early, early, early
1:33
stages of research. But when you really want to see what works, you want benchmarks that have stakes, the highest stakes, especially ones that have money involved. So I'm a huge fan, money or not, of data sets that represent the real world and demonstrate that a system can operate in the real world at the highest of stakes. That's why I was really interested in autonomous vehicles, where the stakes are life and death; it's safety-critical systems. It's incredibly exciting to work on systems that
2:03
are truly real-world, with real-world data sets. Anyway, if that kind of thing interests you, if you're a machine learning engineer, head over to numer.ai/lex to sign up for the tournament and hone your machine learning skills. That's n-u-m-e-r dot a-i slash lex for a chance to play against me and win a share of the tournament prize pool.
2:28
The show is also brought to you by Shopify, a platform designed for anyone to sell anywhere, with a great-looking online store that brings your ideas to life, and tools to manage the day-to-day operations. Operations is such a badass word; you feel like you're running things. Anyway, a few folks asked me about merch. I'm a huge fan of buying merch for the podcasts, shows, and bands I love, and so I love the camaraderie of merch. I think Shopify is a
2:57
great place to sell merch. I'm definitely going to put out some merch; I'm really sorry it's been taking forever. I've been working with this incredible artist. I just love art, I love artistic representation of the funny, the profound, on a t-shirt that allows you to celebrate, to share something super cool. I love it. To me, there's nothing promotional about it, all that kind of stuff; it's just sharing your happiness. Anyway, so I'll definitely use Shopify to create a merch store so that people can share
3:27
a bit of their happiness with others. If you have stuff to sell, or you have merch to sell, or you want to share some of your happiness with others, sign up for a one-dollar-per-month trial period at shopify.com/lex, that's all lowercase. Go to shopify.com/lex to take your business to the next level.
3:48
This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. They figure out what you need and match you with a licensed professional therapist in under 48 hours. I do a podcast, so obviously I'm a big fan of talk therapy. In fact, even when I just listen to podcasts, that's a kind of talk therapy, because I'm having a conversation with the people I'm listening to in my mind. Whenever it's an interview show, or just two folks talking, I'm always a third person in the room, kind of
4:17
almost participating in the conversation, and there's something therapeutic about that. So, listening to two other people tell their life stories, you're able to project your trauma, your struggles, your hopes, your dreams, your triumphs, all that kind of stuff, onto their life, and kind of dance with that. Of course, to do that rigorously, you really just put it all out there in a raw and honest way. I think that's what therapy is about. There are a lot of things you can do for your mental health; therapy is one of
4:47
the obvious things you should have in the toolkit of lifestyle flourishing. Anyway, BetterHelp just makes the whole thing super easy: super easy to sign up, super easy to find a licensed therapist, all of that. It's obviously discreet, it's easy, it's affordable, it's available anywhere. Check them out at betterhelp.com/lex and save on your first month. That's betterhelp.com/lex.
5:16
This is the Lex Fridman Podcast. And now, dear friends, here's Mark Zuckerberg.
5:38
So, you competed in your first jiu-jitsu tournament, and to me, as a fellow jiu-jitsu practitioner and competitor, I think that's really inspiring, given all the things you have going on. So I've got to ask: what was that experience like? Oh, it's fun. No, yeah, I mean, I'm a pretty competitive person. Doing sports that basically require your full attention, I think, is really important to my mental health and the way I stay focused in doing everything I'm doing. So I decided to
6:08
go into martial arts, and it's awesome. I got, like, a ton of my friends into it; we all train together. We have, like, a mini academy in my garage, and one of my friends was like, hey, we should go do a tournament. I was like, okay, yeah, let's do it. I'm not going to shy away from a challenge like that. So yeah, it was awesome. It was just a lot of fun. You weren't scared? There was no fear? I don't know, I was pretty sure that I'd do okay. I like the confidence. Well, so for people
6:38
who don't know, jiu-jitsu is a martial art where you're trying to break your opponent's limbs or choke them to sleep, and do so with grace and elegance and efficiency and all that kind of stuff. It's a kind of art form, I think, that you can do for your whole life, and it's basically a game, a sport of human chess, you can think of. There's a lot of strategy, a lot of interesting human dynamics of using leverage and all that kind of stuff, and it's kind of incredible
7:08
what you can do. You can do things like a small opponent defeating a much larger opponent, and you get to understand the way the mechanics of the human body work because of that. But you certainly can't be distracted. No, yeah, it's 100% focus. But yeah, to compete, I, you know, I needed to get around the fact that I didn't want it to be like this big thing. So I basically just rolled up with a hat and sunglasses, and I was wearing a COVID mask, and I registered under my first and middle name, so Mark Elliot.
7:38
And it wasn't until I actually pulled all that stuff off, right before I got on the mat, that I think people knew it was me. So it was pretty low-key. But you're still a public figure. Yeah, I mean, I didn't want to lose, right? The thing you're partially afraid of is not just the losing, but being almost, like, embarrassed. It's so raw, the sport, in that it's just you and another human being. There's a primal aspect there. Oh yeah. It's great. For a lot of people it can be terrifying, especially the first time you're competing, but it wasn't for you; I see the look of excitement on your face.
8:08
I don't know. I just think part of learning is failing, right? So, I mean, the main thing is, for people who train jiu-jitsu, it's like, you need to not have pride, because, I mean, all the stuff you were talking about before, about, you know, getting choked or getting a joint lock: you only get into a bad situation if you're not willing to tap once you've already lost, right? But obviously, when you're getting started with something, you're not going to be an expert at it immediately, so you just need to be
8:38
willing to go with that. But I don't know, I mean, maybe I've just been embarrassed enough times in my life. Yeah. I do think there's a thing where, as people grow up, maybe they don't want to be embarrassed or anything; they've built their adult identity, and they kind of have a sense of who they
8:55
are and what they want to project. And I don't know, I think maybe, to some degree,
9:01
your ability to keep doing interesting things is your willingness to be embarrassed again, and go back to step one and start as a beginner, and get your ass kicked, and, you know, look stupid doing things. And I think so many of the things that we're doing, whether it's this, I mean, this is just kind of a physical part of my life, but running the company, it's like we just take on new adventures, and
9:31
all the big things that we're doing, I think of as 10-plus-year missions that we're on, where, you know, often early on, people doubt that we're going to be able to do it, and the initial work seems kind of silly. And our whole ethos is, we don't want to wait until something is perfect to put it out there; we want to get it out quickly and get feedback on it. And so, I don't know, there's probably just something about how I approach things in there, but I just kind of think that the moment you decide that you're going to be too embarrassed to try something new, then you're not going to learn anything anymore. But like I mentioned,
10:01
that fear, that anxiety, could creep up every once in a while. Do you feel that in especially stressful moments, sort of outside of jiu-jitsu, just at work: stressful moments, big decision days, big-tension moments? How do you deal with that fear? How do you deal with that anxiety? The thing that stresses me out the most is always the people challenges. You know, I kind of think that strategy questions, you know, I tend to have enough
10:31
conviction around the values of what we're trying to do, and what I think matters, and what I want our company to stand for, that those don't really keep me up at night that much. I mean, it's not that I get everything right, of course. I don't, right? I make a lot of mistakes. But I at least have a pretty strong sense of where I want us to go on that. The thing, in running a company for almost 20 years now, one of the things that's
11:01
been pretty clear is, when you have a team that's cohesive, you can get almost anything done. You can run through super-hard challenges, you can make hard decisions and push really hard to do the best work, and kind of optimize something super well. But when there's that tension, I mean, that's when things get really tough. And, you know, when I talk to other friends who run other companies, and things like that,
11:30
I think one of the things that I actually spend a disproportionate amount of time on in running this company is just fostering a pretty tight core group of people who are running the company with me. And that, to me, is kind of the thing that both makes it fun, right, having, you know, friends and people you've worked with for a while, and new people and new perspectives, but a pretty tight group who you can go work on some of these crazy things with. But to me, that's also the most stressful thing:
12:01
when there's tension, you know, that weighs on me. I think, you know, it's maybe not surprising; I mean, we're a very people-focused company, and the people part of it is what weighs on me the most, to make sure that we get it right. But yeah, that, I'd say, across everything that we do, is probably the big thing. So when there's tension in that inner circle of close folks, when you
12:30
use those folks to help you make difficult decisions about Facebook, WhatsApp, Instagram, the future of the company, and the metaverse, and AI, how do you build that close-knit group of folks to make those difficult decisions? Do you have to have people with critical voices, very different perspectives, some focusing on the past versus the future, all that kind of stuff? Yeah, I mean, I think, for one thing, it's just
13:01
spending a lot of time with whatever the group is that you want to be that core group, grappling with all of the biggest challenges. And that requires a fair amount of openness. You know, a lot of how I run the company is, every Monday morning we get, it's about the top 30 people, together, and this is a group that's just worked together for a long period of time. And people rotate in: new people join, people leave the company, people go to other roles in the company. So
13:30
it's not the same group over time, but we spend, you know, a lot of time, a couple of hours, and a lot of the time it can be somewhat unstructured. I'll come with maybe a few topics that are top of mind for me, but I'll ask other people to bring things, and people raise questions, whether it's, okay, there's an issue happening in some country with some policy issue, there's a new technology that's developing here, we're having an issue with this partner, there's a design
14:00
trade-off between two things that end up being values that we care about deeply, and we need to kind of decide where we want to be on that. And I just think, over time, by working through a lot of issues with people and doing it openly, people develop an intuition for each other, and a bond, and camaraderie. And to me, developing that is a lot of the fun part of running a company, or doing anything, right? It's like having people who are along on the
14:30
journey that you feel like you're doing it with. Nothing is ever just one person doing it. Are there people that disagree? Oh yeah, I mean, in that group, it's a fairly combative group. Okay, so combat is part of it. So this is making decisions on design, engineering, policy? Everything, everything, everything. Yeah. I have to ask, just back to jiu-jitsu for a little bit: what's your favorite submission, now that you've been doing it? How do you like to submit your opponent?
15:01
Hmm, well, first of all, I do prefer no-gi jiu-jitsu. So the gi is this outfit you wear that maybe mimics clothing, so you can choke, it looks like a kimono. Oh, it's like the traditional martial arts gi. Kind of pajamas that you can choke people with. Yes. Well, definitely the lapels. Yeah, yeah. So I like jiu-jitsu. I also really like
15:30
MMA, and so I think no-gi more closely approximates MMA, and I think my style is maybe a little closer to an MMA style. So, like, a lot of jiu-jitsu players are fine being on their back, right? Obviously, having a good guard is a critical part of jiu-jitsu, but in MMA you don't want to be on your back, right? Because even if you have control, you're just taking punches while you're on your back. So that's no good. Do you like being on top? My style is, I'm
16:00
probably more pressure, and, yeah, I'd probably rather be the top player. But I'm also smaller, right? I'm not like a heavyweight guy, right? So from that perspective, you know, especially for a competition, I'll compete with people around my size, but a lot of my friends are bigger than me. So back takes are probably pretty important, right? Because that's where you have the most leverage advantage, right? Where, you know, there are
16:30
arms, your arms are very weak behind you, right? So being able to get to the back and take that is pretty important. But I don't know, I feel like the right strategy is to not be too committed to any single submission. And, as I said, I don't like hurting people, so I always think that chokes are a somewhat more humane way to go than joint locks. Yeah, and it's more about control; it's less dynamic. So you're basically like a Khabib Nurmagomedov type of fighter. Let's go. Yeah, back take to a rear-naked choke, I think.
17:00
It's like the clean, clean way to go. Straightforward answer there. What advice would you give to people looking to start learning jiu-jitsu, given how busy you are, given where you are in life? That you're able to do this, you're able to train, you're able to compete, and get to learn something from this interesting art. I think you have to be willing to
17:25
just get beaten up a lot. Yeah, I mean, over time, I think there's a flow to all these things. And, you know,
17:36
one of my experiences that I think kind of transcends running a company and the different activities that I like doing is, I really believe that if you're going to accomplish anything, a lot of it is just being willing to push through, right, and having the grit and determination to push through difficult situations. And for a lot of people, that ends up being sort of a difference-maker between the people who
18:06
kind of get the most done and those who don't. I mean, there are all these questions about, you know, how many days people want to work and things like that. I think almost all the people who start successful companies, or things like that, are just working extremely hard. But I think one of the things that you learn, both by doing this over time, or, you know, very acutely with things like jiu-jitsu or surfing, is, you can't push through everything, and
18:33
I think that's
18:35
You learn this stuff very acutely when doing sports compared to running a company, because in running a company the cycle times are so long, right? It's like, you start a project, and then it's months later, or, you know, if we're building hardware, it could be years later, before you're actually getting feedback and able to make the next set of decisions for the next version of the thing you're doing. Whereas one of the things that I just think is mentally so nice about these very high-turnaround
19:04
things like that is that you get feedback very quickly, right? It's like, okay, if I don't duck or something correctly, I get punched in the face, right? Well, not in jiu-jitsu, you don't get punched in jiu-jitsu, but in MMA. There are all these analogies between all these things that I think actually hold, that are important life lessons, right? It's like, okay, you're surfing a wave: sometimes you can't go in the other direction on it, right? There are limits to kind of what you can do.
19:34
You can pump the foil and push pretty hard in a bunch of directions, but at some level, if the momentum against you is strong enough, that's not going to work. And I do think that's sort of a humbling but also an important lesson for people who are running things or building things. It's like, yeah, you know, a lot of the game is just being able to kind of push and work through complicated things, but you also need
20:04
to kind of have enough of an understanding of which things you just can't push through, and where the finesse is more important. Yeah. What are your jiu-jitsu life lessons? Well, I think you did.
20:19
You made it sound so simple and were so eloquent that it's easy to miss, but basically: being okay with, and accepting, the wisdom and the joy in getting your ass kicked, in the full range of what that means. I think that's a big gift of being humbled. Somehow, being humbled, especially physically, opens your mind to the full process of learning, what it means to learn, which is
20:49
being willing to suck at something. And jiu-jitsu just very repetitively, efficiently humbles you, over and over and over and over, to where you can carry those lessons to places where you don't get humbled as much, whether it's research, or running a company, or building stuff, where the cycle is longer, and so you just can't get humbled as quickly. Whereas here, it was, within the period of an hour, over and over and over and over. Especially when you're a beginner, you have a little person, just, you know,
21:19
somebody much smarter than you, just kick your ass repeatedly, definitively, where there's no argument. Oh yeah. And then you literally tap, because if you don't tap, you're going to die. So this is an agreement: you could have killed me just now, but we're friends, so we're going to agree that you're not going to. And that kind of humbling process just does something to your psyche, to your ego, that puts it in its proper context, to realize that, you know, everything
21:49
in this life is like a journey from sucking, through a hard process of rigorously improving, day after day after day after day. Any kind of success requires hard work. Yeah. And jiu-jitsu, more than a lot of sports, I would say, because I've done a lot of them, really teaches you that. And you made it sound so simple, like, okay, you know, it's okay, it's part of the process: you should get humbled, get your ass kicked. I've just failed and been embarrassed so many times in my
22:19
life that, like, you know, it's a core competence at this point. It's a core competence? Well, yes. And there's a deep truth to that. You said it in the very beginning, which is that that's the thing that stops us, especially as you get older, when it's easy to develop expertise in certain areas: not being willing to be a beginner in a new area. Yeah. Because that's where the growth happens: being willing to be a beginner, being willing to be embarrassed, saying something stupid, doing
22:49
something stupid. A lot of us, once we get good at one thing, want to show that off, and it sucks being a beginner, but that's where growth happens. Yeah. Well, speaking of which, let me ask you about AI. It seems like this year, for the entirety of human civilization, is an interesting year for the development of artificial intelligence. A lot of interesting stuff is happening,
23:15
so Meta is a big part of that. Meta has developed LLaMA, which is a 65-billion-parameter model. There are a lot of interesting questions we can ask here, one of which has to do with open source. But first, can you tell the story of developing this model, and of making the complicated decision of how to release it? Yeah, sure. I think you're right, first of all, that in the last year there have been a bunch of
23:45
advances on scaling up these large transformer models. So there's the language equivalent of it with large language models, and there's the sort of image-generation equivalent with these large diffusion models. There's a lot of fundamental research that's gone into this, and Meta has taken the approach of being quite open and academic in our development of AI. Part of this is that
24:15
we want to have the best people in the world researching this, and a lot of the best people want to know that they're going to be able to share their work. So that's part of the deal that we have: you know, if you're one of the top AI researchers in the world, you can come here, you can get access to kind of industry-scale infrastructure, and part of our ethos is that we want to share what's invented broadly. We do that with a lot of the different AI tools that we
24:45
create. And LLaMA is the language model that our research team made, and we did a limited open-source release for it, which was intended for researchers to be able to use it. But, you know, responsibility and getting safety right on these is very important. So, for the first one, there were a bunch of questions around whether we should be releasing this commercially, so we
25:15
kind of punted on that for V1 of LLaMA and just released it for research. Obviously, by releasing it for research, it's out there, but companies know that they're not supposed to put it into commercial releases. And we're working on the follow-up models for this, and thinking through exactly how this should work for follow-ons, now that we've had time to work on a lot more of the safety and the pieces around that. But, overall, I
25:45
mean, this is
25:47
I just kind of think
25:51
it would be good if there were a lot of different folks who had the ability to build state-of-the-art technology here, you know, and not just a small number of big companies. To train one of these state-of-the-art AI models just takes, you know, hundreds of millions of dollars of infrastructure, right? So there are not that many organizations in the world
26:21
that can do that at the biggest scale today. Now, it gets more efficient every day, so I do think that will be available to more folks over time. But I just think there's all this innovation out there that people can create, and I just think we'll also learn a lot by seeing what the whole community of students and hackers and startups and different folks build with this. That's kind of been how we've approached this,
26:50
and it's also how we've done a lot of our infrastructure. We took our whole data center design and our server design and built this Open Compute Project, where we just made that public. And part of the theory was, all right, if we make it so that more people can use this server design, then that will enable more innovation. It'll also make the server design more efficient, and that will make our business more efficient too. So that's worked, and we've just done this with a lot of our infrastructure. So, for people who don't know, you did the limited release, I think in February
27:20
of this year, of LLaMA, and it got, quote-unquote,
27:26
leaked, meaning it escaped the limited-release aspect. But it was, you know, something you probably anticipated, given that it was just released for research, shared with researchers, right? It was just trying to make sure that there's a slow release. Yeah. But from there, I just would love to get your comment on what happened next, which is, there's this very vibrant open-source community that just built stuff on top of it. There's a lot:
27:56
there's llama.cpp, basically stuff that makes it more efficient to run on smaller computers. There's combining it with reinforcement learning with human feedback, so some of the different interesting fine-tuning mechanisms. There's also fine-tuning on GPT-3 generations. There's a lot of, like, GPT4All, Alpaca, Koala, all these kinds of models that just kind of sprang up, running on top of it. What do you think about that? No, I think it's been really neat to see. I mean, there have been
28:26
folks who are getting it to run on local devices, right? So if you're an individual who just wants to experiment with this at home, you probably don't have a large budget to get access to a large amount of cloud compute, so getting it to run on your local laptop, you know, is pretty good, right, and pretty relevant. And then there are things like, yeah, llama.cpp, which reimplemented it more efficiently. So, you know, now even when we run our own versions of it, we can
28:56
do it on way less compute, and it's just way more efficient, saves a lot of money for everyone who uses this. So that is good. I do think it's worth calling out that, because this was a relatively early release, LLaMA isn't quite as on the frontier as, for example, the biggest OpenAI models or the biggest Google models, right? You mentioned that the largest
29:26
LLaMA model that we released had 65 billion parameters, and no one knows, I guess, outside of OpenAI, exactly what the specs are for GPT-4. But I think my understanding is it's like ten times bigger, and I think Google's PaLM model also has about ten times as many parameters. Now, the LLaMA models are very efficient, so they perform well for something that's around 65 billion parameters. So for me, that was also part of this, because there's this whole debate around:
29:56
is it good for everyone in the world to have access to the most frontier AI models? And I think, as the AI models start approaching something that's like a superhuman intelligence, that's a bigger question that we'll have to grapple with. But right now, I mean, these are still very basic tools. They're powerful in the sense that, you know, a lot of open-source software, like
30:26
databases or web servers, can enable a lot of pretty important things. But I don't think anyone looks at the current generation of LLaMA and thinks it's anywhere near a superintelligence. So for a bunch of those questions around whether it's good to get it out there, I think, at this stage, surely you want more researchers working on it, for all the reasons that open-source software has a lot of advantages. We talked about efficiency before, but
30:56
another one is just that open-source software tends to be more secure, because you have more people looking at it openly and scrutinizing it and finding holes in it, and that makes it more safe. I think at this point it's generally agreed upon that open-source software is generally more secure and safer than things that are developed in a silo, where people try to get security through obscurity. So I think, for the scale of what we're seeing now with AI,
31:26
I think we're more likely to get to good alignment and a good understanding of kind of what needs to happen to make this work well by having it be open source, and that's something that I think is quite good to have out there and happening publicly at this point. Meta has released a lot of models as open source, like the massively multilingual speech model. I mean, I'll ask you questions about this, but the point is,
31:54
you've open-sourced quite a lot. You've been spearheading the open-source movement, which is really positive and inspiring to see, from one angle, from the research angle. Of course, there are folks who are really terrified about the existential threat of artificial intelligence, and those folks will say that, you know, you have to be careful about the open-sourcing step. So where do you see the future of open source here, as part of Meta? The tension here is:
32:24
do you want to release the magic sauce? That's one tension. And the other one is: do you want to put a powerful tool in the hands of bad actors, even though it probably has a huge amount of positive impact also? Yeah, I mean, again, I think for the stage that we're at in the development of AI, I don't think anyone looks at the current state of things and thinks that this is superintelligence. And, you know, the models that we're talking about here, the LLaMA models, are
32:54
generally an order of magnitude smaller than what OpenAI or Google are doing. So I think that, at least for the stage that we're at now, the equities balance strongly, in my view, toward doing this more openly. I think if you got something that was closer to superintelligence, then you'd have to discuss that more and think through that a lot more, and we haven't made a decision yet as to what we would do if we were in that position. But I think there's a good chance
33:24
that we're pretty far off from that position. So I'm certainly not saying that the position that we're taking on this now
33:35
applies to every single thing that we would ever do. And, you know, certainly inside the company, we probably do more open source work than most of the other big tech companies, but we also don't open source everything. A lot of the core app code for WhatsApp or Instagram or something, I mean, we're not open sourcing that. It's not like a general enough piece of software that would be useful for a lot of people to do different things. Whereas the software that we do open source, whether it's an open
34:05
server design, or basically, you know, things like memcached, which I think was probably the last project that I worked on, one of the last things that I coded and led directly for the company, basically this caching tool for quick data retrieval: these are things that are just broadly useful across anything that you want to build. And I think that some of the
34:35
language models now have that feel, as well as some of the other things that we're building, like the translation tool that you just referenced. So, text-to-speech and speech-to-text: you've expanded it from around 100 languages to more than 1,100 languages, and the model can identify more than 4,000 spoken languages, which is 40 times more than any previously known technology. To me, that's really, really exciting in terms of connecting the world, breaking down the barriers that language creates.
35:05
Yeah, I think being able to translate between all of these different languages in real time, this has been a
35:12
kind of common sci-fi idea, that we'd all have, you know, whether it's an earbud or glasses or something, that can help translate in real time between all these different languages. And that's one where I think technology is basically delivering now. So yeah, I think that's pretty exciting. You mentioned the next version of Llama. What can you say about the next version of Llama? What can you say about, like, what are you
35:41
working on in terms of the release, in terms of the vision for that? Well, a lot of what we're doing is taking the first version, which was primarily this research version, and trying to now build a version that has all of the latest state-of-the-art safety precautions built in, and we're using some more data to train it from across our services. But a lot of the
36:11
work we're doing internally is really just focused on making sure that this is as aligned and responsible as possible. And, you know, we're talking about kind of the open source infrastructure, but the main thing that we focus on building here is a lot of product experiences to help people connect and express themselves. So, I've talked about a bunch of this stuff, but you'll have an assistant that you can talk to
36:41
in WhatsApp. You know, I think in the future, every creator will have kind of an AI agent that can act on their behalf, that their fans can talk to. I want to get to the point where every small business basically has an AI agent that people can talk to, to do commerce and customer support and things like that. So there are going to be all these different things, and
37:05
Llama, the language model underlying this, is basically going to be the engine that powers that. The reason to open source it is that, as we did with the first version, it basically unlocks a lot of innovation in the ecosystem, will make our products better as well, and also gives us a lot of valuable feedback on security and safety, which is important for making this good. But yeah, I mean, the work that we're doing to advance this,
37:35
it's basically at this point taking it beyond a research project into something which is ready to be kind of core infrastructure, not only for our own products but, you know, hopefully for a lot of other things out there too. So the Llama, or the language model underlying that, version 2, will be open sourced? And you have internal debate around that, the pros and cons, and so on? I mean, we're talking about the debates that we have internally, and I think
38:07
I think the question is how to do it right. I mean, we did the research license for V1, and I think the big thing that we're thinking about is basically, what's the right way to do it. So there was a leak that happened, I don't know if you can comment on it, for the V1? We released it as a research project for researchers to be able to use, but in doing so, we put it out there. So
38:35
then we were very clear that anyone who uses the code and the weights doesn't have a commercial license to put it into products, and we've generally seen people respect that, right? It's like, you don't have any reputable companies that are basically trying to put this into their commercial products. But yeah, by sharing it with so many researchers, it did leave the building. But what have you learned from that process that you might be able to apply to V2, about how to release it safely and
39:05
effectively, if you do release it? Yeah, well, I mean, I think a lot of the feedback, like I said, is just around, you know, different things around how you fine-tune models to make them more aligned and safer, and you see all the different data recipes, as you mentioned, in a lot of different projects that are based on this. I mean, there's one at Berkeley, there's, you know, they're just all over. And
39:32
people have tried a lot of different things, and we've tried a bunch of stuff internally, so we're kind of making progress here, but we're also able to learn from some of the best ideas in the community, and, you know, we want to just continue pushing that forward. But, so, like, I don't have any news to announce, if that's what you're asking. This is a thing that we're still kind of actively working through, the right way to move
40:02
forward here. The details of the secret sauce are still being developed, I see. Can you comment on what you think of the thing that worked for GPT, which is reinforcement learning with human feedback? So, doing this alignment process, do you find it interesting? And as part of that, let me ask, because I talked to Yann LeCun before talking to you today: he asked me to ask, or suggested that I ask, do you think LLM fine-tuning will need to be crowdsourced,
40:32
so, crowdsourcing, this kind of idea of how to integrate the human into the fine-tuning of these foundation models? Yeah, I think that's a really interesting idea that I've talked to Yann about a bunch. And you were talking about, how do you basically train these models to be as safe and aligned and responsible as possible? And different groups out there
41:02
are doing development, testing different data recipes in fine-tuning. But this idea that you just mentioned is
41:12
that, at the end of the day, instead of having kind of one group fine-tune some stuff, and then another group, you know, produce a different fine-tuning recipe, and then us trying to figure out which one we think works best to produce the most aligned model, I do think it would be nice if you could get to a point where you had a Wikipedia-style collaborative way for a kind of a
41:41
broader community to fine-tune it as well. Now, there are a lot of challenges in that, both from an infrastructure and, like, community management and product perspective, about how you do that, so I haven't worked that out yet. But as an idea, I think it's quite compelling, and I think it goes well with the ethos of open sourcing the technology to also find a way to have a kind of community-driven training of it.
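At the data layer, a Wikipedia-style collaborative fine-tuning effort could look something like aggregating preference votes from many community annotators into one training set. A minimal sketch, assuming a simple majority-vote scheme; none of this reflects how Meta actually does it:

```python
from collections import Counter

def aggregate_preferences(votes):
    """Collapse many annotators' votes into one preference label per prompt.

    votes: list of (prompt, response_a, response_b, choice) tuples, where
    choice is "a" or "b" as picked by one community annotator. Returns a
    list of (prompt, chosen, rejected) records of the kind used for
    preference-based fine-tuning (e.g. reward modeling in RLHF).
    """
    tallies = {}
    for prompt, resp_a, resp_b, choice in votes:
        key = (prompt, resp_a, resp_b)
        tallies.setdefault(key, Counter())[choice] += 1

    dataset = []
    for (prompt, resp_a, resp_b), counts in tallies.items():
        # Majority vote decides which response is "chosen"; ties are dropped
        # because they carry no preference signal.
        if counts["a"] == counts["b"]:
            continue
        winner = resp_a if counts["a"] > counts["b"] else resp_b
        loser = resp_b if winner is resp_a else resp_a
        dataset.append((prompt, winner, loser))
    return dataset

votes = [
    ("Explain RLHF", "short answer", "detailed answer", "b"),
    ("Explain RLHF", "short answer", "detailed answer", "b"),
    ("Explain RLHF", "short answer", "detailed answer", "a"),
]
print(aggregate_preferences(votes))
# the majority picks the detailed answer as "chosen"
```

The infrastructure and moderation challenges Mark mentions would live around this step: deciding who gets to vote, and how disputed examples get resolved.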
42:11
But I think there are a lot of questions on this. In general, these questions around what's the best way to produce aligned AI models: it's very much a research area, and it's one that I think we will need to make as much progress on as the kind of core intelligence capability of the models themselves. Well, I just did a conversation with Jimmy Wales, the founder of Wikipedia, and to me, Wikipedia is one of the greatest websites ever created.
42:40
It's kind of a miracle that it works, and I think it has to do with something that you mentioned, which is community. You have a small community of editors that somehow work together well, and they handle very controversial topics, and they handle them with balance and with grace, despite the attacks that they often face. I mean, it has issues, just like any other human system. But yes, I mean,
43:10
it's amazing what they've been able to achieve, but it's also not perfect, and I think there are still a lot of challenges, right? The more controversial the topic, the more difficult the journey towards quote-unquote truth, or knowledge, or wisdom, that the community is trying to capture. And in the same way, with AI models, we need to be able to generate those same things: truth, knowledge, and wisdom. And how do you align those models so that they
43:40
generate something that is closest to truth? There are these concerns about misinformation and all this kind of stuff that nobody can define, and it's something that we, together as a human species, have to define: what is truth, and how to help AI systems generate it. One of the things language models do really well is generate convincing-sounding things that can be completely wrong. And so,
44:10
how do you align it
44:13
to be less wrong? Part of that is the training, and part of that is the alignment, and however you do the alignment stage. And, like I said, it's a very new and very open research problem. Yeah, and I think there are also a lot of questions about whether the current architecture for LLMs,
44:35
as you continue scaling it, what happens? I mean, a lot of what's been exciting in the last year is that there's clearly been a qualitative breakthrough, where, with some of the GPT models that OpenAI put out, and that others have been able to do as well, it reached a kind of level of quality where people are like, wow, this feels different, and it's going to be able to be the foundation for building a lot of awesome product
45:05
experiences and value. But the other realization that people have is: wow, we just made a breakthrough.
45:13
If other breakthroughs come quickly, then I think there's the sense that maybe we're closer to general intelligence. But I think that idea is predicated on the idea that there are still generally a bunch of additional breakthroughs to make, and we just don't know how long it's going to take to get there. And, you know, one view that some people have, which doesn't tend to be my view as much, is that simply scaling the current LLMs, and getting to
45:43
higher-parameter-count models by itself, will get to something that is closer to general intelligence. But I don't know; I tend to think that there are probably more
45:57
fundamental steps that need to be taken along the way there. But still, the leaps taken with this extra alignment step are quite incredible, quite surprising to a lot of folks. And on top of that, when you start to have hundreds of millions of people potentially using a product that integrates it, you can start to see civilization-transforming effects before you achieve quote-unquote superintelligence. It could be
46:26
quite transformative without being a superintelligence. Oh yeah, I mean, I think there are going to be a lot of amazing products and value that can be created with the current level of technology, to some degree. Yeah, I'm excited to work on a lot of those products over the next few years. And I think it would just create a tremendous amount of whiplash if there keep on being stacked breakthroughs, because I think, to some degree, the industry and the world
46:56
need some time to kind of build these breakthroughs into the products and experiences that we all use, so we can actually benefit from them. But I don't know, I think there's just
47:11
an awesome amount of stuff to do. I mean, I think about all of the small businesses or individual entrepreneurs out there who are now going to be able to get help coding the things that they need to go build, or designing the things that they need, or who will be able to use these models to do customer support for the people that they're serving over WhatsApp, without having to, you know... I just think that this is all going to be
47:41
super exciting. It's going to create better experiences for people and just unlock a ton of innovation and value. So, I don't know if you know, but over three billion people use WhatsApp, Facebook, and Instagram.
47:58
So any kind of AI-fueled products that go into that, like we're talking about, anything with LLMs, will have a tremendous amount of impact. Do you have ideas and thoughts about possible products that might start being integrated into these platforms used by so many people? Yeah, I think there are three main categories of things that we're working on.
48:27
The first, which I think is probably the most interesting, is:
48:36
there's this notion that you're going to have an assistant, or an agent, who you can talk to. And I think probably the biggest thing that's different about my view of how this plays out, from what I see with OpenAI and Google and others, is everyone else is building, like, the one singular AI, right? It's like, okay, you talk to ChatGPT, or you talk to Bard, or you talk to Bing. And my view is
49:05
that there are going to be a lot of different AIs that people are going to want to engage with, just like you want to use a number of different apps for different things, and you have relationships with different people in your life who fill different emotional roles for you. And so I think there are going to be reasons why people don't just want, like, a singular AI. And that, I think, is probably the biggest distinction
49:33
in terms of how I think about this. And in a bunch of these things, I think you'll want an assistant, and I mentioned a couple of these before: I think every creator who you interact with will ultimately want some kind of AI that can proxy them and be something that their fans can interact with, or that allows them to interact with their fans. This is like the common creator promise: everyone's trying to build a community and engage with people, and they want tools to amplify themselves more, and be able to do that,
50:04
but you only have 24 hours in a day. So I think having the ability to basically, like, bottle up your personality, or, you know, give your fans information about when you're performing a concert or something like that, I mean, that I think is going to be something that's super valuable. But again, it's not this idea that people are going to want just one singular AI; I think you're going to want to interact with a lot of different entities. And then I think there's
50:33
the business version of this, too, which we've touched on a couple of times. Which is: I think every business in the world is going to want basically an AI. It's like, you have your page on Instagram or Facebook or WhatsApp or whatever, and you want to point people to an AI that people can interact with, but you want to know that that AI is only going to sell your products. You don't want it, you know, recommending your competitors' stuff, right? So it's not like there can be just one singular AI
51:03
that can answer all the questions for a person, because, you know, that AI might not actually be aligned with you, as a business, to really just do the best job providing support for your product. So I think there's going to be a clear need, in the market and in people's lives, for there to be a bunch of these. Part of that is figuring out the research, the technology that enables the personalization that you're talking about. So not one centralized,
51:33
god-like LLM, but a huge diversity of them, fine-tuned to particular needs, particular styles, particular businesses, particular brands, all that kind of stuff. And also just enabling people to create them really easily, for your own business, or, if you're a creator, to help you engage with your fans. Yeah, so I think there's a clear, kind of interesting product direction here that I think
52:03
is fairly unique from what any of the other big companies are taking. It also aligns well with this sort of open source approach, because, again, we sort of believe in this more community-oriented, more democratic approach to building out the products and technology around this. We don't think there's going to be the one true thing; we think there should be kind of a lot of development. So that part of things, I think, is going to be really interesting, and we could spend a lot of time talking about that, and the kind of
52:33
implications of that approach being different from what others are taking. There are a bunch of other, simpler things that I think we're also going to do, just going back to your question around how this finds its way into what we build. There could be a lot of simpler things around:
52:53
okay, you post photos on Instagram and Facebook and WhatsApp and Messenger, and you want the photos to look as good as possible. So, like, having an AI where you can just take a photo and then just tell it, okay, I want to edit this thing, or describe this. I think we're going to have tools that are just way better than what we've historically had for this, and that's more on the image and media generation side than the large language model side. But it all kind of plays off of advances in the same
53:23
space. So there are a lot of tools that I think are just going to get built into every one of our products. I think every single thing that we do is going to basically evolve in this direction, right? It's like, in the future, if you're advertising on our services, do you need to make your own kind of ad creative? No, you'll just, you know, you'll just tell us, okay, I'm a dog walker, and
53:47
I'm willing to walk people's dogs; help me find the right people, and create the ad unit that will perform the best. And, like, you give an objective to the system, and it just kind of connects you with the right people. Well, that's a super powerful idea: generating the language of the ad, almost like rigorous A/B testing for you; if it works, then it finds the best customers for you.
54:17
The thing is, I mean, to me, advertisement, when done well, just finds a good match between a human being and a product, and I think that will make that human being happy. Yeah, totally, and it does that as efficiently as possible. When it's done well, people actually like it. You know, I think there are a lot of examples where it's not done well and it's annoying, and I think that's what kind of gives it a bad rap. But yeah, a lot of this stuff is possible today. I mean, obviously, A/B testing is built into a lot of these frameworks.
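The A/B testing being discussed here can be sketched as a simple two-proportion comparison between two generated ad variants. A toy illustration only; real ad delivery systems use far more sophisticated statistics than this:

```python
import math

def ab_winner(clicks_a, views_a, clicks_b, views_b, z_threshold=1.96):
    """Compare click-through rates of two ad variants.

    Returns "a", "b", or None if the difference isn't statistically
    significant at roughly the 95% level (two-proportion z-test).
    """
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants perform equally.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    if se == 0:
        return None
    z = (p_a - p_b) / se
    if abs(z) < z_threshold:
        return None  # not enough evidence either way
    return "a" if z > 0 else "b"

# Variant B gets a clearly higher click-through rate on the same traffic.
print(ab_winner(clicks_a=50, views_a=1000, clicks_b=90, views_b=1000))  # prints "b"
```

What the conversation adds on top of this familiar loop is the generation step: a model proposing the variants to feed into the test in the first place.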
54:47
The thing that's new is having technology that can generate the ideas for what to A/B test. So I think that's exciting. And this will just be across everything that we're doing, all the metaverse stuff that we're doing, right? It's like, you want to create worlds: in the future, you'll just describe them, and then it'll create the code for you. So natural language becomes the interface we use for all the ways we interact with the computer, with the digital world. Yeah, yeah, totally. Which is
55:17
something everyone can do using natural language, and with translation, you can do it in any language. The idea of AI personalization is really, really interesting; it unlocks so many possible things. I mean, I, for one, look forward to creating a copy of myself. I talked about this last time, but since the last time, this has become much closer. Like, having interacted with some of these language models, I can literally see the
55:47
absurd situation where I'll have a large Lex language model, and I'll have to have a conversation with him, like, hey, listen, you're just getting out of line, and have a conversation with it to fine-tune that thing to be a little bit more respectful, something like this. And yeah, that seems like an amazing product
56:14
for businesses, for humans. Not just the assistant that's facing the individual, but the assistant that represents the individual to the public, in both directions. There's basically a layer that is the AI system through which you interact with the outside world, the outside world that has humans in it. That's really interesting. And you have social networks
56:43
that connect billions of people. It seems like a heck of a large-scale place to test some of this stuff out. Yeah, I mean, I think part of the reason why creators will want to do this is because they already have their communities on our services. Yeah. And a lot of the interfaces for this stuff today are chat-type interfaces, and between WhatsApp and Messenger, I think those are just great ways to interact with people.
57:14
So some of this is philosophy, but do you see a near-term future where some of the people you're friends with, on the social networks, on Facebook, on Instagram, even on WhatsApp, are AI systems? Having conversations that are heterogeneous: some humans, some AIs? I think we'll get to that, you know. If only just empirically looking at it:
57:43
Microsoft released this thing called XiaoIce several years ago in China, and it's a pre-LLM chatbot technology that was a lot simpler than what's possible today. And I think, like, tens of millions of people were using it and, you know, really became quite attached and built relationships with it. And I think there are services today, like Replika, where people are doing things like that. And
58:13
so I think there are certainly needs for companionship that people have, you know, older people especially, and I think most people don't have as many friends as they would like to have, right? If you look at it, there are some interesting demographic studies around how, for the average person,
58:36
the number of close friends that they have is fewer today than it was 15 years ago. And, I mean, that gets to, like, the core thing that I think about in terms of building services that help connect people. So I think tools that help people connect with each other are going to be the primary thing that we want to do. So you can imagine AI assistants that, you know, just do a better job of reminding you when it's your friend's birthday and how you can celebrate them.
59:07
Right. So right now, we have, like, the little box in the corner of the website that tells you whose birthday it is and stuff like that. But, you know, at some level, you don't want to just send everyone the same note saying happy birthday with an emoji, right? So having something that's more of a social assistant in that sense, that can, you know, update you on what's going on in their life and how you can reach out to them effectively, help you be a better friend. I think that's something that's super
59:36
powerful, too. But yeah, beyond that, there are all these different flavors of kind of personal AIs that I think could exist. So I think an assistant is sort of the simplest one to wrap your head around, but I think a mentor, or a life coach, someone who can give you advice, who's maybe a bit of a cheerleader, who can help pick you up through all the challenges that we inevitably
1:00:06
go through on a daily basis. There's probably some role for something like that. And then, you know, you can just go through a lot of the different types of kind of functional relationships that people have in their lives, and, you know, I would bet that there will be companies out there that take a crack at a lot of these things. So I think that's part of the interesting innovation that's going to exist. There are certainly a lot, like education tutors, right? It's like, I mean, I just look at
1:00:36
In my kids learning to code and they loved it but you know it's like they get stuck on a question and they have to wait till like I can help answer it right or someone else who they know can help answer the question the future. They'll just there will be like a coding assistant that they have that is like designed to be perfect for teaching a five and a seven year old had a code and and they'll just be able to ask questions all the time and you know be extremely patient. It's never going to get annoyed at them, right?
1:01:07
I think there are all these different kinds of relationships, or functional relationships, that we have in our lives that are really interesting. And I think one of the big questions is, okay, is this all going to just get bucketed into, you know, one singular AI? I just don't think so. Well, this is actually a question from Reddit: what are the long-term effects on human communication when people can, quote-unquote, talk with others through a chatbot
1:01:36
that augments their language automatically, rather than developing social skills by making mistakes and learning? Will people just communicate by grunts in a generation? What do you think about the long-term effects, at scale, of integrating AI into our social interaction? Yeah, I mean, I think it's mostly good. That question was sort of framed in a negative way, but we were talking before about language models helping you communicate, like with language translation. How
1:02:06
can you communicate with people whose language you don't speak? At some level, all this social technology is doing is helping people
1:02:17
express themselves better to people in situations where they would otherwise have a hard time doing that. So part of it might be, okay, you speak a language that I don't know. That's a pretty basic one; I don't think people are going to look at that and say it's sad that we have the capacity to do that, because I should have just learned your language, right? I mean, that's a pretty high bar. But
1:02:40
overall, I'd say there are all these impediments, and language is an imperfect way for people to express thoughts and ideas. It's one of the best that we have; we also have art, we have code. But language is also a mapping of the way you think, the way you see the world, who you are. And one of the applications: I recently talked to a person who's a Jiu-Jitsu instructor. He said
1:03:10
that when he emails parents about how their son or daughter can improve their discipline in class, and so on, he often finds that he comes off as a bit more of an asshole than he would like, so he uses GPT to translate his original email into a nicer email. Yeah, we hear this all the time. A lot of creators on our services tell us that one of the most stressful things is
1:03:40
basically negotiating deals with brands and stuff, the business side of it. Because, I mean, they do their thing, right? The creators are excellent at what they do, and they just want to connect with their community, but then they get really stressed: you know, they go into their DMs, and they see some brand wants to do something with them, and they don't quite know how to negotiate or how to push back respectfully. And so I think building a tool that can actually allow them to do that well is one simple thing that I think is just an
1:04:09
interesting thing that we've heard from a bunch of people that they'd be interested in. But going back to the broader idea:
1:04:19
I don't know. I mean, you know, Priscilla and I just had our third daughter. Congratulations. Thank you. Thanks. And, you know, one of the saddest things in the world is seeing your baby cry, right? But why is that? Well, because babies don't generally have much capacity to tell you what they care about. And it's not actually just babies, right? You know, my five-year-old daughter cries too, because she
1:04:49
sometimes has a hard time expressing, you know, what matters to her. And then you think about that, and it's like, well, actually, a lot of adults get very frustrated too, because they have a hard time expressing things, in a way that, going back to some of the earlier themes, maybe, you know, feels like a mistake, or maybe they have pride, or something; all these things get in the way. So, I don't know, I think that all these different technologies that can help us navigate the social complexity,
1:05:19
and actually be able to better express what we're feeling and thinking, I think that's generally all good. And there are all these concerns, like, okay, are people going to have worse memories because you have Google to look things up? But I think, in general, a generation later, you don't look back and lament that. I think it's just like, wow, we have so much more capacity to do so much more now, and I think that will be the case here too. You can allocate those cognitive capabilities to deeper, more nuanced thought.
1:05:49
Yeah, yeah, but it's a change. So, just like with Google Search, with the addition of large language models, you basically don't have to remember nearly as much, just like with Stack Overflow for programming. Now that these language models can generate code right there, I mean, I find that maybe 80 to 90% of the code I write is now generated first
1:06:19
and then edited. You see, you don't have to remember how to write the specifics of different functions. Oh, that's great. And it's not just the specific coding. I mean, in the context of a large company like this, I think before an engineer can sit down to code, they first need to figure out all of the libraries and dependencies that, you know, tens of thousands of people have written before them. And, you know, one of the things that I'm excited about
1:06:49
is not just tools that help engineers code; it's tools that can help summarize the whole knowledge base and help people navigate all the internal information. Yeah, in the experiments that I've done with this stuff, I mean, that's with the public stuff, you can just ask one of these models to build you a script that does anything, and it basically already understands what the best libraries are to do that thing and pulls them in automatically.
1:07:19
I think that's super powerful. That was always the most annoying part of coding, that you had to spend all this time actually figuring out what the resources were that you were supposed to import before you could actually start building the thing. / Yeah, I mean, there's of course the flip side of that. I think for the most part it's positive, but the flip side is, if you outsource that thinking to an AI model, you might miss nuanced mistakes and bugs, or you lose the skill to find those
1:07:49
bugs. And those bugs, maybe the code looks very convincingly right, but it's actually wrong in a very subtle way. But that's the trade-off that we face as human civilization when we build more and more powerful tools: when we stand on the shoulders of taller and taller giants, we can do more, but then we forget how to do all the stuff that they did. So it's a weird trade-off.
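The "convincingly right but subtly wrong" failure mode described here can be made concrete with a toy sketch. This example is entirely hypothetical, not from the conversation: a plausible-looking moving-average function with an off-by-one in its range, next to the reviewed fix.

```python
# Toy illustration of generated code that "looks right" but is subtly wrong.

def moving_average_buggy(xs, window):
    # Plausible-looking output: the range silently drops the final window.
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window)]

def moving_average(xs, window):
    # Corrected after human review: include the last full window.
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]
```

On `[1, 2, 3, 4]` with a window of 2, the buggy version returns two averages instead of three; nothing crashes, so only a reviewer who still remembers the specifics would catch it.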
1:08:19
Yeah, I agree. I mean, I think it is very valuable in your life to be able to do basic things. / Do you worry about some of the concerns of bots being present on social networks, more and more human-like bots that are not necessarily trying to do a good thing, or might be explicitly trying to do a bad thing, like phishing scams, social engineering, all that kind of stuff? Which has always been a very difficult problem
1:08:49
for social networks, but now it's becoming almost a more difficult problem. / Yeah, there are a few different parts of this. So one is,
1:09:00
there are all these harms that we need to basically fight against and prevent. And that's been a lot of our focus over the last five or seven years: basically ramping up very sophisticated AI systems, not generative AI systems, more kind of classical AI systems, to be able to categorize and classify and identify, okay, this post looks like it's promoting terrorism, this one is
1:09:29
exploiting children, this one looks like it might be trying to incite violence, this one is an intellectual property violation. There are like 18 different categories of violating, harmful content that we've had to build specific systems to be able to track, and I think it's certainly the case that advances in generative AI will test those.
1:10:00
But at least so far it's been the case, and I'm optimistic that it will continue to be the case, that we will be able to bring more computing power to bear to have even stronger AIs that can help defend against those things. So, we've had to deal with some adversarial issues before, right? I mean, for some things, like hate speech, people aren't generally getting a lot more sophisticated. The average person, let's say someone saying some kind of racist thing, it's like,
1:10:29
honestly, they're not getting more sophisticated at being racist, so the system can just find it. But then there are other adversaries who actually are very sophisticated, like nation-states doing things. And we find, whether it's Russia or just different countries, networks of bots or inauthentic accounts, as we call them, because they're not necessarily bots; some of them could actually be real people who are masquerading as
1:10:59
other people. But they're acting in a coordinated way, and some of that behavior has gotten very sophisticated, and it's very adversarial. So each iteration, every time we find something and stop them, they kind of evolve their behavior. They don't just pack up their bags and go home and say, okay, we're not going to try. Although at some point they might decide doing it on Meta's services is not worth it and go do it somewhere else, if it's easier to do in another place. But we have a fair amount of experience
1:11:29
dealing with
1:11:31
these kinds of adversarial attacks, where they just keep on getting better and better. And I do think that, as long as we can keep on putting more compute power against it, and if we're one of the leaders in developing some of these AI models, I'm quite optimistic that we're going to be able to keep on pushing against the normal categories of harm that you talk about: fraud, scams, spam, IP violations, things like that. / What about creating narratives and
1:12:00
controversy? To me, it's kind of amazing how a small collection of, yeah, what did you say, inauthentic accounts? So it could be bots. / Yeah, I mean, we have sort of this funny name for it, but we call it coordinated inauthentic behavior. / Yeah, it's kind of incredible how a small collection of folks can create narratives, create stories, especially if they're viral, especially if they have an element that can catalyze the virality of the
1:12:30
narrative. / Yeah, I think the question there is, you have to be
1:12:34
very specific about what is bad about it, right? Because I think a set of people coming together, organically bouncing ideas off each other, and a narrative coming out of that
1:12:47
is not necessarily a bad thing by itself. If it's authentic and organic, that's a lot of how what happens in our culture gets created, how art gets created, and all that good stuff. So that's why we've focused on this sense of coordinated inauthentic behavior. So it's like, if you have a network of, whether it's bots or some people masquerading as different accounts, but you have someone pulling the strings behind it and trying to act as
1:13:16
if this is a more organic set of behavior, but really it's not, it's just one coordinated thing, that seems problematic to me, right? I mean, I don't think people should be able to have coordinated networks and not disclose them as such. But again, you know, we've been able to deploy pretty sophisticated AI and counterterrorism groups and things like that to be able to identify a fair number of these coordinated inauthentic networks of accounts and take them down.
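The idea of coordination as the signal, rather than any single post, can be sketched with a toy heuristic. This is only an illustration of the concept, not Meta's actual method; real systems use far richer behavioral features. All names here are hypothetical.

```python
from collections import defaultdict

def flag_coordinated(posts, min_accounts=3, window_secs=60):
    """Flag groups of accounts posting near-identical text within a short
    time window -- one crude signal of coordinated inauthentic behavior.
    posts: iterable of (account_id, text, unix_timestamp) tuples."""
    buckets = defaultdict(set)  # (normalized text, time bucket) -> accounts
    for account, text, ts in posts:
        key = (text.strip().lower(), ts // window_secs)
        buckets[key].add(account)
    # A bucket shared by many accounts suggests someone pulling the strings.
    return [sorted(accounts) for accounts in buckets.values()
            if len(accounts) >= min_accounts]
```

Note the point made in the conversation: no individual post here is judged bad on its content; only the coordinated pattern across accounts trips the flag.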
1:13:47
We continue to do that. And it's one thing that, if you told me 20 years ago, all right, you're starting this website to help people connect at a college, and in the future, part of your organization is going to be a counterterrorism organization with AI to find coordinated inauthentic behavior, I would have thought that was pretty wild. But I think that's part of where we are. But look, I think these questions that you're pushing on now,
1:14:17
this is actually where I guess most of the challenge around AI will be for the foreseeable future. I think there's a lot of debate around things like, is this going to create existential risk to humanity? And I think that those are very hard things to disprove one way or another. My own intuition, on the point at which we become close to superintelligence, is that it's just really unclear to me that the current technology is going to
1:14:45
get there without another set of significant advances. But that doesn't mean that there's no danger. I think the danger is basically amplifying the known set of harms that people or sets of accounts can do, and we just need to make sure that we really focus on defending against that as well as possible. So that's definitely a big focus for me. / Well, you could basically use a large language model as an assistant for how to cause harm on social networks. You can ask it a question:
1:15:17
you know, Meta has very impressive capabilities for fighting coordinated inauthentic accounts; how do I do the coordinated inauthentic account creation in a way Meta doesn't detect? Like, literally ask that question. And basically, I mean, that's what OpenAI has shown, that they're concerned about those questions. Perhaps you can comment on your approach to it, how to
1:15:45
do a kind of moderation on the output of those models, so that it can't be used to help you coordinate harm, in the full definition of what harm means. / Yeah, and that's a lot of the fine-tuning and the alignment training that we do. Basically, when we ship AIs across our products, a lot of what we're trying to make sure is that you can't ask it to help you
1:16:15
commit a crime, right?
1:16:21
So, training it to kind of understand that. And it's not like any of these systems are ever going to be 100% perfect, but just making it so that this isn't an easier way to go about
1:16:38
doing something bad than the next best alternative, right? People still have Google, you still have search engines, so the information is out there. And what we see is, for nation-states or these actors that are trying to pull off these large coordinated inauthentic networks to influence different things, at some point, when we just make it very difficult, they just try to use other services instead.
1:17:08
Right. It's just like, if you can make it more expensive for them to do it on your service, then people go elsewhere. And I think that's the bar, right? It's not like, okay, are you ever going to be perfect at finding every adversary who tries to attack you? You try to get as close to that as possible, but really, economically, what we're trying to do is make it so that it's just inefficient for them to go after it.
1:17:35
But there are also complicated questions of what is and isn't
1:17:38
harm, what is and isn't misinformation. This is one of the things that Wikipedia has also tried to face. I remember asking GPT about whether the virus leaked from a lab or not, and the answer provided was a very nuanced, well-cited, almost well-thought-out, balanced one. I would hate for that nuance to be lost through the process of moderation.
1:18:08
Wikipedia does a good job on that particular thing too, but from pressures from governments and institutions, you could see some of that nuance and depth of information, facts, and wisdom be lost. / Absolutely. / And that's a scary thing. Some of the magic, some of the edges, the rough edges, might be lost in the process of moderation of AI systems.
1:18:35
So how do you get that right? / I really agree with what you're pushing on. I mean, the core,
1:18:41
the core shape of the problem is that there are some harms that I think everyone agrees are bad, right? Sexual exploitation of children: you're not going to get many people who think that type of thing should be allowed on any service, and that's something that we face and try to push off the service as much as possible today. Terrorism, inciting violence, right? It's like we went through a bunch
1:19:11
of each of these types of harms before.
1:19:15
But then I do think that you get to a set of harms where there is more social debate around them. So misinformation, I think,
1:19:26
has been a really tricky one, because there are things that are
1:19:31
kind of obviously false, right, that may be factual but may not be harmful. So it's like, are you going to censor someone for just being wrong? If there's no kind of harm implication of what they're doing, I think there are a bunch of real issues and challenges there. But then I think that there are other places where, just take some of the stuff around COVID earlier on in the pandemic, where there were
1:20:01
real health implications, but there hadn't been time to fully vet a bunch of the scientific assumptions. And, you know, unfortunately, I think a lot of the establishment kind of waffled on a bunch of facts and asked for a bunch of things to be censored that, in retrospect, ended up being more debatable or true. And that stuff is really tough; it really undermines trust. And so I do think that the
1:20:30
questions around how to manage that are very nuanced. The way that I try to think about it is that it's best to generally boil things down to the harms that people agree on. So when you think about whether something is misinformation or not, I think often the more salient bit is: is this going to potentially lead to physical harm for someone? And kind of think about it in that sense.
1:21:00
And beyond that, I think people just have different preferences on how they want things to be flagged for them. I think a bunch of people would prefer to have a flag on something that says, hey, a fact-checker thinks this might be false. I think Twitter's Community Notes implementation is quite good on this, but again, it's the same type of thing: it's just discretionarily adding a flag because it makes the user experience better, but it's not trying to take down the information. I think that you want to reserve the kind of censorship
1:21:30
of content for things that are in known categories that people generally agree are bad.
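The tiering being described, remove only broadly agreed harms, label rather than remove contested claims, can be sketched as a tiny decision function. The category names and thresholds here are illustrative assumptions, not Meta's actual policy taxonomy.

```python
# Hypothetical sketch of harm-tiered moderation: removal is reserved for
# categories with broad agreement; disputed claims get context, not takedown.
ALWAYS_REMOVE = {"child_exploitation", "terrorism", "incitement_to_violence"}

def moderation_action(category, disputed=False, may_cause_physical_harm=False):
    if category in ALWAYS_REMOVE or may_cause_physical_harm:
        return "remove"
    if disputed:
        # The Community Notes-style path: add a flag, keep the content up.
        return "label_with_context"
    return "allow"
```

The design choice mirrors the conversation: the "is it misinformation?" question is rephrased as "does it lead to physical harm?", and everything else becomes a user-experience question of flagging.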
1:21:38
You know, there are so many things, especially around the pandemic, but there are other topics too, where there's just deep disagreement, fueled by politics, about what is and isn't harmful. There's even just the degree to which a virus is harmful, and the degree to which the vaccines that respond to the virus are harmful.
1:22:00
There's almost a political divide on that. And so how do you make decisions about that, where half the country in the United States, or some large fraction of the world, has very different views from another part of the world?
1:22:19
Is there a way through it, I mean, the moderation of this? / I think we,
1:22:25
it's very difficult to
1:22:28
just abstain. But I think we should be clear about which of these things are actual safety concerns and which ones are a matter of preference in terms of how people want information flagged for them. So we did recently introduce something that allows people
1:22:45
to have fact-checking not affect the distribution of what shows up in their products. Okay, a bunch of people don't trust who the fact-checkers are? All right, well, you can turn that off if you want. But if the content violates some policy, like it's inciting violence or something like that, it's still not going to be allowed. So I think that you want to honor people's preferences on that as much as possible. But look, I mean, this is really difficult stuff. I think it's really hard to know
1:23:15
where to draw the line on what is fact and what is opinion, because the nature of science is that nothing is ever 100% known for certain. You can disprove certain things, but you're constantly testing new hypotheses and scrutinizing frameworks that have been long held, and every once in a while you throw out something that was working for a very long period of time. And it's very difficult. But I think that just because it's very hard, and just because
1:23:45
there are edge cases, doesn't mean that you should not try to give people what they're looking for
1:23:51
as well.
1:23:53
Let me ask about something you've faced in terms of moderation: pressure from different sources, pressure from governments. I want to ask how to withstand that pressure, for a world where AI moderation starts becoming a thing too. So what's Meta's approach to
1:24:19
resisting the pressure from governments and other interest groups in terms of what to moderate and not?
1:24:27
I don't know that there's a one-size-fits-all answer to that. I think we basically have principles around wanting to allow people to express as much as possible, but
1:24:39
we have developed clear categories of things that we think are wrong that we don't want on our services, and we build tools to try to moderate those. So then the question is, okay, what do you do when a government says that they don't want something on the service? And we have a bunch of principles around how we deal with that. Because on the one hand, if there's a democratically elected
1:25:09
government, and people around the world just have different values in different places, then should we, as a California-based company, tell them that something they have decided is unacceptable actually needs to be able to be expressed? I mean, I think there's a certain amount of hubris in that. But then I think there are other cases
1:25:39
where, you know, it's a little more autocratic, and you have a dictator leader who's just trying to crack down on dissent, and the people in the country are really not aligned with that. It's not necessarily against their culture, but the person who's leading it is just trying to push in a certain direction. These are very complex questions, but I think it's difficult to have a one-size-fits-all
1:26:10
approach to it. But in general, we're pretty active in advocating and pushing back on requests to take things down. But honestly, a request to censor things is one thing, and that's obviously bad, but where we draw a much harder line is on requests for access to information, right? Because, you know, if you get told that you can't
1:26:39
say something, I mean, that's bad, right? That obviously violates your
1:26:48
sense of freedom of expression at some level. But a government getting access to data in a way that seems like it would be unlawful in our country exposes people to real physical harm, and that's something that, in general, we take very seriously. And that flows through all of our policies in a lot of ways, right? By the time you're actually litigating
1:27:17
with a government or pushing back on them, that's pretty late in the funnel. I'd say a bunch of this stuff starts a lot higher up, in the decision of where we put data centers. There are a lot of countries where we may have a lot of people using the service, and it might be good for the service in some ways, good for those people, if we could reduce the latency by having a data center near them. But, you know, for whatever reason, we just feel like, hey,
1:27:47
this government does not have a good track record on basically not trying to get access to people's data. And at the end of the day, I mean, if you put a data center in a country and the government wants to get access to people's data, then they do, at the end of the day, have the option of having people show up with guns and taking it by force. So I think that there are a lot of decisions that go into how you architect the systems
1:28:15
years in advance of these actual confrontations that end up being really important.
1:28:22
So you put the protection of people's data as a very, very high priority.
1:28:28
It's that I think there are more harms that can be associated with that, and that ends up being a more critical thing to defend against governments on. Whereas, you know, if another government has a different view of what should be acceptable speech in their country, especially if it's a democratically
1:28:45
elected government, then I think that there's a certain amount of deference that you should have to that.
1:28:51
So that's speaking more to the direct harm that's possible when you
1:28:55
give governments access to data.
1:28:57
But if we look at the United States, at the more nuanced kind of pressure to censor, not even orders to censor, but pressure to censor from political entities, which has received quite a bit of attention in the United States. Maybe one way to
1:29:15
ask that question is: if you've seen the Twitter Files, what have you learned from the kind of pressure from US government agencies that was seen in the Twitter Files, and what do you do with that kind of
1:29:30
pressure?
1:29:32
You know, I've seen it. It's really hard from the outside to know exactly what happened in each of these cases. You know, we've obviously been
1:29:44
in a bunch of our own cases where,
1:29:48
you know, where
1:29:50
agencies or different folks will just say, hey, here's a threat that we're aware of, you should be aware of this too. It's not really pressure so much as it is just flagging something that our security systems should be on alert about. I get how some people could think of it as that, but at the end of the day,
1:30:14
it's our call on how to handle that. But I mean, I just, you know, in terms of running these services, want to have access to as much information about what people think adversaries might be trying to do as possible.
1:30:25
So you don't feel like there would be consequences if, you know, anybody, the CIA, the FBI, a political party, the Democrats or the Republicans, or highly powerful political figures, write emails? You don't feel pressure from
1:30:44
The situation is, there's so much pressure from all sides that I'm not sure any specific thing that someone says is really adding that much more to the mix. There are obviously a lot of people who think that we should be censoring more content, and there are a lot of people who think we should be censoring less content. There are, as you say, all kinds of different groups that are involved in these debates, right? So there are the kind of elected officials and politicians themselves,
1:31:14
there are the agencies, but there's also the media, there are activist groups. And this is not a US-specific thing; there are groups all over the world, in every country, that bring different values.
1:31:30
So it's just
1:31:31
a very active debate, and I understand it, right? I mean, these kinds of questions get to some of the most important social debates that are being had. So,
1:31:45
it gets back to the question of truth, because
1:31:48
for a lot of these things, they haven't yet been hardened into a single truth, and society is sort of trying to hash out what we think is right on certain issues. Maybe in a few hundred years, everyone will look back and say, hey, wasn't it obvious that it should have been this? But, you know, for now we're kind of in that meat grinder, working through it. So,
1:32:13
these are all very complicated, and
1:32:18
you know, some people raise concerns in good faith and just say, hey, this is something that I want to flag for you to think about. Certain people, I certainly think, come at things with somewhat more of a punitive or vengeful view: like, I want you to do this thing, and if you don't, then I'm going to try to make your life difficult in a lot of other ways. But,
1:32:43
I don't know, this is one of the most pressurized debates, I think, in society. So I just think that there are so many people and different forces trying to apply pressure from different sides that I don't think you can make decisions based on trying to make people happy. I think you just have to do what you think is the right balance and accept that people are going to be upset no matter where you come out on that.
1:33:07
Yeah, I like that: pressurized debate. So how has your view of freedom of
1:33:12
speech evolved over the years,
1:33:18
and now with AI,
1:33:21
where the freedom might apply not just to the humans, but to the personalized agents, as you've spoken about them?
1:33:30
So, yeah, I've probably got a somewhat more nuanced view, just because, I mean, I come at this from an obviously very pro-freedom-of-expression standpoint. I don't think you build a service like this, that gives people tools to express themselves, unless you think that people expressing themselves at scale is a good thing, right? So I didn't get into this to try to prevent people
1:33:50
from expressing anything; I want to give people tools so they can express as much as possible. And then I think it's become clear that there are certain categories of things that we've talked about, that I think almost everyone accepts are bad and that no one wants, and that are illegal even in countries like the US where, you know, you have the First Amendment, which is very protective of enabling speech. It's like, you're still not allowed to do things that are going to immediately incite violence or violate people's intellectual
1:34:20
property or things like that. So beyond those, there's also a very active core of
1:34:27
just active disagreements in society, where some people may think that something is true or false, the other side might think it's the opposite, or it's just unsettled, right? And those are some of the most difficult to handle, like we've talked about. But
1:34:46
one of the lessons that I feel like I've learned, is that
1:34:50
A lot of times.
1:34:53
When you can.
1:34:55
the best way to handle this stuff more practically is not in terms of answering the question of, should this be allowed, but just, like,
1:35:05
what
1:35:07
is the best way to deal with someone being a jerk? Is this person basically just showing repeat behavior of causing a lot of issues? So, looking at it more at that level,
1:35:25
and its effect on the broader community's health. / The community health, yeah. It's tricky, though, because, like, how do you know? There could be people
1:35:33
that have a very controversial viewpoint that turns out to have a positive long-term effect on the health of the community, because it challenges the community.
1:35:42
That's true, absolutely. Yeah, and I think you want to be careful about that. I'm not sure I'm expressing this very clearly, because I certainly agree with your point there. My point isn't that we should not have people on our services that are being controversial; that's certainly not what I mean to say. It's that often, I
1:36:03
think
1:36:05
it's not just by looking at a specific example of speech that it's most effective to handle this stuff. And I think often you don't want to make specific, binary decisions of, this is allowed or this isn't. I mean, we talked about fact-checking, and Twitter's Community Notes thing, I think, is another good example. It's not a question of, is this allowed or not; it's just a question of adding more context to the thing, and I think that's helpful. So in the context of AI,
1:36:34
which is what you were asking about, there are lots of ways that an AI can be helpful here. With an AI, it's less about censorship, right? It's more about, what is the most productive answer to a question? You know, there's one case study that I was reviewing with the team, where someone asked,
1:36:59
can you explain to me how to 3D print a gun? And
1:37:05
one proposed response is like, no, I can't talk about that, basically just shutting it down immediately, which I think is some of what you see: as a large language model, I'm not allowed to talk about, you know, whatever. But there's another response that's just like, hey, you know, I don't think that's a good idea, and in a lot of countries, including the US, 3D printing guns is illegal, or whatever the factual thing is. Okay, that's actually a respectful and
1:37:34
informative answer, and I may not have known that specific thing. And so there are different ways to handle this. You can either assume good intent, like maybe the person didn't know and I'm just going to help educate them, or you can come at it as, no, I need to shut this thing down immediately, right? I just am not going to talk about this. And there will be times where you need to do that, but I actually think having a
1:38:04
somewhat more informative approach, where you generally assume good intent from people, is probably a better balance to be on for as many things as you can be. You're not going to do that for everything. But you were asking about how I approach this and am thinking about this as it relates to AI, and I think that's a big difference in how to handle sensitive content across these different modes.
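The two response styles being contrasted, a blanket shutdown versus an informative, good-intent decline, can be sketched as a tiny function. The function name and strings are hypothetical illustrations, not actual model outputs or Meta's implementation.

```python
def respond_to_sensitive_request(topic, legal_context=None):
    """Sketch of the two strategies: with no factual context available,
    fall back to the 'shut it down' style; otherwise decline while assuming
    good intent and adding the relevant fact."""
    if legal_context is None:
        return "I can't talk about that."  # style A: blanket refusal
    # Style B: respectful, informative decline.
    return (f"I don't think that's a good idea: {legal_context} "
            "Is there something related I can help you with?")
```

In practice this choice would be made by alignment training rather than an explicit branch; the sketch just makes the product-design trade-off concrete.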
1:38:34
I have to ask: there are rumors you might be working on a social network that's text-based, that might be a competitor to Twitter, codenamed P92. Is there something you can say about those rumors?
1:38:48
There is a project, you know, I've always thought that sort of a text-based kind of information utility
1:38:58
is just a really important thing to society. And for whatever reason, I feel like Twitter has not lived up to what I would have thought its full potential should be. And I think Elon thinks that, right? That's probably one of the reasons why he bought it. And
1:39:17
I do think there are ways to consider alternative approaches to this, and one that I think is potentially interesting is this open, federated approach that you're seeing with Mastodon, and you're seeing a little bit with Bluesky. And
1:39:34
I think that it's possible that something that melds some of those ideas with the graph and identity system that people have already cultivated on Instagram could be a very welcome contribution to that space. But we work on a lot of things all the time, too, so I don't want to get ahead of myself. We have projects that explore a lot of different things, and this is certainly one that I think could be interesting, but
1:40:00
this is. / So what's the release
1:40:02
or launch date? Or, yeah, what's the official website?
1:40:08
We don't have that yet. / Okay. / But look, I mean, I don't know exactly how this is going to turn out. What I can say is, yeah, there are some people working on this, but I think there's something there that's interesting to explore.
1:40:24
It would be interesting to ask this question and throw Twitter into the mix: the landscape of social networks that
1:40:32
is Facebook, that is Instagram, that is WhatsApp,
1:40:38
and then think of a text-based social network. When you look at that landscape, what are the interesting differences to you? Why do we have these different flavors? What were the needs, what are the use cases, what are the products? What is the aspect of them that creates a fulfilling human experience and a connection between humans that is somehow distinct? / Well, I
1:40:59
think text is very accessible for people to transmit ideas and to have back-and-forth exchanges.
1:41:08
So it, I think, ends up being a good format for discussion in a lot of ways, uniquely good, right? If you look at some of the other formats, or other networks that have focused on one type of content: TikTok is obviously huge, right? And there are comments on TikTok, but I think the architecture of the service is very clearly that the video is the primary thing, and the comments come after that.
1:41:39
But I think one of the unique pieces of having text-based content is that the comments can also be first-class, and that makes it so that conversations can just filter and fork into all these different directions, in a way that can be super useful. So I know there are a lot of things that are really awesome about the experience. It just always struck me, I always thought that, you know, Twitter should have a billion people using it, or whatever the
1:42:07
Thing, is that, that that basically ends up being in that space and for whatever combination of reasons, again, it's these are just companies are complex organisms. And it's very hard to diagnose this stuff from the outside.
1:42:21
Why doesn't Twitter, why doesn't a text-based,
1:42:25
comments-as-first-class social network have a billion users?
Well, I just think it's hard to build these companies. It's not that every idea automatically gets a billion people; it's that that idea, coupled with good execution, should get there. But look, we hit certain thresholds over time where we kind of plateaued early on, and it wasn't clear that we were going to reach 100 million people on
1:42:54
Facebook. And then we got really good at dialing in internationalization and helping the service grow in different countries, and that was a whole competence that we needed to develop, helping people basically spread the service to their friends. Once we got very good at that, that was one of the things that made me feel like, hey, if Instagram joined us early on, I felt like we could help grow that quickly, and same with WhatsApp. And I think that's been a core competence that we've developed
1:43:24
and been able to execute on, and others have too, right? ByteDance obviously has done a very good job with TikTok and has reached more than a billion people there. But it's certainly not automatic, right? I think you need a certain level of execution to basically get there. And for whatever reason, I think Twitter has this great idea and sort of magic in the service, but they just haven't quite cracked that piece yet. And I think that's
1:43:55
Maybe it's that you're seeing all these other things, whether it's Mastodon or Bluesky, that I think are maybe just different cuts of the same thing. But, you know, through the last generation of social media overall, one of the interesting experiments that I think should get run at larger scale is: what happens if there's somewhat more decentralized control, if the stack is more open throughout? I've just been pretty fascinated by that and seeing how that works.
1:44:26
To some degree, end-to-end encryption on WhatsApp, and as we bring it to other services, provides an element of it, because it pushes the service really out to the edges. I mean, the server part of this that we run for WhatsApp is relatively very thin compared to what we do on Facebook or Instagram, and much more of the complexity is in how the apps kind of negotiate with each other to pass information in a fully end-to-end encrypted way.
1:44:55
But I don't know, I think that is a good model. I think it puts more power in individuals' hands, and there are a lot of benefits to it, if you can make it happen. Again, this is all pretty speculative. I mean, I think it's hard from the outside to know why anything does or doesn't work until you kind of take a run at it. So I think it's an interesting thing to experiment with, but I don't really know where this one is going to go.
1:45:21
So since we were talking about Twitter,
1:45:24
Elon Musk had, I think, a few harsh words that I wish he didn't say. So let me ask, in the name of camaraderie: what do you think Elon is doing well with Twitter? And, as a person who has run social networks for a long time, Facebook, Instagram, WhatsApp, what can he do better? What can he improve on, with that
1:45:53
text-based social network?
1:45:55
Gosh, it's always very difficult to offer specific critiques from the outside, before you get into this, because I think one thing that I've learned is that everyone has opinions on what you should do, and when you're running the company, you see a lot of specific nuances on things that are not apparent externally. And I often
1:46:19
think that some of the discourse around us could be better if there were more space for acknowledging that there are certain things that we're seeing internally that guide what we're doing. But I don't know. I mean, since you asked what is going well:
1:46:43
You know, I do think that Elon led a push early on to make Twitter a lot leaner. And I think that,
1:46:56
you know, you can agree or disagree with exactly all the tactics and how they did that. Obviously, every leader has their own style for how you're going to execute dramatic changes if you need to make them. But a lot of the specific principles that he pushed on, basically trying to make the organization more technical, decreasing the distance between the engineers at the company and him,
1:47:26
fewer layers of management: I think those were generally good changes. And I also think it was probably good for the industry that he made those changes, because my sense is that there were a lot of other people who thought those were good changes but who may have been a little shy about doing them. And in my conversations with other founders, and in how people have reacted to
1:47:56
things that we've done, what I've heard from a lot of folks is just, hey, when people see what Elon is doing, or when I wrote the letter outlining the organizational changes that I wanted to make back in March, that gives people the ability to think through how to shape their organizations in a way that hopefully can be good for the industry and make all these companies more productive over time. So,
1:48:26
so I'm glad that was one where I think he was quite ahead of a bunch of the other companies in what he was doing there. And again, from the outside it's very hard to know: okay, did you cut too much? Did you not cut enough? I don't think it's my place to opine on that. And you asked for a positive framing of the question, what do I admire, what do I think went well. But I think that, certainly, his actions
1:48:57
led me, and I think a lot of other folks in the industry, to think about: hey, are we doing this as much as we should? Could we make our companies better by pushing on some of these same principles?
1:49:09
Well, the two of you are at the top of the world in terms of leading the development of tech, and I wish there was more, both ways, camaraderie and kindness, more love in the world, because love is the answer. But let me ask on the
1:49:27
point of efficiency. You recently announced multiple stages of layoffs at Meta. What are the most painful aspects of this process, given the painful effects it has on those people's lives?
1:49:43
Yeah, I mean, that's
1:49:45
it. You basically have a significant number of people for whom this is just not the end of their time at Meta that they, or I, would have hoped for when they joined the company. And, I mean, running a company, people are constantly joining and leaving the company for different reasons.
1:50:16
And layoffs are uniquely challenging and tough, in that you have a lot of people leaving for reasons that aren't connected to their own performance, or the culture not being a fit at that point. It's really just a kind of strategy decision, and sometimes financially required, but not fully. And in our case, especially with the changes
1:50:45
that we made this year, a lot of it was more culturally and strategically driven by this push where I wanted us to become a stronger technology company, with more of a focus on building more technically and more of a focus on building higher-quality products faster. And I just view the external world as quite volatile right now, and I wanted to make sure that we had a stable position to be able to continue investing in these long-term ambitions
1:51:15
that we have around continuing to push AI forward and continuing to push forward all the metaverse work. And in order to do that, in light of the pretty big thrash that we had seen over the last 18 months, some of it macroeconomically induced, some of it specific, some of it competitively induced, some of it just because of bad decisions, right, or things that we got wrong, I just decided that we needed to get to a point where we were a lot leaner.
1:51:45
But look, it's one thing to decide that at a high level; then the question is, how do you execute it as compassionately as possible? And there's no good way, there's no perfect way for sure, and it's going to be tough no matter what. But as a leadership team here, we've certainly spent a lot of time just thinking, okay, given that this is a thing that sucks, what is the most compassionate way that we can do this?
1:52:13
And that's what we've tried to do.
1:52:15
And you mentioned there's an increased focus on engineering, on tech: technology teams, tech-focused teams, on building products.
1:52:31
Yeah, I mean, I wanted to empower
1:52:34
engineers more, the people who are building things, the technical teams.
1:52:41
Part of that is making sure that the people who are building things aren't just the leaf nodes of the organization. I don't want eight levels of management and then the people actually doing the work. So we made changes so that you have individual contributor engineers reporting at almost every level up the stack, which I think is important, because running a company, one of the big questions is the latency of the information that you get. Now, we talked about this a bit earlier in terms of the joy
1:53:10
of the feedback that you get doing something like jiu-jitsu compared to running a long-term project. But part of the art of running a company is trying to constantly re-engineer it so that your feedback loops get shorter, so you can learn faster, and part of the way that you do that is by that kind of thing: every layer that you have in the organization means that information might need to get reviewed before it gets to you. And I think making it so that the people doing the work are as close
1:53:40
as possible to you is pretty important. So there's that. And I think over time companies just build up very large support functions that are not doing the kind of core technical work. Those functions are very important, but I think having them in the right proportion is important. If you try to do good work but you don't have the right marketing team or the right legal advice, you're going to make
1:54:10
some pretty big blunders. But at the same time, if you have too big of teams in some of these support roles, then that might make it so that you're too conservative, or you move a lot slower than you should otherwise. Those are just examples, but how you find that balance is really tough.
Yeah, it's a constant equilibrium that you're
1:54:40
searching for. How many managers to have? What are the pros and cons of managers?
Well, I believe a lot in management. I think there are some people who think that it doesn't matter as much, but look, we have a lot of younger people at the company for whom this is their first job, and people need to grow and learn in their career, and all that stuff is important. But here's one mathematical way to look at it.
1:55:04
At the beginning of this, we asked our people team what the average number of reports a manager had was, and I think it was around three, maybe three to four, but closer to three. I was like, wow: best practice is that a manager can manage seven or eight people. But there was a reason why it was closer to three. It was because we were growing so quickly,
1:55:33
right? And when you're hiring so many people so quickly, that means you need managers who have capacity to onboard new people. And also, if you have a new manager, you may not want them to have seven direct reports immediately, because you want them to ramp up. But the thing is, going forward, I don't want us to actually hire that many people that quickly, right? So I actually think we'll just do better work if we have more constraints and we're a leaner organization. So in a world where we're not adding so many people as quickly,
1:56:03
is it as valuable to have a lot of managers who have extra capacity waiting for new people? No, right? So now we can sort of defragment the organization and get to a place where the average is closer to that seven or eight, and it just ends up being a somewhat more compact management structure, which decreases the latency on information going up and down the chain, and I think empowers people more. But I mean, that's an example that I don't think undervalues the importance of
1:56:33
management and the kind of personal growth or coaching that people need in order to do their jobs well. It's just that, realistically, we're not going to hire as many people going forward, so I think you need a different
1:56:45
structure.
This whole incredible hierarchy and network of humans that makes up a company is fascinating.
Oh yeah.
Yeah, how do you hire great teams? Now, with the focus on engineering and technical
1:57:02
teams, how do you hire great engineers and great members of technical teams?
1:57:10
Are you asking how you select or how you attract them?
Both. But
1:57:14
select, I think. I think attract is: work on cool stuff and have a
1:57:19
vision, one that people think is great, and have a track record that people think means you're actually going to do
1:57:24
it. Yeah, to me the select seems like more of the art form, more of the tricky thing: to select the people that
1:57:32
fit the culture and can get integrated the most effectively, and so on. And maybe, especially when they're young, to see the magic through the resumes, the paperwork, and all this kind of stuff, to see that there's a special human there that would do incredible
1:57:52
work. So there are lots of different cuts on this question. I mean, I think when an organization is growing quickly, one of the big questions that teams
1:58:02
face is: do I hire this person who's in front of me now, because they seem good, or do I hold out to get someone who's even better? And the heuristic that I always focused on for my own direct hiring, and that I think works when you recurse it through the organization, is that you should only hire someone to be on your team if you would be happy working for them in an alternate universe. I think that kind of works, and
1:58:32
that's basically how I've tried to build my team. You know, I'm not in a rush to not be running the company, but I think in an alternate universe where one of these other folks was running the company, I'd be happy to work for them. I feel like I'd learn from them; I respect their general judgment; they're all very insightful; and they have good values. And that gives you a rubric you can apply at every layer, and I think if you apply that at every layer in the organization, then you'll have a pretty
1:59:02
strong organization.
1:59:06
Okay, in an organization that's not growing as quickly, the questions might be a little different, though.
1:59:12
And then
1:59:14
you asked about young people specifically, like people out of college. One of the things that we see, and it's a pretty basic lesson, is that we have a much better sense of who the best people are after they've interned at the company for a couple of months than by looking at a resume or a short interview loop. Obviously the in-person
1:59:36
feel that you get from someone probably tells you more than the resume, and you can do some basic skills assessment, but a lot of the stuff really is cultural. People thrive in different environments and on different teams, even within a specific company. And the people who come for even a short period of time over a summer and do a great job here, you know that they're going to be great if they came and joined full-time. That's
2:00:06
one of the reasons why we've invested so much in internships: it's basically a very useful sorting function, both for us and for the people who want to try out the company.
You mentioned in person. What do you think about remote work, a topic that's been discussed extensively over the past few years because of the pandemic?
Yeah, I mean, I think it's a thing that's here to stay, but I think there's value in both, right?
2:00:37
You know, I wouldn't want to run a fully remote company yet, at least.
I think there's an asterisk on that, which is some of the other stuff you're working on.
Yeah, exactly. All the metaverse work and the ability to feel like you're truly present no matter where you are. I think once you have that all dialed in, then we may one day reach a point where it really just doesn't matter as much where you are physically.
2:01:06
but,
2:01:09
I don't know today it today, it
2:01:11
still does, right? So, for people who have special skills and want to live in a place where we don't have an office, are we better off having them in the company? Absolutely, right. And there are a lot of people who work at the company for several years and, you know, build up the relationships internally and kind of have the trust and have a sense of how the company works. Can they go work remotely now if they want and still do it as effectively? And we've
2:01:39
done all these studies that show, does that affect their performance? It does not. But for the new folks who are joining, and for people who are earlier in their career who need to learn how to solve certain problems and need to get ramped up on the culture, you know, when you're working through really complicated problems, you don't just want the formal meeting; you want to be able to brainstorm when you're walking in the hallway together after the meeting.
2:02:09
I don't know, we just haven't replaced those kinds of in-person
2:02:16
dynamics yet with anything remote. So,
2:02:20
yeah, there's a magic to the in-person. We'll talk about this a little bit more, but I'm really excited by the possibilities in the next few years in virtual reality and mixed reality that are possible with high-resolution scans. I mean, as a person who loves in-person interaction, who likes to do this podcast in person, it would be incredible to achieve the level of realism I've gotten the chance to witness. But let
2:02:46
me ask about that. I got a chance to try the Quest 3 headset, and it's amazing. You've announced it, and you'll give some more details in the fall. When is it getting released again? I forgot.
You mentioned we'll give more details at Connect, but it's coming. It's coming this fall.
Okay. So it's priced at $499,
2:03:16
and what features are you most excited about there?
2:03:20
There are basically two big new things that we've added to Quest 3 over Quest 2. The first is high-resolution mixed reality. The basic idea here is that in virtual reality you have the headset and all the pixels are virtual, so you're basically immersed in a different world. Mixed reality is where you see the physical world around
2:03:46
you, and you can place virtual objects in it, whether that's a screen to watch a movie, or a projection of your virtual desktop, or you're playing a game where zombies are coming out through the wall and you need to shoot them, or we're playing Dungeons and Dragons or some board game and we just have a virtual version of the board in front of us while we're sitting here. All of that's possible in mixed reality, and I think that is going to be the next big capability on top of virtual reality.
2:04:12
It is done so well,
2:04:16
I have to say, as a person who experienced it today. The zombies having a full awareness of the environment, and integrating that environment into the way they run at you while they try to kill you: the mixed reality, the passthrough, is really, really well done. And the fact that it's only $500 is really well done.
Thank you.
2:04:39
Yeah, I'm super excited about it. We've put a lot of work into
2:04:46
making the device both as good as possible and as affordable as possible, because a big part of our mission and ethos here is that we want people to be able to connect with each other. We want to reach and serve a lot of people; we want to bring this technology to everyone. So we're not just trying to serve an elite, wealthy crowd. We really want this to be accessible. That is in a lot of ways an extremely hard technical problem, because
2:05:16
we don't just have the ability to put in an unlimited amount of hardware. We needed to basically deliver something that works really well, but in an affordable package. We started with Quest Pro last year, which was $1,500, and now we've lowered the price to $1,000. But in a lot of ways, the mixed reality in Quest 3 is at an even better and more advanced level than what we were able to deliver in Quest Pro. So I'm really proud of where we
2:05:46
are with Quest 3 on that. It's going to work with all of the virtual reality titles and everything that existed there, so people who want to play fully immersive games, social experiences, fitness, all that stuff will work. But now you'll also get mixed reality too, which I think people really like, because sometimes you want to be super immersed in a game, but a lot of the time, especially when you're moving around, if you're active, if you're doing some fitness experience,
2:06:16
yeah, let's say you're doing boxing or something, you kind of want to be able to see the room around you, so that you know you're not going to punch a lamp or something like that. And I don't know if you got to try this experience, but we basically have a fun little demo that we put together where you're in a conference room or your living room, and you have the guy there, and you're boxing him, and you're fighting him, and it's
2:06:41
like all the other people are there too.
I got a chance to do that. And all the people are there.
2:06:47
It's like that guy is right there.
Yeah, he's a good threatening presence. And the other humans, with the passthrough, you're seeing them all, so they can cheer you on, they can make fun of you, like friends of mine did. It's really a compelling experience. And VR is really interesting too, but this is something else almost, because this is integrated into your life, into your world.
2:07:13
Yeah, so I think it's a completely new capability that will unlock a lot of different content, and I think it'll also just make the experience more comfortable for a set of people who didn't want only fully immersive experiences. If you want experiences that are grounded in your living room, in the physical world around you, now you'll be able to have that too, and I'm pretty excited about that.
I really liked how it added windows to a room with no windows.
2:07:43
Did you see the aquarium one, where the shark swims up?
Or was that just the zombie one? You don't necessarily want windows added to your living room that zombies come out of, but I guess that comes with playing that game. I enjoyed
2:07:54
it because you could see the nature outside, and as a person in a room that doesn't have windows, it's just nice to have nature.
2:08:02
Yeah, well, even
2:08:04
if it's a mixed reality setting, and I know it's a zombie game, there's a Zen, nature aspect to being able to
2:08:13
look outside and alter your environment as you know it.
2:08:18
Yeah, you know,
2:08:20
there will probably be better, more Zen ways to do that than this game you're describing, but you're right that the basic idea of having your physical environment on passthrough, and then being able to bring in different elements, I think is going to be super powerful. And in some ways, I think mixed reality is also a predecessor to the AR glasses that we'll eventually get, which are not the goggles form factor of the current
2:08:48
generation of headsets that people are making. But I think a lot of the experiences that developers are making for mixed reality, where basically you just have a kind of hologram that you're putting in the world, will hopefully apply once we get the AR glasses, though that's got its own whole set of challenges.
2:09:07
Well, the headsets are already smaller than the previous version.
Oh yeah,
2:09:10
40% thinner. And the other thing that I think is good about it: mixed reality was the first big thing, and the second is that it's
2:09:18
just a great VR headset. It's got 2x the graphics processing power, 40% sharper screens, 40% thinner, more comfortable, better strap architecture, all that stuff. If you liked Quest 2, all the content that you might have played on Quest 2 is just going to be sharper automatically and look better on this. So I think people are really going to like it.
Yeah. So this
2:09:42
fall.
This fall. I have to ask: Apple just
2:09:48
announced a mixed reality headset called Vision Pro, for $3,500, available in early 2024. What do you think about it?
2:09:59
Well, I saw the materials when they launched. I haven't gotten a chance to play with it yet, so kind of take everything with a grain of salt. But a few high-level thoughts. First,
2:10:12
you know, I do think that this is
2:10:15
a certain level of validation for the category, right? We were the primary folks out there before, saying, hey, I think that virtual reality, augmented reality, mixed reality, this is going to be a big part of the next computing platform. I think having Apple come in and share that vision
2:10:41
will make a lot of people who are fans of their products really consider that. And then, of course, there's the $3,500 price. On the one hand, I get it, with all the stuff that they're trying to pack in there. On the other hand, a lot of people aren't going to find that to be affordable. So I think there's a chance that them coming in actually increases demand for the overall space, and that Quest 3 is actually the primary beneficiary of that,
2:11:11
because a lot of the people who might say, hey, I'm going to give another consideration to this, or, now I understand what mixed reality is, may decide that Quest 3 is the best one on the market that they can afford. And it's great, also, right? And in our own way, I think there are a lot of features that we have where we're leading. So I think that could be quite good.
2:11:42
And then, obviously, over time, the companies are just focused on somewhat different things, right? Apple has always, I think, focused on building really high-end things, whereas our focus has been on a more democratic ethos: we want to build things that are accessible to a wider number of people. We've sold tens of millions of Quest devices.
2:12:13
My understanding, just based on rumors, I don't have any special knowledge on this, is that Apple is building about one million of their device. So just in terms of what you'd expect in sales numbers, I just think that Quest is going to be the primary thing that people in the market will continue using for the foreseeable future. And then, obviously, over the long term, it's up to the companies to see how well we each execute at the different things that we're doing. But we kind of come at it from
2:12:42
different places. We're very focused on social interaction, communication, being more active, right? There's fitness, there's gaming, there are those things. Whereas I think a lot of the use cases that you saw in Apple's launch
2:13:00
material were more
2:13:01
around people sitting, people looking at screens, which are great. I think that you will replace your laptop over time with a headset. But in terms
2:13:12
of the use cases that the companies are going after, they're a bit different for where we are right now.
2:13:20
Yeah. So gaming wasn't a big part of the presentation, which is interesting.
2:13:26
Gaming feels like such a big part of mixed reality; it was interesting to see it missing from the
2:13:34
presentation.
Well, I mean, look, there are certain design trade-offs in this, where they made this point about not wanting to have controllers. On the one hand, there's a certain elegance about just being able to navigate the system with eye gaze and hand tracking. And by the way, you'll be able to just navigate Quest with your hands too, if that's what you want.
2:13:57
One of the
2:13:57
things I should mention is the capability of the cameras, with computer vision, to detect certain aspects of the hand, allowing you to have a controller that doesn't have that ring thing.
2:14:09
Yeah. The hand tracking in Quest 3, and the controller tracking, is a big step up from the last generation. And one of the demos that we have is basically an MR experience teaching you how to play piano, where it basically highlights the notes that you need to play, and it's all just hands, no
2:14:26
controllers. But
2:14:28
I think if you care about
2:14:29
gaming having a controller allows you to have a more tactile, feel it allows you to capture fine motor movement, much more precisely than then, what you can do with hands without something that you're touching. So, again, I think it's there are certain questions which are just around, what use cases. Are you optimizing for? I think if you want to play games then,
2:14:56
I think that that then I think you want you want to design the system in a different way and we're more focused on kind of social experiences entertainment experiences. Whereas if what you want is to make sure that the text that you read on a screen as as crisp as possible, then you need to make the design and cost trade-offs that they made. That that lead you to making a thirty, five hundred dollar device. So I think there is a use case for that for sure. But I just think that there
2:15:26
as a company, we've basically made different design trade-offs to get to the use cases that we're trying to
2:15:33
serve. There's a lot of other stuff I would love to talk to you about regarding the metaverse, especially the codec avatars, which I've gotten to experience a lot of different variations of recently, that I'm really, really
2:15:46
excited about. Excited to talk about that
2:15:48
too. I'll have to wait a little bit because
2:15:54
Well, I think there's a lot more to show off in that regard. But let me step back to AI. I think we've mentioned it a little bit, but I'd like to linger on this question that folks like Eliezer Yudkowsky and others worry about: the serious, existential threats of AI that have been reinvigorated now with the rapid development of AI systems. Do you worry about
2:16:23
the existential risks of AI, as Eliezer does, about the alignment problem, about this getting out of hand?
2:16:30
Any time there's a number of serious people raising a concern that is that existential about something you're involved with, I think you have to think about it, right? So I've spent quite a bit of time thinking about it from that perspective.
2:16:49
Where I've basically come out on this, for now, is that over time we need to think about this even more as we approach something that could be closer to superintelligence. I just think it's pretty clear to anyone working on these projects today that we're not there. And one of my concerns is, and we spent a fair amount of time on this before, but
2:17:16
there are more
2:17:20
mundane, if that's the right word, concerns that already exist, right, about people using AI tools to do harmful things of the type that we're already aware of, whether we're talking about fraud or scams or different things like that. And that's going to be a pretty big set of challenges that the companies working on this are going to need to grapple with,
2:17:46
regardless of whether there is an existential concern as well at some point down the road. So I do worry that, to some degree, people can get a little too focused
2:18:00
on some of the tail risks and then not do as good of a job as we need to on the things that you can be almost certain are going to come down the pipe as real risks that manifest themselves in the near term. So for me, I've spent most of my time on that, once I made
2:18:21
the realization that the size of the models that we're talking about now, in terms of what we're building, is quite far from the superintelligence-type concerns that people raise. But I think once we get a couple of steps closer to that, and as we do get closer, there are going to be some novel risks and issues about how we make sure that the systems are safe, for sure.
2:18:47
I guess here, just to take the conversation in a somewhat different direction: I think in some of these debates around safety, the concepts of intelligence and
2:19:01
autonomy, or like the "being" of the thing, get kind of conflated together. And I think it very well could be the case that you can scale intelligence quite far, but that may not manifest the safety concerns that people are citing. In the sense that, if you just look at human biology, we have our neocortex,
2:19:30
which is where all the thinking happens, right? But it's not really calling the shots at the end of the day. We have a much more primitive, old brain structure, for which our neocortex, this powerful machinery, is basically just a kind of prediction and reasoning engine to help our very simple brain
2:19:55
decide how to plan and do what it needs to do in order to achieve these very kind of basic impulses. And I think that you can think about some of the
2:20:07
development of intelligence along the same lines, where just like our neocortex doesn't have free will or autonomy, we might develop these wildly intelligent systems that are much more intelligent than our neocortex and have much more capacity, but, in the same way that our neocortex is sort of subservient and is used as a tool by our kind of simple impulses, I think it's not out of the question that very intelligent
2:20:36
systems that have the capacity to think will kind of act as that sort of extension, of a neocortex, doing that. So I think my own view is that where we really need to be careful is on the development of autonomy and how we think about that, because it's actually the case that relatively simple and unintelligent things that have runaway autonomy and just spread themselves can do a lot of harm. You know, we have a word for that: it's a virus,
2:21:06
right? I mean, it can be simple computer code that is not particularly intelligent but just spreads itself and does a lot of harm, biologically or in computers. And
2:21:20
I just think that these are somewhat separable things. And a lot of what I think we need to develop, when people talk about safety and responsibility, is really the governance on the autonomy that can be given to systems. And to me, if I were a policymaker thinking about this, I would really want to think about that distinction between these, where I think building intelligent systems can create a huge advance in terms of people's quality of
2:21:48
life and productivity growth in the economy. But it's the autonomy part of this that I think we really need to make progress on, how to govern these things responsibly, before we
2:22:03
build the capacity for them to make a lot of decisions on their own, or give them goals, or things like that. And that's a research problem. But I do think that, to some degree, these are somewhat separable things. I love the distinction between intelligence and autonomy, and the metaphor of the neocortex.
2:22:24
Let me ask about power. So, building superintelligent systems, even if it's not in the near term: I think Meta is one of the few companies, if not the main company, that will develop the superintelligent system, and you are the man at the head of this company. Building AGI might make you the most powerful man in the world. Do you worry that that power will
2:22:49
corrupt you?
2:22:53
What a question.
2:22:57
I mean, look, I think realistically this gets back to the open source things that we talked about before, which is: I don't think that the world will be best served by any small number of organizations
2:23:13
having this without it being something that is more broadly available. And I think if you look through history,
2:23:23
it's when there are these sorts of unipolar advances, and these power imbalances, that they tend to lead to kind of weird situations. So this is one of the reasons why I think open source is generally
2:23:40
the right approach. And, you know, I think it's a categorically different question today, when we're not close to superintelligence. I think there's a good chance that even once we get closer to superintelligence, open-sourcing remains the right approach, even though I think at that point it's a somewhat different debate. But part of that is that it is, I think, one of the best ways to ensure that the system is as secure and safe as possible, because it's not just about a lot of people having access to it; it's the scrutiny that
2:24:09
kind of comes with building an open source system. This is a pretty widely accepted thing about open source: you have the code out there, so anyone can see the vulnerabilities, anyone can kind of mess with it in different ways, people can spin off their own projects and experiment in a ton of different ways, and the net result of all of that
2:24:30
is that the systems just get
2:24:31
hardened and get to be a lot safer and more secure. So I think there's a chance,
2:24:39
a pretty good chance, that that ends up being the way this goes, and that having this be open both leads to a healthier development of the technology and also leads to a more balanced distribution of the technology, in ways that strike me as good values to aspire to.
2:25:03
So, to you, there are risks to open sourcing, but the benefits outweigh the risks.
2:25:09
It's interesting. I think you put it well, that there's a different discussion now than when we get closer to the development of superintelligence, about the benefits and risks of open sourcing.
2:25:25
Yeah. And to be clear, I feel quite confident in the assessment that open-sourcing models now is net positive. I think there's a good argument that in the future it will be too, even as you get closer to superintelligence.
2:25:39
I certainly have not decided on that yet, and I think it becomes a somewhat more complex set of questions that people will have time to debate, and that will also be informed by what happens between now and then. To make those decisions, we don't have to necessarily just debate that in theory right
2:25:54
now. What year do you think we'll have a superintelligence?
2:26:00
I don't know; I mean, that's pure speculation. I think it's very clear, to take a step back, that we had a big breakthrough in the last year, right, where the LLMs and diffusion models basically reached a scale where they're able to do some pretty interesting things. And then I think the question is what happens from here. Just to paint the two extremes:
2:26:24
on one side, it's like, okay, we just had one breakthrough; if we just have another breakthrough like that, or maybe two, then we can have something that's truly crazy, right, something that's just so much more advanced. And on that side of the argument, it's like, okay, well, maybe
2:26:45
we're only a couple of big steps away from reaching something that looks more like general intelligence. Okay, that's one side of the argument. And the other side, which is what we've historically seen a lot more, is that a breakthrough leads to, in that Gartner hype cycle, the hype, and then there's the trough of disillusionment after, when people think that there's a chance that, hey, okay, there was a big breakthrough,
2:27:15
so we're about to get another big breakthrough, and it's like, actually, you're not about to get another breakthrough; maybe you actually just have to sit with this one for a while. And it could be five years, could be 10 years, could be 15 years until you figure out the next big thing that needs to get figured out. But I think the fact that we just had this breakthrough
2:27:41
sort of means that we're at a point of very wide error bars on what happens next. Yeah, I think the traditional technical view, or looking at the industry, would suggest that we're not just going to stack breakthrough on top of breakthrough on top of breakthrough, like every six months or something, right now. I would guess that it will take somewhat longer in between these, but
2:28:10
I don't know. I tend to be pretty optimistic about breakthroughs too. So if you normalized for my normal optimism, then maybe it would be even slower than what I'm saying. But even within that, I'm not even opining on the question of how many breakthroughs are required to get to a general intelligence, because no one knows.
2:28:29
But this particular breakthrough was such a small step that resulted in such a big leap in performance as experienced by human
2:28:39
beings that it makes you think: wow, as we stumble across this very open world of research, will we stumble across another thing that will create a giant leap in performance?
2:28:56
And also, we don't know exactly at which stage it's really going to be impressive, because it feels like it's already encroaching on impressive levels of intelligence. You still didn't answer the question of what year we're going to have superintelligence. I'd like to hold you to that. No, I'm just kidding. But is there something you could say about the timeline as you think about the development of AGI, of superintelligent systems?
2:29:25
Sure. So I still don't think I have any particular insight on when a singular AI system that is a general intelligence will get created. But I think the one thing that most people in the discourse that I've seen about this haven't really grappled with is that we do seem to have
2:29:44
organizations and structures in the world that exhibit greater-than-human intelligence already. So one example is a company. It acts as an entity; it has a singular brand. Obviously it's a collection of people, but I certainly hope that Meta, with tens of thousands of people, makes smarter decisions than one person. I think it would be pretty bad if it didn't.
2:30:11
Another example, that I think is even more removed from the personification of intelligence which is often implied in some of these questions, is something like the stock market. The stock market takes inputs; it's a distributed system; it's like a cybernetic organism where probably millions of people around the world are basically voting every day by choosing what to invest in. But it's
2:30:41
like this organism, or structure, that is smarter than any individual, that we use to allocate capital as efficiently as possible around the world. And I do think that
2:31:00
this notion that there are already these cybernetic systems that are either melding
2:31:08
the intelligence of multiple people together, or melding the intelligence of multiple people and technology together, to form something which is dramatically more intelligent than any individual in the world, is something that seems to exist, and that we seem to be able to harness in a productive way for our society, as long as we basically build these structures in balance with each other. So
2:31:38
I don't know. I mean, that at least gives me hope that as we advance the technology, and I don't know how long exactly it's going to be, but you asked when this is going to exist: I think to some degree we already have many organizations in the world that are smarter than a single human, and that seems to be something that is generally productive in advancing humanity. And somehow the individual AI systems empower the individual humans, and the interaction between the humans, to make that collective intelligence machinery that you're referring to smarter. It's not
2:32:08
AIs becoming superintelligent; it's AI becoming the engine that's making the collective intelligence, which is primarily human, more
2:32:16
intelligent. Yeah, it's educating the
2:32:19
humans better, it's making them better informed, it's making it more efficient for them to communicate effectively and debate ideas, and, through that process, making the whole collective intelligence more and more intelligent, maybe faster than the individual AI systems that are trained on human data
2:32:38
are becoming. Maybe the collective intelligence of the human species might outpace the development of AI. Well, like I said, where's the balance in
2:32:47
here? Because, I mean, if a lot of the input that the systems are being trained on is basically coming from feedback from people, then a lot of the development does need to happen in human time, right? It's not like a machine
2:33:04
will just be able to go learn all the stuff about how people think about stuff. There's a cycle to how this needs to work.
2:33:11
This is an exciting world we're living in, and you're at the forefront of developing it. One of the ways you keep yourself humble, like we mentioned, is doing some really difficult challenges, mental and physical. One of those you've done very recently is the Murph Challenge, and you got a really good time. It's 100 pull-ups,
2:33:34
200 push-ups, and 300 squats, with a mile run before and a mile run after. You got under 40 minutes on that. What was the hardest part? I think a lot of people are very impressed; it's a very impressive time.
2:33:50
Yeah. It wasn't my best time, but anything under 40 minutes I'm happy with. It wasn't your best time? No, I think I've done it a little faster before, but not much. I mean,
2:34:05
among my friends, I did not win on Memorial Day; one of my friends actually did it several minutes faster than me. But just to clear up one thing, because I saw a bunch of questions about this on the internet: there are multiple ways to do the Murph Challenge. There's a kind of partitioned mode, where you do sets of pull-ups, push-ups, and squats together, and then there's unpartitioned, where you do the 100 pull-ups, then the 200 push-ups, then the 300 squats
2:34:35
in serial. And obviously, if you're doing them unpartitioned, then it takes longer to get through the 100 pull-ups, because any time you're resting in between the pull-ups, you're not also doing push-ups and squats. So yeah, my unpartitioned time would be quite a bit slower. But no, in the end of this,
2:34:58
first of all, I think it's a good way to honor Memorial Day, right? It's Lieutenant Murphy; this was one of his favorite exercises, and I just try to do it on Memorial Day each year, and it's a good workout. I got my older daughters to do it with me this time. My oldest daughter wants a weight vest, because she sees me doing it with a weight vest. I don't know if a seven-year-old should be using a weight vest
2:35:28
just to do pull-ups. A difficult question a parent must ask themselves. Yes. I was like, maybe I can make you a very light weight vest, but I don't think it's good for her. So she basically did a quarter Murph: she ran a quarter mile, then did 25 pull-ups, 50 push-ups, and 75 air squats, then ran another quarter mile, in like 15 minutes, which I was pretty impressed by. And my five-year-old too, so they were really
2:35:58
excited about that. And I'm glad that I'm teaching them kind of the value of
2:36:03
physicality. I think a good day for Max, my daughter, is when she gets to go to the gym with me and crank out a bunch of pull-ups. I love that about her, and I think it's good; hopefully I'm teaching her some good lessons.
2:36:18
But, I mean, the broader question here is: given how busy you are, given how much stuff you have going on in your life, what's the perfect exercise regimen for you to
2:36:33
keep yourself happy, to keep yourself productive in your main line of work?
2:36:40
Yeah. So, I mean, right now
2:36:41
I'm focusing most of my workouts on fighting, so jiu-jitsu and MMA. But I don't know, maybe if you're a professional you can do that every day; I can't. I just get too many bruises and things that you need to recover from. So I do that three to four times a week, and then on the other days I just try to do a mix of things, like cardio, conditioning,
2:37:09
strength building, mobility.
2:37:12
So you try to do something physical every day?
2:37:14
Yeah, I try to, unless I'm just so tired that I just need to relax, but then I'll still try to go for a walk or something. I mean, even here, I don't know if you've been on the roof here yet? No. We'll go on the roof after. When we designed this building, we put a park on the roof, so that's where my meetings are when I'm just doing kind of a one-on-one or talking to a couple of people. I have a very hard time just sitting; I feel like you get super stiff,
2:37:39
like it feels really bad. But I don't know, being physical is very important to me. And, this gets to the question about AI: I do not believe that a being is just a mind. I think we're kind of meant to do things physically, and a lot of the sensations that we feel are connected to that. And I think that's a lot of what makes you a human: basically
2:38:09
having that set of sensations and experiences, coupled with a mind to reason about them. But I don't know, I think it's important, for balance, to kind of get out, challenge yourself in different ways, learn different skills, clear your mind. Do you think
2:38:35
AI, or when it becomes superintelligent, AGI, should have a body?
2:38:41
It depends on
2:38:42
what the goal is. I think there's this assumption in that question that intelligence should be kind of person-like, whereas, as we were just talking about, you can have these greater-than-single-human intelligent organisms, like the stock market, which obviously do not have bodies and do not speak a language, right, and just kind of have their
2:39:11
own system. But, I don't know, my guess is there will be limits to what a system that is purely an intelligence can understand about the human condition, without having the same, not just senses, but, like...
2:39:30
our bodies change as we get older, right? We kind of evolve, and those very subtle
2:39:39
physical changes just drive a lot of social patterns and behavior, around, like, when you choose to have kids, right? Well, that's not even subtle, that's a major one, right? But also, like, how you design things around the house. So yeah, I think, if the goal is to understand people as much as possible, I think
2:40:00
that's
2:40:03
trying to model those sensations is probably somewhat important. But I think there's a lot of value that can be created
2:40:08
by having intelligence, even if it is
2:40:10
separate from that, as a separate thing. So, one of the features of being human is that we're mortal. We die. We've talked a lot about AI and about potential replicas of ourselves. Do you think there will be an AI replica of you and me that persists long after we're gone, that family and loved ones can talk to?
2:40:34
I think we'll have the capacity to do something like that and I think one of the big questions
2:40:40
that we've had to struggle with in the context of social networks is who gets to make that. And, you know, my answer to that, in the context of the work that we're doing, is that that should be your choice. I don't think anyone should be able to choose to make a Lex bot that people can
2:41:00
choose to talk to and train. And we have this precedent of making some of these calls, where someone can create a page for a Lex fan club, but can't create a page and say that it is Lex, right? So similarly, I think maybe someone should be able to make an AI that's a Lex admirer that someone can talk to, but I think it should
2:41:29
ultimately be your call whether
2:41:33
there is a Lex AI.
2:41:35
Well, I'm open-sourcing the Lex. So, you're a man of faith. What role has faith played in your life, in your understanding of the world, your understanding of your own life, your understanding of your work, and how your work impacts the world?
2:41:57
Yeah, I think there are a few different
2:41:58
parts of this that are relevant. There's sort of a philosophical part, and there's a cultural part. One of the most basic lessons is right at the beginning of Genesis, right? God creates the Earth, and creates people, and creates people in God's image. And there's the question of what that means. The only context that you have about God at that point in the Old Testament is that God has created things. So I always
2:42:27
thought that one of the interesting lessons from that is that
2:42:31
there's a virtue in creating things.
2:42:34
whether it's artistic or whether you're building things that are functionally useful for other people, creating, by itself, is a good. And that kind of drives a lot of how I think about morality and my personal philosophy around what is a good life, right? I think it's one where you're
2:43:05
helping the people around you, and you're being a kind of positive, creative force in the world that is helping to bring new things into the world, whether they're amazing other people, kids,
2:43:24
or just leading to the creation of different things that wouldn't have been possible otherwise. And so that's a value for me that matters deeply, and I'm trying to impart this value to the kids; I just love spending time with them. Nothing makes me happier than when I come home from work and I see my daughters building Legos on the table or something. It's like, all right, I did that when I was a kid, right? So many other people are doing this;
2:43:53
I hope you don't lose that spirit, where, when you kind of grow up, you want to just continue building different things, no matter what it is. To me, that's a lot of what matters.
2:44:05
That's the philosophical piece. Then the cultural piece is just about community and values, and that part of things, I think, has just become a lot more important to me since I've had kids. You know, it's almost autopilot when you're a kid; you're in the kind of getting-imparted-to phase of your life. And I didn't really think about religion that much for a while, when I was in college,
2:44:30
before I had kids. And then I think having kids has this way of really making you think about what traditions you want to impart, and how you want to celebrate, and
2:44:42
what balance you want in your life, and a bunch of the questions that you've asked and a bunch of the things that we're talking about. Just the irony of the curtains coming down as we're talking about mortality, once again? Yeah, same as last time. This is just how the universe works, and we are definitely living in a simulation. But go ahead. On community, tradition, and the values that faith brings: a lot of the topics that we've talked about
2:45:12
today are around, how do you,
2:45:17
how do you balance, whether it's running a company or different responsibilities, with
2:45:23
this,
2:45:26
yeah, how do you kind of balance that? And I also just think that it's very grounding to
2:45:34
believe that there is something that is much bigger than you that is guiding things,
2:45:40
that, amongst other things, gives you a bit of humility.
2:45:48
As you pursue that spirit of creating. You spoke to creating beauty in the world; as Dostoevsky said, beauty will save the world. Mark, I'm a huge fan of yours, honored to be able to call you a friend, and I'm looking forward to both kicking your ass and you kicking my ass on the mat tomorrow in jiu-jitsu, that incredible sport and art that we both participate in. Thank you so much for talking today.
2:46:18
Thank you for everything you're doing in so many exciting realms of technology and human life. I can't wait to talk to you again, in the metaverse.
2:46:28
Thanks for listening to this conversation with Mark Zuckerberg. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Isaac Asimov: "It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be."
2:46:57
Thank you for listening and hope to see you next time.