If free nations demand companies store data locally, it legitimizes that practice for authoritarian nations, which can then steal that data for their own nefarious purposes, according to Facebook CEO Mark Zuckerberg. He laid out the threat in a new 93-minute video of a discussion with Sapiens author Yuval Noah Harari released today as part of Zuckerberg’s 2019 personal challenge of holding public talks on the future of tech.
Zuckerberg has stated that Facebook will refuse to comply with laws requiring it to set up local data centers in authoritarian countries where that data could be snatched.
Russia and China already have data localization laws, but privacy concerns and regulatory proposals could see more nations adopt the restrictions. Germany now requires telecommunications metadata to be stored locally, and India does something similar for payments data.
In democratic, justly governed nations, these laws can help protect user privacy and give governments more leverage over tech companies. But they pave the way for similar laws in nations whose governments might use military force to seize the data. That could help those regimes enhance their surveillance capabilities, disrupt activism, or hunt down dissidents.
Zuckerberg explains that:
“When I look towards the future, one of the things that I just get very worried about is the values that I just laid out [for the internet and data] are not values that all countries share. And when you get into some of the more authoritarian countries and their data policies, they’re very different from the kind of regulatory frameworks that across Europe and across a lot of other places, people are talking about or put into place . . . And the most likely alternative to each country adopting something that encodes the freedoms and rights of something like GDPR, in my mind, is the authoritarian model, which is currently being spread, which says every company needs to store everyone’s data locally in data centers and then, if I’m a government, I can send my military there and get access to whatever data I want and take that for surveillance or military. I just think that that’s a really bad future. And that’s not the direction, as someone who’s building one of these internet services, or just as a citizen of the world, I want to see the world going. If a government can get access to your data, then it can identify who you are and go lock you up and hurt you and your family and cause real physical harm in ways that are just really deep.”
That assumes authoritarian governments care whether their decisions have been legitimized elsewhere, which may not be true. But for nations in the middle of the spectrum of human rights and just law, seeing role-model countries adopt these laws might convince them it’s acceptable.
Zuckerberg said on this week’s Facebook earnings call that Facebook accepts the risks to its business of being shut down in authoritarian countries where it refuses to comply with data localization laws.
Throughout the talk, Zuckerberg explained his view that a lack of strong positive communities and economic opportunities push people to join extremist groups or slip into destructive behavior. That’s why he’s so focused on making Groups a centerpiece of Facebook’s product.
Is The User Always Right?
There was one big question to which Zuckerberg failed to give a straight answer: can we trust users to do what’s right for them and society in an age of manipulation by authoritarian governments, self-serving politicians, and greedy capitalist algorithms?
Harari did a great job of crystallizing this question and bringing the conversation back to it again and again, even as Zuckerberg challenged the premise that much has changed here rather than offering a direct response. Harari says:
“What I’m hearing from you and from many other people when I have these discussions, is ultimately the customer is always right, the voter knows best, people know deep down, people know what is good for them. People make a choice: If they choose to do it, then it’s good. And that has been the bedrock of, at least, Western democracies for centuries, for generations. And this is now where the big question mark is: Is it still true in a world where we have the technology to hack human beings and manipulate them like never before that the customer is always right, that the voter knows best? Or have we gone past this point? And we can know– and the simple, ultimate answer that “Well, this is what people want,” and “they know what’s good for them,” maybe it’s no longer the case.”
For Facebook, that raises the questions of whether users can be trusted to properly protect their own privacy, to share facts rather than false news that fits their agenda, to avoid clickbait and low-value viral videos, and, most importantly, to stop browsing Facebook when it no longer positively impacts their lives.
Zuckerberg replied that “it’s not clear to me that that has changed . . . I think people really don’t like and are very distrustful when they feel like they’re being told what to do.” Yet that ignores how the urge for self-defeating or society-defeating behavior can come from inside after a lifetime of grooming by tech platforms.
Given we’re already prone to sugar, gambling, and TV addictions, the addition of online manipulation could further stoke our short-sighted tendencies. Until Zuckerberg can admit humans don’t always do what’s right for themselves and their world, it will be difficult for Facebook to change to support us in moments of decision-making weakness rather than exploit us.
We’ll have more analysis on Zuckerberg’s talk soon. Here’s the full transcript:
Mark Zuckerberg: Hey everyone. This year I’m doing a series of public discussions on the future of the internet and society and some of the big issues around that, and today I’m here with Yuval Noah Harari, a great historian and best-selling author of a number of books. His first book, “Sapiens: A Brief History of Humankind”, kind of chronicled and did an analysis going from the early days of hunter-gatherer society to now how our civilization is organized, and your next two books, “Homo Deus: A Brief History of Tomorrow” and “21 Lessons for the 21st Century”, actually tackle important issues of technology and the future, and that’s I think a lot of what we’ll talk about today. But most historians only tackle and analyze the past, but a lot of the work that you’ve done has had really interesting insights and raised important questions for the future. So I’m really glad to have an opportunity to talk with you today. So Yuval, thank you for joining for this conversation.
Yuval Noah Harari: I’m happy to be here. I think that if historians and philosophers cannot engage with the current questions of technology and the future of humanity, then we aren’t doing our jobs. Only you’re not just supposed to chronicle events centuries ago. All the people that lived in the past are dead. They don’t care. The question is what happens to us and to the people in the future.
Mark Zuckerberg: So all the questions that you’ve outlined– where should we start here? I think one of the big topics that we’ve talked about is around– this dualism around whether, with all of the technology and progress that has been made, are people coming together, and are we becoming more unified, or is our world becoming more fragmented? So I’m curious to start off by how you’re thinking about that. That’s probably a big area. We could probably spend most of the time on that topic.
Yuval Noah Harari: Yeah, I mean, if you look at the long span of history, then it’s obvious that humanity is becoming more and more connected. If thousands of years ago Planet Earth was actually a galaxy of a lot of isolated worlds with almost no connection between them, so gradually people came together and became more and more connected, until we reach today when the entire world for the first time is a single historical, economic, and cultural unit. But connectivity doesn’t necessarily mean harmony. The people we fight most often are our own family members and neighbors and friends. So it’s really a question of are we talking about connecting people, or are we talking about harmonizing people? Connecting people can lead to a lot of conflicts, and when you look at the world today, you see this duality in– for example, in the rise of walls, which we talked a little bit about earlier when we met, which for me is something that I just can’t figure out what is happening, because you have all these new connecting technology and the internet and virtual realities and social networks, and then the most– one of the top political issues becomes building walls, and not just cyber-walls or firewalls– building stone walls; like the most Stone Age technology is suddenly the most advanced technology. So how to make sense of this world which is more connected than ever, but at the same time is building more walls than ever before.
Mark Zuckerberg: I think one of the interesting questions is around whether there’s actually so much of a conflict between these ideas of people becoming more connected and this fragmentation that you talk about. One of the things that it seems to me is that– in the 21st century, in order to address the biggest opportunities and challenges that humanity– I think it’s both opportunities– spreading prosperity, spreading peace, scientific progress– as well as some of the big challenges– addressing climate change, making sure, on the flipside, that diseases don’t spread and there aren’t epidemics and things like that– we really need to be able to come together and have the world be more connected. But at the same time, that only works if we as individuals have our economic and social and spiritual needs met. So one way to think about this is in terms of fragmentation, but another way to think about it is in terms of personalization. I just think about when I was growing up– one of the big things that I think that the internet enables is for people to connect with groups of people who share their real values and interests, and it wasn’t always like this. Before the internet, you were really tied to your physical location, and I just think about how when I was growing up– I grew up in a town of about 10 thousand people, and there were only so many different clubs or activities that you could do. So I grew up, like a lot of the other kids, playing Little League baseball. And I kind of think about this in retrospect, and it’s like, “I’m not really into baseball. I’m not really an athlete. So why did I play Little League when my real passion was programming computers?” And the reality was that growing up, there was no one else really in my town who was into programming computers, so I didn’t have a peer group or a club that I could do that. 
It wasn’t until I went to boarding school and then later college where I actually was able to meet people who were into the same things as I am. And now I think with the internet, that’s starting to change, and now you have the availability to not just be tethered to your physical location, but to find people who have more niche interests and different kind of subcultures and communities on the internet, which I think is a really powerful thing, but it also means that me growing up today, I probably wouldn’t have played Little League, and you can think about me playing Little League as– that could have been a unifying thing, where there weren’t that many things in my town, so that was a thing that brought people together. So maybe if I was creating– or if I was a part of a community online that might have been more meaningful to me, getting to know real people but around programming, which was my real interest, you would have said that our community growing up would have been more fragmented, and people wouldn’t have had the same kind of sense of physical community. So when I think about these problems, one of the questions that I wonder is maybe– fragmentation and personalization, or finding what you actually care about, are two sides of the same coin, but the bigger challenge that I worry about is whether– there are a number of people who are just left behind in the transition who were people who would have played Little League but haven’t now found their new community, and now just feel dislocated; and maybe their primary orientation in the world is still the physical community that they’re in, or they haven’t really been able to find a community of people who they’re interested in, and as the world has progressed– I think a lot of people feel lost in that way, and that probably contributes to some of the feelings. That would my hypothesis, at least. I mean, that’s the social version of it. 
There’s also the economic version around globalization, which I think is as important, but I’m curious what you think about that.
Yuval Noah Harari: About the social issue, online communities can be a wonderful thing, but they are still incapable of replacing physical communities, because there are still so many things–
Mark Zuckerberg: That’s definitely true. That’s true.
Yuval Noah Harari: –that you can only do with your body, and with your physical friends, and you can travel with your mind throughout the world but not with your body, and there is huge questions about the cost and benefits there, and also the ability of people to just escape things they don’t like in online communities, but you can’t do it in real offline communities. I mean, you can unfriend your Facebook friends, but you can’t un-neighbor your neighbors. They’re still there. I mean, you can take yourself and move to another country if you have the means, but most people can’t. So part of the logic of traditional communities was that you must learn how to get along with people you don’t like necessarily, maybe, and you must develop social mechanisms how to do that; and with online communities– I mean, and they have done some really wonderful things for people, but also they kind of don’t give us the experience of doing these difficult but important things.
Mark Zuckerberg: Yeah, and I definitely don’t mean to state that online communities can replace everything that a physical community did. The most meaningful online communities that we see are ones that span online and offline, that bring people together– maybe the original organization might be online, but people are coming together physically because that ultimately is really important for relationships and for– because we’re physical beings, right? So whether it’s– there are lots of examples around– whether it’s an interest community, where people care about running but they also care about cleaning up the environment, so a group of organize online and then they meet every week, go for a run along a beach or through a town and clean up garbage. That’s a physical thing. We hear about communities where people– if you’re in a profession, in maybe the military or maybe something else, where you have to move around a lot, people form these communities of military families or families of groups that travel around, and the first thing they do when they go to a new city is they find that community and then that’s how they get integrated into the local physical community too. So that’s obviously a super important part of this, that I don’t mean to understate.
Yuval Noah Harari: Yeah, and then the question– the practical question for also a service provider like Facebook is: What is the goal? I mean, are we trying to connect people so ultimately they will leave the screens and go and play football or pick up garbage, or are we trying to keep them as long as possible on the screens? And there is a conflict of interest there. I mean, you could have– one model would be, “We want people to stay as little as possible online. We just need them to stay there the shortest time necessary to form the connection, which they will then go and do something in the outside world,” and that’s one of the key questions I think about what the internet is doing to people, whether it’s connecting them or fragmenting society.
Mark Zuckerberg: Yeah, and I think your point is right. I mean, we basically went– we’ve made this big shift in our systems to make sure that they’re optimized for meaningful social interactions, which of course the most meaningful interactions that you can have are physical, offline interactions, and there’s always this question when you’re building a service of how you measure the different thing that you’re trying to optimize for. So it’s a lot easier for us to measure if people are interacting or messaging online than if you’re having a meaningful connection physically, but there are ways to get at that. I mean, you can ask people questions about what the most meaningful things that they did– you can’t ask all two billion people, but you can have a statistical subsample of that, and have people come in and tell you, “Okay, what are the most meaningful things that I was able to do today, and how many of them were enabled by me connecting with people online, or how much of it was me connecting with something physically, maybe around the dinner table, with content or something that I learned online or saw.” So that is definitely a really important part of it. But I think one of the important and interesting questions is about the richness of the world that can be built where you have, on one level, unification or this global connection, where there’s a common framework where people can connect. Maybe it’s through using common internet services, or maybe it’s just common social norms as you travel around. One of the things that you pointed out to me in a previous conversation is now something that’s different from at any other time in history is you could travel to almost any other country and look like you– dress like you’re appropriate and that you fit in there, and 200 years ago or 300 years ago, that just wouldn’t have been the case. If you went to a different country, you would have just stood out immediately. 
So there’s this norm– there’s this level of cultural norm that is united, but then the question is: What do we build on top of that? And I think one of the things that a broader kind of set of cultural norms or shared values and framework enables is a richer set of subcultures and subcommunities and people to actually go find the things that they’re interested in, and lots of different communities to be created that wouldn’t have existed before. Going back to my story before, it wasn’t just my town that had Little League. I think when I was growing up, basically every town had very similar things– there’s a Little League in every town– and maybe instead of every town having Little League, there should be– Little League should be an option, but if you wanted to do something that not that many people were interested in– in my case, programming; in other people’s case, maybe interest in some part of history or some part of art that there just may not be another person in your ten-thousand-person town who share that interest– I think it’s good if you can form those kind of communities, and now people can find connections and can find a group of people who share their interests. I think that there’s a question of– you can look at that as fragmentation, because now we’re not all doing the same things, right? We’re not all going to church and playing Little League and doing the exact same things. Or you can think about that as richness and depth-ness in our social lives, and I just think that that’s an interesting question, is where you want the commonality across the world and the connection, and where you actually want that commonality to enable deeper richness, even if that means that people are doing different things. I’m curious if you have a view on that and where that’s positive versus where that creates a lack of social cohesion.
Yuval Noah Harari: Yeah, I mean, I think almost nobody would argue with the benefits of richer social environment in which people have more options to connect around all kind of things. The key question is how do you still create enough social cohesion on the level of a country and increasing also on the level of the entire globe in order to tackle our main problems. I mean, we need global cooperation like never before because we are facing unprecedented global problems. We just had Earth Day, and to be obvious to everybody, we cannot deal with the problems of the environment, of climate change, except through global cooperation. Similarly, if you think about the potential disruption caused by new technologies like artificial intelligence, we need to find a mechanism for global cooperation around issues like how to prevent an AI arms race, how to prevent different countries racing to build autonomous weapons systems and killer robots and weaponizing the internet and weaponizing social networks. Unless we have global cooperation, we can’t stop that, because every country will say, “Well, we don’t want to produce killer robot– it’s a bad idea– but we can’t allow our rivals to do it before us, so we must do it first,” and then you have a race to the bottom. Similarly, if you think about the potential disruptions to the job market and the economy caused by AI and automation. So it’s quite obvious that there will be jobs in the future, but will they be evenly distributed between different parts of the world? One of the potential results of the AI revolution could be the concentration of immense wealth in some part of the world and the complete bankruptcy of other parts. There will be lot of new jobs for software engineers in California, but there will be maybe no jobs for textile workers and truck drivers in Honduras and Mexico. So what will they do? 
If we don’t find a solution on the global level, like creating a global safety net to protect humans against the shocks of AI, and enabling them to use the opportunities of AI, then we will create the most unequal economic situation that ever existed. It will be much worse even than what happened in the Industrial Revolution when some countries industrialized– most countries didn’t– and the few industrial powers went on to conquer and dominate and exploit all the others. So how do we create enough global cooperation so that the enormous benefits of AI and automation don’t go only, say, to California and Eastern China while the rest of the world is being left far behind.
Mark Zuckerberg: Yeah, I think that that’s important. So I would unpack that into two sets of issues– one around AI and the future economic and geopolitical issues around that– and let’s put that aside for a second, because I actually think we should spend 15 minutes on that. I mean, that’s a big set of things.
Yuval Noah Harari: Okay. Yeah, that’s a big one.
Mark Zuckerberg: But then the other question is around how you create the global cooperation that’s necessary to take advantage of the big opportunities that are ahead and to address the big challenges. I don’t think it’s just fighting crises like climate change. I think that there are massive opportunities around global–
Yuval Noah Harari: Definitely. Yeah.
Mark Zuckerberg: Spreading prosperity, spreading more human rights and freedom– those are things that come with trade and connection as well. So you want that for the upside. But I guess my diagnosis at this point– I’m curious to hear your view on this– is I actually think we’ve spent a lot of the last 20 years with the internet, maybe even longer, working on global trade, global information flow, making it so that people can connect. I actually think the bigger challenge at this point is making it so that in addition to that global framework that we have, making it so that things work for people locally. Right? Because I think that there’s this dualism here where you need both. If you just– if you resort to just kind of local tribalism then you miss the opportunity to work on the really important global issues; but if you have a global framework but people feel like it’s not working for them at home, or some set of people feel like that’s not working, then they’re not politically going to support the global collaboration that needs to happen. There’s the social version of this, which we talked about a little bit before, where people are now able to find communities that match their interests more, but some people haven’t found those communities yet and are left behind as some of the more physical communities have receded.
Yuval Noah Harari: And some of these communities are quite nasty also. So we shouldn’t forget that. <laughs>
Mark Zuckerberg: Yes. So I think they should be– yes, although I would argue that people joining kind of extreme communities is largely a result of not having healthier communities and not having healthy economic progress for individuals. I think most people when they feel good about their lives, they don’t seek out extreme communities. So there’s a lot of work that I think we as an internet platform provider need to do to lock that down even further, but I actually think creating prosperity is probably one of the better ways, at a macro level, to go at that. But I guess–
Yuval Noah Harari: But I will maybe just stop there a little. People that feel good about themselves have done some of the most terrible things in human history. I mean, we shouldn’t confuse people feeling good about themselves and about their lives with people being benevolent and kind and so forth. And also, they wouldn’t say that their ideas are extreme, and we have so many examples throughout human history, from the Roman Empire to slave trade into modern age and colonialism, that people– they had a very good life, they had a very good family life and social life; they were nice people– I mean, I guess, I don’t know, most Nazi voters were also nice people. If you meet them for a cup of coffee and you talk about your kids, they are nice people, and they think good things about themselves, and maybe some of them can have very happy lives, and even the ideas that we look back and say, “This was terrible. This was extreme,” they didn’t think so. Again, if you just think about colonialism–
Mark Zuckerberg: Well, but World War II, that came through a period of intense economic and social disruption after the Industrial Revolution and–
Yuval Noah Harari: Let’s put aside the extreme example. Let’s just think about European colonialism in the 19th century. So people, say, in Britain in the late 19th century, they had the best life in the world at the time, and they didn’t suffer from an economic crisis or disintegration of society or anything like that, and they thought that by going all over the world and conquering and changing societies in India, in Africa, in Australia, they were bringing lots of good to world. So I’m just saying that so that we are more careful about not confusing the good feelings people have about their life– it’s not just miserable people suffering from poverty and economic crisis.
Mark Zuckerberg: Well, I think that there’s a difference between the example that you’re using of a wealthy society going and colonizing or doing different things that had different negative effects. That wasn’t the fringe in that society. I guess what I was more reacting to before was your point about people becoming extremists. I would argue that in those societies, that wasn’t those people becoming extremists; you can have a long debate about any part of history and whether the direction that a society chose to take is positive or negative and the ramifications of that. But I think today we have a specific issue, which is that more people are seeking out solutions at the extremes, and I think a lot of that is because of a feeling of dislocation, both economic and social. Now, I think that there’s a lot of ways that you’d go at that, and I think part of it– I mean, as someone who’s running one of the internet platforms, I think we have a special responsibility to make sure that our systems aren’t encouraging that– but I think broadly, the more macro solution for this is to make sure that people feel like they have that grounding and that sense of purpose and community, and that their lives are– and that they have opportunity– and I think that statistically what we see, and sociologically, is that when people have those opportunities, they don’t, on balance, as much, seek out those kind of groups. And I think that there’s the social version of this; there’s also the economic version. I mean, this is the basic story of globalization, is on the one hand it’s been extremely positive for bringing a lot of people into the global economy. People in India and Southeast Asia and across Africa who wouldn’t have previously had access to a lot of jobs in the global economy now do, and there’s been probably the greatest– at a global level, inequality is way down, because hundreds of millions of people have come out of poverty, and that’s been positive. 
But the big issue has been that, in developed countries, there have been a large number of people who are now competing with all these other people who are joining the economy, and jobs are moving to these other places, so a lot of people have lost jobs. For some of the people who haven’t lost jobs, there’s now more competition for those jobs, for people internationally, so their wages– that’s one of the factors, I would– the analyses have shown– that’s preventing more wage growth; and there are 5 to 10 percent of people, according to a lot of the analyses that I’ve seen, who are actually in absolute terms worse off because of globalization. Now, that doesn’t necessarily mean that globalization for the whole world is negative. I think in general it’s been, on balance, positive, but the story we’ve told about it has probably been too optimistic, in that we’ve only talked about the positives and how it’s good as this global movement to bring people out of poverty and create more opportunities; and the reality I think has been that it’s been net very positive, but if there are 5 or 10 percent of people in the world who are worse off– there’s 7 billion people in the world, so that’s many hundreds of millions of people, the majority of whom are likely in the most developed countries, in the U.S. and across Europe– that’s going to create a lot of political pressure on those in those countries. So in order to have a global system that works, it feels like– you need it to work at the global level, but then you also need individuals in each of the member nations in that system to feel like it’s working for them too, and that recurses all the way down, so even local cities and communities, people need to feel like it’s working for them, both economically and socially. So I guess at this point the thing that I worry about– and I’ve rotated a lot of Facebook’s energy to try to focus on this– is– our mission used to be connecting the world. 
Now it’s about helping people build communities and bringing people closer together, and a lot of that is because I actually think that the thing that we need to do to support more global connection at this point is making sure that things work for people locally. In a lot of ways we’d made it so the internet– so that an emerging creator can–
Yuval Noah Harari: But then how do you balance working it locally for people in the American Midwest, and at the same time working it better for people in Mexico or South America or Africa? I mean, part of the imbalance is that when people in Middle America are angry, everybody pays attention, because they have their finger on the button. But if people in Mexico or people in Zambia feel angry, we care far less because they have far less power. I mean, the pain– and I’m not saying the pain is not real. The pain is definitely real. But the pain of somebody in Indiana reverberates around the world far more than the pain of somebody in Honduras or in the Philippines, simply because of the imbalances of the power in the world. Earlier, what we said about fragmentation, I know that Facebook faces a lot of criticism about kind of encouraging people, some people, to move to these extremist groups, but– that’s a big problem, but I don’t think it’s the main problem. I think also it’s something that you can solve– if you put enough energy into that, that is something you can solve– but this is the problem that gets most of the attention now. What I worry more– and not just about Facebook, about the entire direction that the new internet economy and the new tech economy is going towards– is increasing inequality between different parts of the world, which is not the result of extremist ideology, but the result of a certain economic and political model; and secondly, undermining human agency and undermining the basic philosophical ideas of democracy and the free market and individualism. These I would say are my two greatest concerns about the development of technology like AI and machine learning, and this will continue to be a major problem even if we find solutions to the issue of social extremism in particular groups.
Mark Zuckerberg: Yeah, I certainly agree that extremism isn’t– I would think about it more as a symptom and a big issue that needs to be worked on, but I think the bigger question is making sure that everyone has a sense of purpose, has a role that they feel matters and social connections, because at the end of the day, we’re social animals and I think it’s easy in our theoretical thinking to abstract that away, but that’s such a fundamental part of who we are, so that’s why I focus on that. I don’t know, do you want to move over to some of the AI issues, because I think that that’s a– or do you want to stick on this topic for a second or–?
Yuval Noah Harari: No, I mean, this topic is closely connected to AI. And again, because I think that, you know, one of the disservices that science fiction, and I’m a huge fan of science fiction, but I think it has done some, also some pretty bad things, which is to focus attention on the wrong scenarios and the wrong dangers that people think, “Oh, AI is dangerous because the robots are coming to kill us.” And this is extremely unlikely that we’ll face a robot rebellion. I’m much more frightened about robots always obeying orders than about robots rebelling against the humans. I think the two main problems with AI, and we can explore this in greater depth, is what I just mentioned, first increasing inequality between different parts of the world because you’ll have some countries which lead and dominate the new AI economy and this is such a huge advantage that it kind of trumps everything else. And we will see, I mean, if we had the Industrial Revolution creating this huge gap between a few industrial powers and everybody else and then it took 150 years to close the gap, and over the last few decades the gap has been closed or closing as more and more countries which were far behind are catching up. Now the gap may reopen and be much worse than ever before because of the rise of AI and because AI is likely to be dominated by just a small number of countries. So that’s one issue, AI inequality. And the other issue is AI and human agency or even the meaning of human life, what happens when AI is mature enough and you have enough data to basically hack human beings and you have an AI that knows me better than I know myself and can make decisions for me, predict my choices, manipulate my choices and authority increasingly shifts from humans to algorithms, so not only decisions about which movie to see but even decisions like which community to join, who to befriend, whom to marry will increasingly rely on the recommendations of the AI.
Mark Zuckerberg: Yeah.
Yuval Noah Harari: And what does it do to human life and human agency? So these I would say are the two most important issues of inequality and AI and human agency.
Mark Zuckerberg: Yeah. And I think both of them get down to a similar question around values, right, and who’s building this and what are the values that are encoded and how does that end up playing out. I tend to think that in a lot of the conversations around AI we almost personify AI, right; your point around killer robots or something like that. But, but I actually think it’s AI is very connected to the general tech sector, right. So almost every technology product and increasingly a lot of not what you call technology products have– are made better in some way by AI. So it’s not like AI is a monolithic thing that you build. It’s it powers a lot of products, so it’s a lot of economic progress and can get towards some of the distribution of opportunity questions that you’re raising. But it also is fundamentally interconnected with these really socially important questions around data and privacy and how we want our data to be used and what are the policies around that and what are the global frameworks. And so one of the big questions that– So, so I tend to agree with a lot of the questions that you’re raising which is that a lot of the countries that have the ability to invest in future technology of which AI and data and future internet technologies are certainly an important area are doing that because it will give, you know, their local companies an advantage in the future, right, and to be the ones that are exporting services around the world. And I tend to think that right now, you know, the United States has a major advantage that a lot of the global technology platforms are made here and, you know, certainly a lot of the values that are encoded in that are shaped largely by American values. They’re not only. 
I mean, we, and I, speaking for Facebook, and we serve people around the world and we take that very seriously, but, you know, certainly ideas like giving everyone a voice, that’s something that is probably very shaped by the American ideas around free speech and strong adherence to that. So I think culturally and economically, there’s an advantage for countries to develop to kind of push forward the state of the field and have the companies that in the next generation are the strongest companies in that. So certainly you see different countries trying to do that, and this is very tied up in not just economic prosperity and inequality, but also–
Yuval Noah Harari: Do they have a real chance? I mean, does a country like Honduras, Ukraine, Yemen have any real chance of joining the AI race? Or are they– they are already out? I mean, they are, it’s not going to happen in Yemen, it’s not going to happen in Honduras? And then what happens to them in 20 years or 50 years?
Mark Zuckerberg: Well, I think that some of this gets down to the values around how it’s developed, though. Right, is, you know, I think that there are certain advantages that countries with larger populations have because you can get to critical mass in terms of universities and industry and investment and things like that. But one of the values that we, both at Facebook and I think generally the academic research system, hold is that you do open research, right. So a lot of the work that’s getting invested into these advances, in theory if this works well should be more open so then you can have an entrepreneur in one of these countries that you’re talking about which, you know, maybe isn’t a whole industry-wide thing and, you know, certainly, I think you’d bet against, you know, sitting here today that in the future all of the AI companies are going to be in a given small country. But I don’t think it’s far-fetched to believe that there will be an entrepreneur in some places who can use Amazon Web Services to spin up instances for compute, who can hire people across the world in a globalized economy and can leverage research that has been done in the U.S. or across Europe or in different open academic institutions or companies that increasingly are publishing their work that are pushing the state of the art forward on that. So I think that there’s this big question about what we want the future to look like. And part of the way that I think we want the future to look is we want it to be– we want it to be open. We want the research to be open. I think we want the internet to be a platform. And this gets back to your unification point versus fragmentation. One of the big risks, I think, for the future is that the internet policy in each country ends up looking different and ends up being fragmented.
And if that’s the case, then I think the entrepreneur in the countries that you’re talking about, in Honduras, probably doesn’t have as big of a chance if they can’t leverage the– all the advances that are happening everywhere. But if the internet stays one thing and the research stays open, then I think that they have a much better shot. So when I look towards the future, one of the things that I just get very worried about is the values that I just laid out are not values that all countries share. And when you get into some of the more authoritarian countries and their data policies, they’re very different from the kind of regulatory frameworks that across Europe and across a lot of other places, people are talking about or put into place. And, you know, just to put a finer point on it, recently I’ve come out and I’ve been very vocal that I think that more countries should adopt a privacy framework like GDPR in Europe. And a lot of people I think have been confused about this. They’re like, “Well, why are you arguing for more privacy regulation? You know, why now given that in the past you weren’t as positive on it.” And I think part of the reason why I am so focused on this now is I think at this point people around the world recognize that these questions around data and AI and technology are important so there’s going to be a framework in every country. I mean, it’s not like there’s not going to be regulation or policy. So I actually think the bigger question is what is it going to be.
And the most likely alternative to each country adopting something that encodes the freedoms and rights of something like GDPR, in my mind, the most likely alternative is the authoritarian model which is currently being spread, which says, you know, every company needs to store everyone’s data locally in data centers and you know, if I’m a government, I should be able to, you know, go send my military there and be able to get access to whatever data I want and be able to take that for surveillance or military or helping, you know, local military industrial companies. And I mean, I just think that that’s a really bad future, right. And that’s not– that’s not the direction that I, as, you know, someone who’s building one of these internet services or just as a citizen of the world want to see the world going.
Yuval Noah Harari: To be the devil’s advocate for a moment,–
Mark Zuckerberg: <laughs>
Yuval Noah Harari: I mean, if I look at it from the viewpoint, like, of India, so I listen to the American President saying, “America first and I’m a nationalist, I’m not a globalist. I care about the interests of America,” and I wonder, is it safe to store the data about Indian citizens in the U.S. and not in India when they’re openly saying they care only about themselves. So why should it be in America and not in India?
Mark Zuckerberg: Well, I think that there’s, the motives matter and certainly, I don’t think that either of us would consider India to be an authoritarian country that had– So, so I would say that, well, it’s– <laughs>
Yuval Noah Harari: Well, it can still say–
Mark Zuckerberg: You know, it’s– <laughs>
Yuval Noah Harari: We want data and metadata on Indian users to be stored on Indian soil. We don’t want it to be stored in– on American soil or somewhere else.
Mark Zuckerberg: Yeah. And I can understand the arguments for that and I think that there’s– The intent matters, right. And I think countries can come at this with open values and still conclude that something like that could be helpful. But I think one of the things that you need to be very careful about is that if you set that precedent you’re making it very easy for other countries that don’t have open values and that are much more authoritarian and want the data not to– not to protect their citizens but to be able to surveil them and find dissidents and lock them up. That– So I think one of the– one of the–
Yuval Noah Harari: No, I agree, I mean, but I think that it really boils down to the questions that do we trust America. And given the past two, three years, people in more and more places around the world– I mean, previously, say if we were sitting here 10 years ago or 20 years ago or 40 years ago, then America declared itself to be the leader of the free world. We can argue a lot whether this was the case or not, but at least on the declaratory level, this was how America presented itself to the world. We are the leaders of the free world, so trust us. We care about freedom. But now we see a different America, America which doesn’t want even to be– And again, it’s not a question of even what they do, but how America presents itself no longer as the leader of the free world but as a country which is interested above all in itself and in its own interests. And just this morning, for instance, I read that the U.S. is considering having a veto on the U.N. resolution against using sexual violence as a weapon of war. And the U.S. is the one that thinks of vetoing this. And as somebody who is not a citizen of the U.S., I ask myself, can I still trust America to be the leader of the free world if America itself says I don’t want this role anymore.
Mark Zuckerberg: Well, I think that that’s a somewhat separate question from the direction that the internet goes then, because I mean, GDPR, the framework that I’m advocating, that it would be better if more countries adopted something like this because I think that that’s just significantly better than the alternatives, a lot of which are these more authoritarian models. I mean, GDPR originated in Europe, right.
Yuval Noah Harari: Yeah.
Mark Zuckerberg: And so that, because it’s not an American invention. And I think in general, these values of openness in research, of cross-border flow of ideas and trade, that’s not an American idea, right. I mean, that’s a global philosophy for how the world should work and I think that the alternatives to that are at best fragmentation, right which breaks down the global model on this; at worst, a growth in authoritarianism for the models of how this gets adopted. And that’s where I think that the precedents on some of this stuff get really tricky. I mean, you can– You’re, I think, doing a good job of playing devil’s advocate in the conversation–
Yuval Noah Harari: <laughs>
Mark Zuckerberg: Because you’re bringing all of the counterarguments that I think someone with good intent might bring to argue, “Hey, maybe a different set of data policies is something that we should consider.” The thing that I just worry about is that what we’ve seen is that once a country puts that in place, that’s a precedent that then a lot of other countries that might be more authoritarian use to basically be a precedent to argue that they should do the same things and, and then that spreads. And I think that that’s bad, right. And that’s one of the things that as the person running this company, I’m quite committed to making sure that we play our part in pushing back on that, and keeping the internet as one platform. So I mean, one of the most important decisions that I think I get to make as the person running this company is where are we going to build our data centers and store– and store data. And we’ve made the decision that we’re not going to put data centers in countries that we think have weak rule of law, that where people’s data may be improperly accessed and that could put people in harm’s way. And, you know, I mean, a lot has been– There have been a lot of questions around the world around questions of censorship and I think that those are really serious and important. I mean, I, a lot of the reason why I build what we build is because I care about giving everyone a voice, giving people as much voice as possible, so I don’t want people to be censored. At some level, these questions around data and how it’s used and whether authoritarian governments get access to it I think are even more sensitive because if you can’t say something that you want, that is highly problematic. That violates your human rights. I think in a lot of cases it stops progress. But if a government can get access to your data, then it can identify who you are and go lock you up and hurt you and hurt your family and cause real physical harm in ways that are just really deep. 
So I do think that people running these companies have an obligation to try to push back on that and fight establishing precedents which will be harmful. Even if a lot of the initial countries that are talking about some of this have good intent, I think that this can easily go off the rails. And when you talk about in the future AI and data, which are two concepts that are just really tied together, I just think the values that that comes from and whether it’s part of a more global system, a more democratic process, a more open process, that’s one of our best hopes for having this work out well. If it’s, if it comes from repressive or authoritarian countries, then, then I just think that that’s going to be highly problematic in a lot of ways.
Yuval Noah Harari: That raises the question of how do we– how do we build AI in such a way that it’s not inherently a tool of surveillance and manipulation and control? I mean, this goes back to the idea of creating something that knows you better than you know yourself, which is kind of the ultimate surveillance and control tool. And we are building it now. In different places around the world, it’s been built. And what are your thoughts about how to build an AI which serves individual people and protects individual people and not an AI which can easily with a flip of a switch becomes kind of the ultimate surveillance tool?
Mark Zuckerberg: Well, I think that that is more about the values and the policy framework than the technological development. I mean, a lot of the research that’s happening in AI is just very fundamental mathematical methods where, you know, a researcher will create an advance and now all of the neural networks will be 3 percent more efficient. I’m just kind of throwing this out.
Yuval Noah Harari: Yeah.
Mark Zuckerberg: And that means that, all right, you know, newsfeed will be a little bit better for people. Our systems for detecting things like hate speech will be a little bit better. But it’s, you know, our ability to find photos of you that you might want to review will be better. But all these systems get a little bit better. So now I think the bigger question is you have places in the world where governments are choosing to use that technology and those advances for things like widespread face recognition and surveillance. And those countries, I mean, China is doing this, they create a real feedback loop which advances the state of that technology where, you know, they say, “Okay, well, we want to do this,” so now there’s a set of companies that are sanctioned to go do that and they have– are getting access to a lot of data to do it because it’s allowed and encouraged. So, so that is advancing and getting better and better. It’s not– That’s not a mathematical process. That’s kind of a policy process that they want to go in that direction. So those are their– the values. And it’s an economic process of the feedback loop in development of those things. Compared to in countries that might say, “Hey, that kind of surveillance isn’t what we want,” those companies just don’t exist as much, right, or don’t get as much support and–
Yuval Noah Harari: I don’t know. And my home country of Israel is, at least for Jews, it’s a democracy.
Mark Zuckerberg: That’s–
Yuval Noah Harari: And it’s one of the leaders of the world in surveillance technology. And we basically have one of the biggest laboratories of surveillance technology in the world which is the occupied territories. And exactly these kinds of systems–
Mark Zuckerberg: Yeah.
Yuval Noah Harari: Are being developed there and exported all over the world. So given my personal experience back home, again, I don’t necessarily trust that just because a society in its own inner workings is, say, democratic, that it will not develop and spread these kinds of technologies.
Mark Zuckerberg: Yeah, I agree. It’s not clear that a democratic process alone solves it, but I do think that it is mostly a policy question, right. It’s, you know, a government can quite easily make the decision that they don’t want to support that kind of surveillance and then the companies that they would be working with to support that kind of surveillance would be out of business. And, and then, or at the very least, have much less economic incentive to continue that technological progress. So, so that dimension of the growth of the technology gets stunted compared to others. And that’s– and that’s generally the process that I think you want to follow broadly, right. So technological advance isn’t by itself good or bad. I think it’s the job of the people who are shepherding it, building it and making policies around it to have policies and make sure that their effort goes towards amplifying the good and mitigating the negative use cases. And, and that’s how I think you end up bending these industries and these technologies to be things that are positive for humanity overall, and I think that that’s a normal process that happens with most technologies that get built. But I think what we’re seeing in some of these places is not the natural mitigation of negative uses. In some cases, the economic feedback loop is pushing those things forward, but I don’t think it has to be that way. But I think that that’s not as much a technological decision as it is a policy decision.
Yuval Noah Harari: I fully agree. But I mean, it’s every technology can be used in different ways for good or for bad. You can use the radio to broadcast music to people and you can use the radio to broadcast Hitler giving a speech to millions of Germans. The radio doesn’t care. The radio just carries whatever you put in it. So, yeah, it is a policy decision. But then it just raises the question, how do we make sure that the policies are the right policies in a world when it is becoming more and more easy to manipulate and control people on a massive scale like never before. I mean, the new technology, it’s not just that we invent the technology and then we have good democratic countries and bad authoritarian countries and the question is what will they do with the technology. The technology itself could change the balance of power between democratic and totalitarian systems.
Mark Zuckerberg: Yeah.
Yuval Noah Harari: And I fear that the new technologies are inherent– are giving an inherent advantage, not necessarily overwhelming, but they do tend to give an inherent advantage to totalitarian regimes. Because the biggest problem of totalitarian regimes in the 20th century, which eventually led to their downfall, is that they couldn’t process the information efficiently enough. If you think about the Soviet Union, so you have this model, an information processing model which basically says, we take all the information from the entire country, move it to one place, to Moscow. There it gets processed. Decisions are made in one place and transmitted back as commands. This was the Soviet model of information processing. And versus the American version, which was, no, we don’t have a single center. We have a lot of organizations and a lot of individuals and businesses and they can make their own decisions. In the Soviet Union, there is somebody in Moscow, if I live in some small farm or kolkhoz in Ukraine, there is somebody in Moscow who tells me how many radishes to grow this year because they know. And in America, I decide for myself with, you know, I get signals from the market and I decide. And the Soviet model just didn’t work well because of the difficulty of processing so much information quickly and with 1950s technology. And this is one of the main reasons why the Soviet Union lost the Cold War to the United States. But with the new technology, it’s suddenly, it might become, and it’s not certain, but one of my fears is that the new technology suddenly makes central information processing far more efficient than ever before and far more efficient than distributed data processing. Because the more data you have in one place, the better your algorithms and then so on and so forth. And this kind of tilts the balance between totalitarianism and democracy in favor of totalitarianism.