Transcript of our conversation 8 January 2023, edited for accuracy, with external links
Eric Topol
It’s a pleasure to have Liv Boeree as our Ground Truths podcast guest today. I met her at the TED meeting in October dedicated to AI. I think she's one of the most interesting people I’ve met in years, and this is the first time I've ever interviewed a professional poker player who has won world championships. We're going to go through that whole story, so welcome, Liv.
Liv Boeree
Thanks for having me, Eric.
Eric Topol
You have an amazing background, having studied physics and astrophysics at the University of Manchester. Back around 2005 you landed in the poker world. Maybe you could help us understand how you went from physics to poker.
From Physics to Poker
Liv Boeree
Ah, yeah. It's a strange story. I graduated, as you said, in 2005, and I had student debt and needed to get a job. I had plans to continue in academia; I wanted to do a master's and then a PhD to work in astrophysics in some way, but I needed to make some money, so I started applying for TV game shows, and it was on one of these game shows that I first learned how to play poker. They were looking for beginners, and the loose premise of the show was which personality type is best suited to learning the game. I didn't win that particular show, but we were playing for a winner-take-all prize of £100,000, which would have been a life-changing amount of money for me at the time. It was a light-bulb moment. I've always been a very competitive person, but poker in particular really spoke to my soul. I always wanted to play in what was often considered a boys' game, where I could be a girl beating the boys at their own game. I hadn't played that many card games in particular, but I just loved any game that was very cutthroat, which poker certainly is. From that point onwards I thought, you know what, I'm going to put physics on hold and see if I can make it in this poker world instead, and I never really looked back.
Eric Topol
Well, you sure made it in that world. I know you retired back in about 2019, but that was after you won all sorts of world and European championships and beat a lot of men, no less. What were some of the things that made you such a phenomenal player?
Liv Boeree
The main thing with poker, the most important ingredient if you really want to make it as a professional, is that you have to be extremely competitive. I have not met any top pros who don't have that degree of killer instinct when it comes to the game. That doesn't mean you're competitive in everything else in life, but you have to have a passion for looking someone in the eye, mentally modeling them, thinking how to outwit them and put them into difficult situations within the game, and then taking pleasure in that. There's a certain personality type that tends to enjoy that. The other key facet is you have to be comfortable thinking in terms of probability. The cards are shuffled between every hand, so there's an inherent degree of randomness. On the scale from pure roulette, which is all luck and no skill, to a game like chess, which has almost no luck (as close to 100% skill as you can get), poker lies somewhere in the middle, and of course the more you play, the bigger the skill edge and the smaller the luck factor. That's why professionals can exist. It's a game of both luck and skill, which I think is what makes it so interesting, because that's what life is really, right? We're trying to get our business off the ground, we're trying to compete in the dating market, whatever it is. We're doing our strategy, but life can throw you curveballs: you can do everything right and still things don't go the way you intended, or vice versa. Yet there are also strategies we can employ to improve our chances of success. Those are the sorts of skills, particularly this idea of grayscale probabilistic thinking, that poker players really have to hone.
I've always wondered whether having a background in science, or at least a scientific degree, helped in that regard, because of course the scientific method is about understanding variables, minimizing uncertainty as much as possible, and understanding what confounding factors can bias your results. That's always going on in a poker player's mind; you'll have concurrent hypotheses. This guy just made a huge bet into me when that ace came out: is it because he actually has an ace, or is it because he's pretending to have an ace? You've got to weigh all the bits of information in as unbiased a way as possible to come to a correct conclusion, and even then you can never be certain. This idea of understanding biases and understanding probabilities is, I think, why a lot of top poker players have backgrounds in scientific degrees; a very good friend of mine has a PhD in physics. Over time, especially, poker has become a much more scientific pursuit. When I first started to play it was very much a game of street smarts and intuition, in part because we didn't have the technological tools to really understand the mechanics of the game. You couldn't record all your playing data if you were playing in a casino unless you were writing down your hands; otherwise this information wasn't getting stored anywhere. But then online poker came along, which meant you could store all this data on your laptop and build tools to analyze it, and so the game became a much more technical, scientific pursuit.
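The hypothesis-weighing Liv describes is essentially a Bayesian update. Here is a minimal Python sketch of the ace-or-bluff example; all the probabilities are purely illustrative numbers, not anything stated in the conversation.

```python
# Weighing two concurrent hypotheses about a big bet after an ace falls:
# "he actually has an ace" vs. "he's pretending to have an ace."

def posterior_has_ace(prior_ace: float,
                      p_bet_given_ace: float,
                      p_bet_given_bluff: float) -> float:
    """Bayes' rule: P(ace | bet) from a prior and two likelihoods."""
    evidence = (prior_ace * p_bet_given_ace
                + (1 - prior_ace) * p_bet_given_bluff)
    return prior_ace * p_bet_given_ace / evidence

# Illustrative numbers only: the opponent holds an ace 30% of the time,
# bets 90% of the time with it, and bluffs 20% of the time without it.
print(round(posterior_has_ace(0.30, 0.90, 0.20), 3))  # 0.659
```

The bet shifts a 30% prior to roughly a two-thirds belief that he has the ace, but, as Liv says, you can never be certain.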
Eric Topol
That actually gets to the human side of poker, not the online version. Especially since we're going to be mainly talking about AI: the term “poker face,” the ability to bluff, is that a big part of this?
Liv Boeree
Oh, absolutely. You can't be a good poker player if you don't ever bluff, because your opponents will start to notice. It means you're only ever putting your money on the line when you have a good hand, so why would they ever pay you off? The point of poker is to maximize deception, so you have to use strategies where some of the time you have a strong hand and some of the time you're bluffing with a weak hand. The key, and this is getting into the technical, game-theory side of it, is that you want to be making these bluffs versus what we call value bets (betting with a good hand) at the right sort of frequency. You need the right ratios between them, so bluffing is a very core part of the game, and yes, having a poker face obviously helps because you want to be as inscrutable to your opponents as possible. At the same time, online poker is an enormously popular game where you can't see your opponents' faces.
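The bluff-to-value ratio Liv alludes to has a standard game-theory answer for a simple river bet. This is textbook poker indifference math, not something derived in the transcript, sketched in a few lines:

```python
# For a bet of size `bet` into a pot of size `pot`, choose the bluffing
# fraction f of your betting range so the caller is indifferent:
#   f * (pot + bet) - (1 - f) * bet = 0  =>  f = bet / (pot + 2 * bet)

def optimal_bluff_fraction(pot: float, bet: float) -> float:
    return bet / (pot + 2 * bet)

# A pot-sized bet should be a bluff one time in three,
# i.e. a 1:2 bluff-to-value-bet ratio.
print(round(optimal_bluff_fraction(pot=100, bet=100), 3))  # 0.333
```

Bigger bets justify a higher bluffing frequency, which is why bet sizing and bluffing ratios are inseparable in modern, solver-era poker.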
Eric Topol
Right, right.
Liv Boeree
Yet you can still bluff which could actually lead us into this topic of AI because now the best players in the world are actually AIs.
Eric Topol
Well, it's interesting because it takes out that human component of being able to bluff face to face, and it may be good for people who don't have a poker face. They can play online poker and be good at it because they don't need that disguise, if you will.
Liv Boeree
Right.
Game Theory and Moloch Traps
Eric Topol
That gets me to game theory and a big part of the talk you gave at the TED conference, about something I think a lot of the folks listening aren't familiar with: Moloch traps. Could you enlighten us about that? The talk, which of course we'll link to, is so illuminating and apropos to the AI landscape we face today.
Liv Boeree
Yeah, I’ll leave it for people to go and watch the TED talk, because that will explain the backstory of how it came to be called a Moloch trap much more succinctly than I can here. Moloch is a sort of biblical figure, a demon, and it seems strange to apply such a concept to what's basically a collection of game-theoretic incentives. The more formal name for it is a multipolar trap, which some listeners may be familiar with. Essentially, a Moloch trap or multipolar trap is one of those situations where you have a lot of different people all competing for one particular thing, say, who can collect the most fish out of a lake. The trap occurs when everyone is incentivized to get as much of that thing as possible, to go for a specific objective, but if everyone ends up doing it, the overall environment ends up worse off than before. We're seeing this with plastic pollution. It's not like packaging companies want to fill the oceans with plastic; they don't want this outcome, and it doesn't make them look good. They're all caught in the trap of needing to maximize profits, and one of the most efficient ways of doing that is to externalize costs outside of their P&L by using cheap packaging that perhaps ends up in the lakes or the oceans. As a CEO you face the decision: I could take the more expensive, selfless action, but if I don't do the selfish thing, I know my competitors will, so I might as well do it anyway, because the world's going to end up in roughly the same outcome whether I do it or not. Because everyone ends up adopting this mindset, they end up trapped in this bad situation.
Another way of thinking of it: if you're watching football at a stadium, or a concert, before the show starts everyone's sitting down, but then a few people near the front want a better view, so they stand up. That forces the people right behind them to make a decision: I don't really want to block the people behind me, but I can't see anymore, so now I have to stand up. The whole thing cascades until everyone is stuck standing for the rest of the show. No one actually has a comparative advantage anymore, no one's got a particularly better view than before, because now everyone's standing, but overall everyone is net worse off because they have to stand for the whole thing, and there's no easy way for everyone to coordinate. A Moloch trap is the result of a competitive landscape where individual short-term incentives push people to take actions that, from a God's-eye view, from the whole system's perspective, make everyone worse off than before, and because there are so many people it's too hard for everyone to coordinate and go back to the state before. It creates these arms-race dynamics, these tragedies of the commons. They're all the result of Moloch traps, which is essentially just another name for bad short-term incentives that hurt the whole.
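The fishing-lake version of the trap reduces to a classic prisoner's-dilemma payoff structure. A toy sketch, with payoff numbers invented for illustration (chosen only so that defecting always pays slightly more individually):

```python
# Each fisher chooses to "restrain" or "overfish"; PAYOFF[(mine, theirs)]
# gives my payoff. Illustrative numbers, not from the conversation.
PAYOFF = {
    ("restrain", "restrain"): 2,  # lake stays healthy for both
    ("overfish", "restrain"): 3,  # I profit at the commons' expense
    ("restrain", "overfish"): 0,  # I hold back, they strip the lake
    ("overfish", "overfish"): 1,  # mutual defection: worse than (2, 2)
}

def best_response(their_move: str) -> str:
    """The move that maximizes my payoff given the other fisher's move."""
    return max(("restrain", "overfish"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Overfishing is the best response to either choice the other makes...
print(best_response("restrain"), best_response("overfish"))  # overfish overfish
# ...so both defect and each gets 1, even though mutual restraint pays 2.
```

That gap between the individually rational outcome (1, 1) and the collectively better one (2, 2) is exactly the "everyone worse off than before" state Liv describes.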
Eric Topol
No, that's great. You know, someday you should write the book on competition, because you have a deep understanding of it. You understand the whole range, from healthy competition, sometimes we call it managed competition, the kind that brings out the best in people, to unhealthy, I might even call it reckless, competition, as I mentioned when we were together. Now let's go to, as you say, arms races, nuclear and otherwise; there are so many examples of this, but in the AI world, you were polite during your talk because you referred to one of the major CEOs, without actually mentioning his name, talking about making one of the other large AI titans dance as part of the competition. I think you came onto something very important, which is that we're interested in the safety of AI. As we move towards what seems to be inevitable artificial general intelligence, and we'll talk more about that, there are certainly concerns, at least by a significant number, perhaps a plurality, of people, that this is or can be dangerous, and yet this arms race, if you will, of AI is ongoing. What are your thoughts about that? How seriously bad is this competition?
“I hope with our [ChatGPT] innovation they will want to come out and show that they can dance. I want people to know we made them dance”—Satya Nadella, Microsoft CEO, on Google
The A.I. Arms Race
Liv Boeree
If it were the case that, in building powerful AI systems, it was trivially easy to align them with the best of humanity and minimize accidents, then we would want more competition, because more competition would encourage everyone to go faster and faster, and we would want to get to that point as fast as possible. However, if we are in a world where it is not trivially easy to align powerful AI systems with what we want, to make sure they can't do reward hacking or create some kind of unintended consequence, or fall easily into the wrong hands, into the hands of people who want to use them for nefarious purposes, then we wouldn't want as much competition as possible, because that would make everything go faster. The thing is, when your trajectory is pointing in the wrong direction, the last thing you want is more speed, right? I have not yet seen a compelling argument that the current trajectory is sufficiently aligned with what is good for humanity, and certainly not for the biosphere we rely upon. This is not just about AI; it's the wider techno-capital system in many ways. Obviously, capitalism has been wonderful for us. We are living here, speaking across the airwaves in warm, comfortable environments. We have good food, and God bless capitalism for providing us all with that. At the same time, there are clearly externalities piling up in our biosphere, whether through climate change, pollution, and so on. One particular thing about AI is that if we're going to hack the process of intelligence itself, it means intelligence becomes, by definition, ubiquitous. It can be used to make more of whatever you want to do; you can do it more efficiently, faster, more effectively.
If you think the system is aligned with exactly what we want, then that's a good thing, but I see lots of evidence of the ways it is not sufficiently aligned, and I'm very concerned that if we're not thinking in more depth about which goals we should be optimizing for in the first place, then we're just going to keep blindly going forward as fast as possible and create a bunch of unintended consequences, or even, in some cases, intended ones, with it falling, as I said, into the wrong hands.
Eric Topol
You're right on it. I think the issue is how to get the right balance of progress versus guardrails.
Liv Boeree
You mentioned this particular CEO that I quoted in the TED talk. Again, I won't mention him by name, but anyone can go Google it; he basically said, I want people to know we made our competitor dance. The reason that resonated with me so much is because it reminded me of my old self in my early 20s, when I first learned to play poker. As I said, to win at poker, which is by definition a zero-sum game, you need this cutthroat, almost bordering on psychopathic, willingness to go after your opponents and get them by the throat, metaphorically speaking, to take their money, right? That mindset can be very useful when you're playing a game where the boundaries are clearly defined, everyone is opting in, and there are minimal externalities and harms to the wider world. But if you're using that same mindset to build something as powerful as artificial general intelligence? No one is certain whether that's going to be trivially easy or impossible, whether it will be controllable or completely uncontrollable, whether we're making a new species or just another tool or technology. No one really knows. But what I do know is that that is not the mindset or the impetus we want from the leaders building such incredibly powerful tools. Tools that could be used to make them more powerful than any human ever in history, tools that they may even lose control of themselves; we don't know. That's really what alarms me the most: first of all, that we might have leaders with that mindset in the first place. But even if they were all very wise and had a positive-sum mindset, even if they weren't out there just trying to compete against each other in, pardon my French, a dick-swinging contest, even if they were perfectly enlightened, they'd still be trapped in this difficult game-theoretic dilemma, this Moloch trap.
I want to let my team build this safely as a priority, but I know the other guys might not do it as safely, so if I go too slowly, they're going to get there ahead of me and deploy their really powerful systems first, so I have to go faster myself. And what suffers if everyone's trying to go as fast as possible? The slow, boring stuff: safety checks, evaluation testing, and so on. This is the fundamental nature of the problem we need to be having more honest conversations about, and it's twofold. First, it's the mindset of the people building it. Now, I know some of them personally; they're amazing people. Some of these CEOs I deeply respect, and I think they understand the nature of the problem and are really trying their best not to fall into this Moloch mindset, but there are others who truly seem to be trying to solve some childhood trauma through all this. I don't want to psychoanalyze them too much, but whatever's going on there. Second, you have the game-theoretic dilemma itself. We need to be tackling both of these, because we're building something so powerful, whether it's AGI or not. Even narrow AI systems, LLMs, are getting increasingly generalizable and multimodal; they're starting to encroach into your area of expertise, into biology. I was reading about it, I can't remember which chatbot it was, but there's a really cool paper, which we can link to, on arXiv, talking about whether LLMs could be used to democratize access to technology like DNA synthesis. Is that something we want no safeguards on? Because that's sort of what we're careening towards, and there are people actively pushing the position that you can't deny anyone access to information. If you Google "how do I build a bomb" right now, something comes up on the front page.
But they don't give you the step-by-step recipe, and yes, okay, you could go and get your chemistry degree and some books and figure out how to build a bomb, but the point is there's a high barrier to entry. As these LLMs become more generalizable and more accessible, the barrier to entry falls for anyone with a murderous, omnicidal, or terrorist mindset; these tools are going to be falling into the hands of more and more of these people, and it's going to be easier and easier for them to actually get hold of this information. There is no clear answer for what to do about this. How do we strike a balance between allowing the free flow of information, so that we're not stifling innovation, which would also be terrible, and not, even worse, creating some kind of centrally controlled, top-down, tyrannical control of the internet that says who can read what? That's an awful outcome. But in the other direction, we can't have it widely available to people like ISIS, or whoever, how to build a pathogen that makes COVID look like the common cold. How do we navigate this terrain so that we don't end up in tyranny or self-terminating chaos? I don't know, but those are the problems we have to figure out.
Effective Altruism
Eric Topol
The idea that you conceptualize what's going on in AI as a Moloch trap is, I think, exceedingly important. Now, you also cited a few companies that deserve at least some credit for their words, such as OpenAI, which says it is putting 20% of its resources towards alignment, and Anthropic, as well as DeepMind, which has done a lot of great work with AlphaFold2 and life science. But as you said, these are just words; we haven't seen that actually translated into action. As we go forward, one of the terms tossed around a lot, which also surrounded Sam Altman's temporary dismissal from and return to OpenAI, is effective altruism. What is EA?
Liv Boeree
There are two ways of thinking about EA. There's the body of ideas, the principles, which, to summarize as quickly as I can and as best as I understand them, are: there are many different problems on earth, and only finite resources, in terms of intellectual capital and actual capital, to spend on fixing them. Because of that, we need to triage and figure out where the most effective place is to spend our time and money. How do we rank these problems in terms of scale, effectiveness, and so on, and then how do we deploy our resources as efficiently and effectively as possible to solve them? Those are the principles, and out of those principles, over time, sprang up a community of people who adhere to them. I have been very aligned with that: I started a fundraising organization alongside some other poker players back in 2014 following these principles, encouraging poker players to donate to a range of different charities. Most of those had to do with extreme poverty, because if you want to save a life, on average the most cost-effective way to do that is to help people in sub-Saharan Africa dying from extreme-poverty-related illnesses, particularly malaria. It turns out that providing bed nets will, on average, save a life from malaria for about $5,000; there's vitamin A supplementation, etc.
That was my involvement in EA; I'm going off track. But basically, out of that sprang a movement, and as that movement evolved, different categories emerged, because it's very hard to concretely say, well, that's definitely problem number one. You have some problems where, right now, we know there are this many people dying needlessly per day from a particular tropical disease. Or you could zoom out and say, okay, but over the next thirty years these are the kinds of risks civilization is facing, so if we give one of those a 10% probability, it could affect 10% of this many people, so actually this is the biggest issue. Or you could say, I don't just care about human lives, I care about animal lives, in which case the math would lead you to conclude that factory farming is actually the biggest issue, particularly the amount of needless suffering going on in factory farms. There are small rules changes that could be made in the way these animals are treated during slaughter or raised, like pigs in gestation crates; small changes there could have a huge positive impact on billions upon billions of animals' lives per year. So out of these ideas sprang different subcategories of EA, with people focusing on different areas depending on where their personal calculations led them. And within the category of risks to humanity, if you appreciate the game-theoretic dilemmas going on and see just how fast things are moving and how much safety has fallen by the wayside, there are strong arguments that AI becomes a very important topic.
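The triage logic Liv describes, ranking causes by how cheaply each marginal dollar does good, can be sketched like this. Only the roughly $5,000-per-life bed-net figure comes from the conversation; the other interventions and their numbers are invented placeholders for illustration.

```python
# Rank interventions by cost per life saved and see what a fixed
# budget buys with each. Only the bed-net figure is from the
# conversation; the rest are hypothetical.
interventions = {
    "malaria bed nets": 5_000,          # ~$5,000 per life saved
    "hypothetical program A": 40_000,
    "hypothetical program B": 12_000,
}

budget = 100_000
ranked = sorted(interventions.items(), key=lambda kv: kv[1])
for name, cost_per_life in ranked:
    print(f"{name}: ~{budget / cost_per_life:.0f} lives per ${budget:,}")
# Bed nets top the ranking: ~20 lives for the same $100,000 that buys
# only a handful with the hypothetical alternatives.
```

The same comparison generalizes to the harder cases Liv mentions, where the "cost per unit of good" is itself an uncertain, probability-weighted estimate rather than a measured figure.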
Effective altruists became, from what I can see, very concerned about AI long before almost the rest of the world did, and so they became kind of synonymous with the idea of AI safety measures. Then, and I don't really understand this part well, it seems the Sam Altman thing came up because two members of the board had been associated with AI safety and effective altruism, and they were two of the three who, it seems, tried to vote him out. Then this whole hoo-ha drama came up about it. I wish I knew more; I would love to know their reasons for why they felt Sam had to go. Again, I'm purely speculating here, but what I've heard through the grapevine was that it was more to do with him lying to and misrepresenting them, as opposed to a safety concern. But I don't know. So that's, I guess, the Sam Altman EA drama.
The AGI Threat
Eric Topol
In many ways it's emblematic of what we've been talking about, because there were a couple of board members with a lot of angst about pushing hard on AGI. Whether or not there were other things, of course, is a different story, but this is the tension we live in now. On one hand we have leaders like Yann LeCun and Andrew Ng who are not afraid, who say humans are still going to be calling the shots as this gets more and more refined into whatever you want to call AGI, more comprehensive abilities for machines to do things. On the other are the real concerns that Geoffrey Hinton and so many others have voiced, which is that we may not be able to control this. So we'll see how this plays out over time.
Liv Boeree
Look, I hope that Andrew Ng and Yann LeCun turn out to be right. I deeply hope so, but I have yet to see them make compelling arguments, because really the precautionary principle should apply here, right? When we're playing for such high stakes, when we're gambling so high, and there are a lot of people without any skin in the game whose lives are on the line, even if with a very small probability, then you need airtight proof that your systems will do exactly what you want them to. Even with GPT-4, when it came out, obviously there wasn't a threat to humanity in any explicit way, but it went through six months of testing before they released it. Six months, and they got lots of different people involved; they put a lot of effort into testing it to make sure it reliably did what they wanted when users used it. Within three days of it being available on the internet, all kinds of unintended consequences were coming up. It made the front page of The New York Times. Even with six months of testing, and I believe OpenAI really worked hard to make it as bounded as possible, and I'm sure they were expecting some things to slip through, it was trivial, once you got thousands of users on it, to figure out ways to jailbreak it.
Liv Boeree
That's surely a data point showing that, even with lots of testing, it is not a trivially easy problem to ensure the people building a machine will always be able to control it. As systems get more and more powerful, and more emergent properties come out of them as they increase in complexity, which is what emergence seems to do, everything they could do is, if anything, going to become harder to predict, not easier. As I say, I would love for Yann and Andrew to be correct, but I think even both of them, when pushed, for example, on the topic of controlling access to LLMs that could be used for pathogen synthesis in some way, or as a tool to help you figure out which DNA synthesis companies have the least stringent checks on their products and will just send you anything, because some really do have very low stringency there, they didn't have a good answer. They couldn't answer it, and they just go back to, yes, but you can't constrain information; you have to give it all away for free. You can't be an absolutist here; there are tradeoffs. We have to be very careful as a civilization not to swing too far into censorship, or too far into letting all the guardrails off. We have to navigate this, but it is not comforting to me, as a semi-layperson, to see leaders who are building these technologies dismiss the concerns of alignment and unintended consequences as trivially easy problems, when they clearly aren't. That's not filling me with confidence. That's hubris, and I don't want a leader who's showing hubris. And that's the end of my rant.
Eric Topol
It's really healthy to vet the ideas here, and that's what's really unique about you, Liv: you have this poker player's probabilistic thinking, you know competition as fierce as it can be, and we are in such exciting times, but also in many ways daunting ones with respect to where we're headed and where this could lead. I think it's great. I also want to make a plug for Win-Win, a perfect name for the podcast that you do, and I continue to be very interested in your ideas as we go forward, because you have such a unique perspective.
Liv Boeree
Thank you so much, I really appreciate you plugging it. I remain optimistic: there are a lot of well-intentioned, incredibly brilliant people working within the AI industry who do appreciate the nature of the problem. I wish it were as simple as, oh, just let the market decide, just let profit maximization guide everything and that will always result in the best outcome. I wish it were that simple; that would make life much easier. But that's not the case: externalities are real, misalignment of goals is real. We need people to reflect on, and just be honest about, the fact that move fast and break things is not the solution to every problem, especially when the possible thing you are breaking is the very biosphere, the playing field, that we all rely on and live on. Yeah, it's going to be interesting times.
Eric Topol
Well, we didn't solve it, but we sure heard a very refreshing, insightful perspective. Liv, thanks for what you're doing to keep us informed and to help us learn from examples outside the space of AI, and from your background. I look forward to further discussions in the future.
Liv Boeree
Thank you so much. Really appreciate you having me on.