A good conversation, where do you still find one? Come talk with human chatbot Paul Rutger Bastiaan about ‘kunst & matige intelligentie’ (art & middling intelligence, a play on the Dutch for artificial intelligence). Can a machine make art? Is A.I. God, or HAL 9000?
Between 31 March and 16 April, as a human chatbot at Festival Kaalstaart, I asked people the question ‘can an automated system (AI) make art?’, which first raises the question of what art is. “Art is: I can do something you can’t, and you like it,” someone said. Someone else said that everyone may decide for themselves whether something is art. We live in a ‘post-truth society’ and therefore also in a ‘post-art society’. To the question ‘Can AI make art?’ someone answered, “No, A.I. is precisely the opposite of art.” Someone else offered a thought experiment: an artist programs a system to generate a new artwork every day. After three days the artist drops dead. The machine keeps printing new work. Is that still the work of that artist?
Someone said: no, a machine cannot make art, only humans can. In a world that puts humans at its centre there is something to be said for that, but I think we have become so lost in our own marvellous systems that we no longer know who we are.
My clients were very grateful for the conversations.
Below is an edited version of a conversation between Kara Swisher and Tristan Harris, co-founder of the Center for Humane Technology.
Tristan Harris: “AI is the most powerful and consequential technology we’ve ever deployed, and we’re deploying it faster than any other one in history. It took Facebook four and a half years to get to a hundred million users. It took TikTok nine months, it took ChatGPT two months.
People say, “Why are we suddenly so worried about AI? AI has existed for 20 years. Siri still mispronounces my name and Google Maps still pronounces my address wrong. We haven’t freaked out about it until now.”
So we explain to people that in 2017 AI changed, because a new class of AI came out called transformers. It’s 200 lines of code, based on deep learning, and it created this explosive wave of AI built on generative, large language, multi-modal models: GLLMM.
To differentiate this new era of AI from the past, and to make people understand why it is so explosive, we decided to call them Golem-class AIs, after the Jewish myth of an inanimate object that gains animate capabilities.
One of the things with generative large language models is that as you pump more information into them and train them, they actually gain new capabilities that the engineers themselves didn’t program into them.
(…)
Human beings tend to get obsessed with the question of whether these systems can think. But that just demonstrates the predispositions of humans. Imagine Neanderthals making Homo sapiens in a lab and becoming obsessed with the question of whether this thing is more intelligent: will it be sentient like Neanderthals? It’s just a bias of how our brains work. What really matters is: can you anticipate the capabilities of something that’s smarter than you? Imagine you’re living in a Neanderthal brain. You can’t conceive of what humans will do once they start inventing computation, inventing energy, inventing oil-based hydrocarbon economies, inventing language.
(…)
There are enormous dangers that can emerge from growing these capabilities and entangling this new alien intelligence with society faster than we actually know what’s there.
(…)
AI may have already taken control of humanity in the form of social media. What are all of us running around doing every day? What are all of our political fears? What are all of our elections? They’re all driven by social media. It’s been feeding us the worldviews that define how we see reality for the past 10 years, mostly by the noisiest people. That has warped our collective consciousness. So how free are you really if all the information you’ve ever been looking at has been determined by an AI? We’re running confirmation bias on a stack of stuff that has been pre-selected from the ‘outrage selection feed’ of Twitter and the rest of them. So you could argue that AI has already taken over society in a subtle way. Not in the sense that its values are driving us, but in the sense that, just like we don’t have regular chickens anymore, we have the kind of chickens that have been domesticated for their eggs and meat. We don’t have regular cows, we have the kind of cows that have been domesticated for their milk and their meat. We don’t have regular humans anymore. We have AI-engagement-optimized humans.
(…)
One of the more compelling criticisms of an ‘AI pause’ is the danger of the U.S. falling behind China. If worse actors beat you to AI dominance, people with no morals, with digital-authoritarianism values or Chinese Communist Party values, then we certainly don’t want to lose to them.
But I would actually argue that unregulated deployment of AI is exactly what is causing the West to lose to China. Social media was the unregulated deployment of AI to society, and look what happened. Democracies are backsliding everywhere around the world, all at once. I’m not blaming it all on social media, but we’re seeing it happen rapidly in all these countries that have been governed by the information environment created by social media. And if a society cannot coordinate, can it deal with poverty? Can it deal with inequality? Can it deal with climate change? We shot ourselves in the foot and now we’re going for the arms. So I would say unregulated deployment of AI would be the very reason we lose to China.
(…)
People talk about AI firms becoming the new urban planners of the attentional landscape, which is the race to dominate, own, and commodify human experience. Social media is the biggest player in that space. VR is in that space. YouTube is in that space. Netflix is in that space. All the things that construct your reality, from the moment you wake up to the moment you close your eyes at night, that’s the attention economy.
AI will supercharge the harms of social media. Before, we had people A/B testing a handful of messages on social media and figuring out, like Cambridge Analytica, which one works best for each political tribe. Now you’re going to have AIs do that, so you can actually sample a virtual group. Instead of running focus groups around the world, you can have a chatbot that you talk to, and it will answer questions as if it’s a 35-year-old in Kansas City with two kids. You can run perfect message testing, so you don’t need to talk to people anymore. You can do a million things like that. The loneliness crisis that we see, the mental health crisis that we see, the sexualization of young kids that we see, the online harassment situation that we see, all of that is just going to get supercharged with AI. It’s the ability to create ‘AlphaPersuade’, which is like AlphaGo and AlphaZero, where the system plays chess against itself and gets better and better. It’s now going to be able to hyper-manipulate you and hyper-persuade you.
(…)
We were too late with social media because we waited for it to entangle itself with journalism, with media, with elections, with business. Now businesses can only reach their consumers if they have an Instagram page and use marketing on Facebook and Instagram and so on. Social media captured too many of the fundamental life organs of our society. And now it’s very hard to regulate because certain parties benefit, certain politicians benefit. Would you want to ban TikTok if you’re a politician or a party that’s currently winning a lot of elections by being really good at TikTok?
Once things start to entangle themselves, it’s very hard to regulate them. There are too many vested interests. With AI, we have not yet allowed this thing to roll out.
(…)
Think of AI as a biosafety level 10 lab. Imagine that I invent a pathogen that, the second it is released, kills everyone instantly. Let’s just imagine that that was actually possible. You might say, “Well, let’s let people have that scientific capacity. We want to test it so we can build a vaccine or prevention systems against this pathogen that kills everyone instantly.”
But the question is: if we don’t have biosafety level 10 precautions, would we want to pursue that experimental research when all we have is the biosafety level 10 dangerous capabilities?
I think the deeper question is that you cannot have the power of Gods without the wisdom, love, and prudence of Gods. And right now we are handing out and democratizing God-like powers without actually knowing what would constitute the love, prudence and wisdom that’s needed for them. In the parable of The Lord of the Rings, why do they want to throw the Ring into Mount Doom? Because they say: “If we’re not actually wise enough to hold the Ring and put it on, let’s collectively never put on that Ring.”
(…)
My mother died of cancer. Like any human being, I would do anything to have my mother still be here with me. If you told me that there was an AI that was going to be able to discover a cure for my mother, obviously I would want that cure. But what if the only way for that cure to be developed was to also unleash capabilities that would wreck the world? The confusing thing is: is it possible, on the current development path, to get the goods without the bads? What if it isn’t?
(…)
If AI is unleashed and democratized to everybody, then no matter how high the tower of benefits that AI assembles, if it simultaneously crumbles the foundation of that tower, it won’t really matter. What kind of society can receive a cancer drug if no one knows what’s true, if there are cyberattacks everywhere, if things are blowing up and there are pathogens that have locked down the world? Remember how bad COVID was? And that was just one pandemic. Imagine if that happens a few more times. We saw supply chains break down. We saw how much money had to be printed to keep the economy going. It’s pretty easy to break society if you have a few more of these things.
How will cancer drugs flow in a society that has stopped working? I don’t mean AI doom, the Eliezer Yudkowsky AGI that kills everybody in one instant. I’m talking about dysfunction on a grand scale.
(…)
Let’s imagine there are two attractors for where the world is going right now. One attractor is: we trust everyone to do the right thing and we distribute god-like AI powers to everyone. Everyone can build bioweapons, everyone can make generative media, find loopholes in law, manipulate religions, fake everything. That world lands in continual chaos and catastrophe, because it’s just handing everyone the power to do anything. That’s one attractor. Think of it like a gravitational field, sucking the world toward one attractor of continual catastrophe.
The other side is dystopia: instead of trusting everyone to do the right thing with these superhuman powers, we don’t trust anyone to do the right thing. So we create a dystopian state that surveils and monitors everyone. That’s the Chinese digital-authoritarianism outcome, the other deep attractor for the world. The world is currently moving towards both of those. And the more frequently the continual catastrophes happen, the more they will drive us towards the dystopia. So in both cases, we’re getting a self-reinforcing loop.
(…)
We need a middle way, or third attractor, which has the values of an open, democratic society in which people have freedom. But instead of naively trusting everyone to do the right thing, and instead of trusting no one to do the right thing, we have what’s called warranted trust. Think of it as a loop. Technology, to the degree it impacts society, has to constitute a wiser, more responsible, more enlightened culture. A more enlightened culture supports stronger, upgraded institutions, which set the regulation and guardrails for better technology, which then creates a loop for constituting better culture. That’s the upward spiral.
We are currently living in the downward spiral. Technology decoheres culture, stokes outrage, ‘lonelifies’ us. The resulting culture can’t support any institutional responses to anything. That incapacitated, dysfunctional set of institutions doesn’t regulate technology, which allows the downward spiral to continue. The upward spiral is what we need to get to.
(…)
Taiwan is on the third way, actually proving that you can use technology in a way that gets you into the upward spiral. They’re having a national debate about it, modeled on what happened around the film The Day After.
This is a movie about nuclear war, and groups convened all over the country to watch the movie and then discuss it. It really was terrifying at the time. It was a TV movie commissioned by ABC, born of the observation that the possibility of nuclear war existed in a repressed place inside the human mind. No one wanted to think about it, although it actually was a real possibility: the Cold War was active, and it was escalating with Reagan and Gorbachev. So they decided to make a film, and it became the most-watched made-for-TV film in all of TV history. A hundred million Americans tuned in. Reagan watched it in the White House film studio. His biographer later said he was depressed for weeks. The film aired in the Soviet Union a few years later and scared the hell out of the Russians too.
It made visible and visceral the repressed idea that we actually had the power to destroy ourselves. After the film, they aired a one-hour debate with Carl Sagan, Henry Kissinger, Brent Scowcroft, and Holocaust survivor Elie Wiesel, to really debate what we were facing. And that was a democratic way of saying: “We don’t want five people at the Departments of Defense in Russia and the US deciding whether humanity exists tomorrow or not.”
(…)
Audrey Tang’s work in Taiwan shows that you can use AI to find unlikely consensus across groups. She creates digitally augmented processes where people put in all their ideas and opinions about AI, and AI is then used to find the coherence, the areas of agreement that we all share. This is not techno-utopianism, it’s techno-realism: applying AI to get a faster OODA loop, a faster observe, orient, decide, and act loop, so that institutions move as fast as the evolutionary pace of technology. Taiwan is the closest example to that.
(…)