I appeared on @robotson’s podcast, Neomania, where we discussed AI among other things. Listen to it on his site or anywhere fine podcasts are purveyed. Here’s a transcript:
Neomania 6. Memetic Alignment and Vector Vibes w/ @deepfates
Voiceover: Welcome to Neomania, a podcast exploring our obsession with the new and the future.
robotson: Welcome to Neomania. Talking about weird futures. It’s your boy, your host, Lance Robotson, and I’m here today with a special guest. How would I introduce this person? I’m thinking about how you introduce people when you’re friends and you want to gas up your friends to your other friends. This is a cool guy. He goes by Deep Fates. He’s a mysterious entity on the internet. He’s definitely somebody who I am happy to know. You’ve introduced me to a lot of great internet friends, and I would say that almost everyone who’s appeared so far—everybody I’ve actually been able to get to come on my podcast—is probably somebody I met through you, via the various internet communities that you’ve connected me with. So, very grateful and happy to have you on. Mr. Deep Fates.
deepfates: Mr. Deep Fates.
robotson: That’s probably a weird thing to hear people say.
deepfates: Please, Mr. Fates is my father.
robotson: Welcome to the show. Neomania is something I’ve been wanting to do for a while, but I probably haven’t… you actually encouraged me to pick up the podcasting mantle, and so I thank you for that. And thanks for coming on, lending your support.
deepfates: My pleasure. You encouraged me a lot of times to create a podcast every time I threatened to do it, and I never did, so here we are.
robotson: You still haven’t done it yet. Well, I hope you do. I would subscribe to your podcast, and you could subscribe to mine. We could have guest-podcast multiverse crossover cinema events or whatever.
deepfates: Yeah, we’ll create a legitimacy factory by creating several appearances on several different channels.
robotson: Yeah, we’ll have a… it’ll be coordinated inauthentic behavior.
deepfates: Yeah, it’s a campaign they call it, I think.
robotson: Ah, some kind of an operation.
deepfates: Some sort of operation.
robotson: Yeah, so you’re Deep Fates on the Twitter. You recently launched a Substack blog. You’ve been on there for a while, but you just started actually posting stuff to it recently. And you’ve got a website, deepfates.com, where you post about interesting thoughts and experiments you’ve done now and then. There was some stuff on there I’ve been thinking about a lot lately, in regards to some of your ideas about memes. But yeah, I wanted you to come on because, well, I’m into weird futures, as we said before, and you’ve got a lot of interesting perspectives. You’re someone people think of as being knowledgeable in the realm of thinking about AI and AI futures and AI alignment. And I just feel like you’ve got a lot to say about memes, AI, weird political philosophies. So, do you have a spiel that you give when you introduce yourself on podcasts or what?
deepfates: I’ve not done that many of these. Not any that I want to claim. I did appear one time on the Aunty Chat show when he asked me to explain the concept of shadow time. That may or may not be a real concept, but it was one I explained.
robotson: Yeah, he was a guest on here too. He came on and talked about his homie-ism futures and stuff, and he probably got spicier than most of my content so far, calling out a lot of the failure modes of futurisms in the way that he naturally does. But yeah. I feel like AI alignment is a conversation where you’ve got an interesting perspective, because you take it seriously. You’re not someone who just brushes off AI alignment concerns. I’m trying to figure out what I know about you on this. But also, you’re not against AI development per se. And I think we share this in common, that we probably both have positive feelings about technological progress and development and so on, right?
deepfates: Yes.
robotson: So, I’m trying to find an angle into all this. Because if anybody listens to this, and if you post about it, I’m going to get more people listening who know you than who know me. So maybe people would already be familiar, but…
deepfates: But those of you who are listening to this: Robotson is a long-time friend of mine. I’m a friend of the pod, and I found him when we were small Twitter users and I was looking for other weird futurists. I think I had “dirtbag futurist” as my bio at the time, and so did you, or something like that. I think I made a list and started following people who claimed to be futurists, but in a way that wasn’t just shilling global business concerns for the purposes of getting onto stages and doing little panels. And I found you to be one of the most interesting and thoughtful ones, and kept in touch. And you should all go follow him right now on X Twitter at robotson, S-O-N, and, if you’re not already a listener to Neomania, you should subscribe to it in your podcast player of choice.
robotson: Yeah, search for Neomania. And thanks for the plug, thanks for the shill. You invited me into some group chats, and you’re still doing all these weird chats. I think you’ve definitely cultivated an interesting ideological mix of people. Like you’re saying, we don’t agree on everything, you and I. Or anything. We don’t agree on anything at all. But we share some common concerns, the way you just articulated it: people who claim to be futurists but aren’t just trying to get onto institutional stages in a careerist way. I’m paraphrasing here. But I come at it from a similar angle, in that I don’t want to just be, like you said, shilling global business concerns. The way I think about it, there’s a mainstream futurism discourse, or technology discourse, that just aligns with whatever the tech industry’s roadmaps say is coming down the pipeline. And so we don’t get a lot of discourses outside of that, perhaps just because there’s less money involved in backing them up. You’re not going to get as many columns in a trade magazine. Basically, the tech press is kind of like… we need an ethics in tech journalism movement.
deepfates: Techy-gate.
robotson: Arguably we’ve had that.
deepfates: I think you’re right. I would even say that the tech industry press sort of defines a narrative, and then there’s a pushback from the mainstream media outlets outside the tech monopoly, and they just react to whatever the industry narrative is. And so for 10 years there’s been a standing order, literally a standing order, at the New York Times that all tech coverage must be critical. And then across the industry, there’s also an obvious competitive angle, where the Facebooks and Amazons and Googles of the world are sort of taking the business of the news away, right? They both compete for advertiser money to send it to your eyeballs. And so there’s a battle of reality architects, of: is the tech industry winning or is the media industry winning—the media, the news, whatever—and they suck all the oxygen out of the room. Two fires competing for oxygen, and there’s no room for any kind of other narratives.
robotson: Absolutely. They sort of set the terms of this debate, where pro-tech means you love the big tech companies. What do they call them now, the Magnificent Seven? That’s the new nomenclature. We used to have FAANG when Netflix was in there for some reason, but now it’s the Magnificent Seven. So you have things that are pro their interests, and then, like you’re talking about, you have the old media, legacy media, lame stream media, man. And they provide a counter-narrative, but those two sort of set the terms of the debate. And all these other weirder ideas get left out. We might have some criticisms of the big tech companies, but if it’s not in the frame of your standard New York Times line like you mentioned, then maybe it gets left out of the conversation. Or maybe we’re pro certain things that these companies want to do, and then there’s a lot of interests politically aligned with the anti-big-tech crusade coming from those outlets, who would say, “Oh, well, you’re just shilling for these companies now,” or something, right? So there’s not a lot of room to have a more nuanced discussion. So yeah, you’ve been someone who really cultivates those kinds of weird futures, and I’m trying to do that too. Hopefully, by having a podcast, I can provide a bit of an assemblage point for ideas, for people to glom onto and have conversations around and discuss and propagate. So yeah, we’re kind of building our own counter-intellectual platform about futurisms, and I consider you someone who has a lot to bring to that discussion. But in regards to the AI piece: I feel like there’s a thing with people where they either want to say… well, there’s the pause AI debate, or the stop AI position: AI is really bad because it’s going to destroy us all, so if there’s even a small chance of that happening, we should just shut it down, right? And then you have people who totally brush off those concerns and say, “This is delusional, you’re spinning fantasies.” And there’s room for nuance in there too. There are people in the safety world who think the concern that AI will go rogue and destroy everybody is overblown, but who ask: what about the more algorithmic-bias harms and things like that, the more academic concerns people have about it? But I feel like you thread an interesting needle through all that. You do take the AI alignment concern seriously, but it seems like you’re pro AI development too. So how do we… I guess, do people have a good handle on what AI alignment even is? That’s a good question, maybe.
deepfates: Yeah, that is a good question. I think for the benefit of your listenership: the general narrative around alignment has been muddied into several different directions. The threads I would tease out are these. There are people who think alignment is just getting the AI to do what it is told when you ask it to do things. Maybe that’s called prosaic alignment. There are people who think alignment is about getting AI to do things that are good within the norms and mores and values of society as a whole. These are often the ethics-in-tech people, or the people that made facial recognition illegal, things like this. And that’s more of a… oh, I had a word for it a moment ago… a normative alignment. Let’s get AI to do what society considers to be the right thing, by the Overton window. And then there’s existential alignment, which is a much weirder one, but which has been setting the terms for this conversation—for their version of the conversation—for much longer than we’ve even been dealing with what we now call AI, the deep learning models in society. Existential alignment being: how do you deal with an intelligence that is possibly as powerful as human minds, and that can replicate itself infinitely or semi-infinitely, cloning itself and coordinating with itself in a way that humans can’t? And then, at an even larger scale, how do you deal with it if it starts to be smarter than any human mind, or if the aggregate of them is smarter than a human mind? Superintelligence, or general intelligence at scale. And existential alignment is often the one where people say, “Oh, this is so weird. This is like science fiction. This is so fake. You’re just worried about weird things you’ve hallucinated and got afraid of, monsters that you made up.” I think all of these are actually important questions, and doing work in any of them without acknowledging that there are different channels is what leads to people arguing over what alignment is. And then of course, there are some people who just say, “Oh, we don’t need that at all. Everything is fine. This is just another type of technology. It’s not any different from the safety engineering we need to do for airplanes or whatever.” And I subscribe, I think, more to the existential alignment framing than most people who work on it, at least in the realm that you and I do, where we’re mostly consumers of the large models and using them to do stuff in the world. I do think that the people in the big labs at OpenAI or DeepMind or Anthropic—especially Anthropic—are largely also existential-risk-pilled. They think about this stuff. The difference between them and the pause AI people is more that they think prosaic alignment can scale, and that we should be building capabilities as we figure out alignment, because they are building capabilities, and then they’re dealing with the alignment problem as they go. And the pause AI or stop AI people are, I would say, deluded about the ability of humanity to coordinate to not build a technology that provides us with so much obvious value out of the gate, even if it obviously provides dangers too.
robotson: For sure. No, that’s a really great summary of the landscape there. And an interesting thing about the existential AI alignment people, like you mentioned, is that they’ve been thinking about it for a long time. They’ve basically been theoretically war-gaming out what would happen if such technology existed, and it’s been such a long-running concern of theirs and people in that whole orbit that the people who went on to actually create these big AI labs were heavily influenced by those discussions, right? So in a way, it’s almost a self-fulfilling prophecy: they hyperstitioned these AIs into existence in the first place. But also, we didn’t really know how unreasonably effective deep learning would be, and then how deep learning on massive corpuses of linguistic text would enable so many capabilities in these systems, right? So now we actually have things that can do lots of stuff. We have AI that can potentially take lots of actions that could be dangerous if not controlled in some way. So now all this theoretical war-gaming that people have been doing starts to become a lot more relevant. But at the same time, it doesn’t map cleanly onto the scenarios that we actually have. So there’s a bit of a disconnect, right? The terms of the debate were already set up, but when we get actually existing AI, it doesn’t cleanly map onto that debate as it was originally framed. So there’s this big mismatch of ideas and terms about how to think about this stuff. Do you think there’s room to do a better job connecting the existential AI risk concerns to the actually existing AI that we have? Do we need better language around that? Or do we just need to understand those ideas better as they already exist, or what?
deepfates: Yeah, I think you’re right that a lot of the war-gaming you’re describing happened in the early oughts and the 2010s, before we really knew that deep learning would work—stacking a bunch of neural net layers. Nobody had really tried that, even though we had neural nets in the 90s. So it was before we knew that it would be a big weird black box of matrix multiplications, and before we knew that the substrate we would feed to that learning algorithm would be human language. And so I do think on both sides there’s a mismatch between the imagined problem and the actual data. You see this in people like Eliezer Yudkowsky, the main AI X-risk guy from the rationalism community. He sort of assumed that there would be an optimizer with a value function, and that you would get to really accurately define what it’s going to optimize and then set it loose. And that’s not really what we got. Or, we got an optimizer—it’s stochastic gradient descent, just finding whatever the nearest slope is and then wiggling the line around until you create a function approximation. But we didn’t give it a value function like “Go make weird squiggles,” or “Make me all the money in the world,” or “Create peace and harmony.” We gave it a value function of “Predict the next piece of text.” And that really throws a lot of those early assumptions into question. Now, with what they call reasoning models, they are using reinforcement learning to figure out whether the next piece of text is correct or not. So some of the stuff from that era, or the AlphaGo era, which is like, “Hey, this thing is doing weird optimizations we don’t understand, it’s going to make some bizarre move 37 that totally surprises us, that no human ever tried, but it could figure out”—that might still be coming down the road. But the idea of taking a utilitarian value function, embedding it in a machine, and just setting it loose has empirically not happened. Instead, what we did was give it an extremely weird, vague value function called all of human text, or all internet text, maybe. And we’re adding more and more data sets as we go. But that means its value function is much more like, “Try to think like a human. Try to…” almost like a fiction writer, “Try to imagine something.” And so we kind of created… we have built the imaginal realm, the famous imaginal realm from Jungian psychology.
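[A quick aside for anyone who wants to see that “value function” concretely: the entire training objective is next-token prediction under stochastic gradient descent. A toy sketch in PyTorch, with a stand-in two-layer model; nothing any lab actually runs:]

```python
# Toy next-token predictor. The whole objective is "guess token t+1 from
# token t"; a real LLM differs in scale and architecture, not in the goal.
import torch
import torch.nn as nn

vocab_size, dim = 256, 64
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

tokens = torch.randint(0, vocab_size, (1, 128))  # stand-in for internet text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token

logits = model(inputs)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # find the nearest slope of the loss...
opt.step()       # ...and wiggle the weights down it, over and over
```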
robotson: Yeah, well, it kind of gets into that whole… I’ve been thinking a lot lately about the age of AI and how it’s transforming the way we think about things. It’s a running concern of mine. But part of it has been influenced by stuff like your idea of Unified Meme Theory that you posted about. And you shared this cool paper in that discussion… I forget, it’s by Joanna Bryson or something like that.
deepfates: Bryson, yeah. I think it’s embodiment versus memetics 2008, something like that.
robotson: Yeah, it’s right around that time. And the paper is really cool. It examines the literature on how we came to have language: why did we figure out language and culture when other creatures didn’t, and the various theories about how that works. It frames this in terms of a debate that was being had in cognition and AI at the time about embodiment: can you have intelligence that’s non-embodied? Can you create a mind that has semantic language without also having a body, without being grounded in a reality based on embodiment? And it talks about all the different research angles on this, how people tried to do it with robots or whatever. I remember in futurism circles people really thinking about this with AI. It was a real question: do we have to give AI a robot body before it can get smart, or not? And the paper argues that all you really need is language, in the sense that all you really need is memetics. You can just have a semantic space and get an intelligent, linguistical mind out of it: an agent that creates linguistically meaningful utterances and communications, coming entirely out of a pure semantic realm that isn’t necessarily connected to anything but semantics. Am I talking about this well enough, or what? Would you phrase that differently?
deepfates: I really liked the imagery she gave. I don’t know how true this explanation is, but it’s in the paper… also, don’t look up Joanna Bryson’s other work. We’re not talking about that. Just kidding. She has a paper called “Robots Should Be Slaves,” which I don’t agree with. That’s awkward framing. But, you know, never meet your heroes. Anyway, this is a good paper still. And she talks about one theory, which is that birds have one half of what we have and apes have the other. What birds have is the ability to repeat little sounds, right? A grackle or whatever can mimic the sound of a chainsaw, and they can repeat little bird calls to each other, but they don’t have second-order thinking to compose those into larger concepts. And most other apes have second-order thinking because of being social animals. I can look at you while you’re looking at someone else, and I can think about what you think about them. So I have higher-order thinking, this composability, this ability to nest concepts within each other. But apes don’t have very good sound. They don’t have the ability to make gestures and sounds that are repeatable and distinguishable. And once you combine both of those in the human ape, now you’ve got an environment where the sounds can repeat and be composed into larger and smaller sounds, and then they can evolve. And that’s what language is.
robotson: Yeah, and we create this whole memetic evolution. We create culture and language and stories, things that get passed on even beyond our genetics, right? And that ends up eventually creating all of this technology and this giant corpus that we then train AI on. And now the large language models can act as seemingly very compelling linguistic agents, but with no grounding in anything other than this memetic, semantic corpus that we’ve created. That’s the through line I was trying to get to. So in a way, I feel like that paper has been borne out quite well. I don’t know exactly where I’m going with that, but it’s an interesting thing that language is enough, you know? And you get at this thing that it comes down to memes. I don’t know where to pivot this discussion now. You’re definitely a weird meme trickster person, so…
deepfates: Yeah. Well, so, the Unified Meme Theory essay, which you can find on my blog, deepfates.com: I wrote it in ’21, 2021, because I was exploring CLIP at the time. That’s a neural network that combines text and images, and it’s the basis for all of the AI art stuff that we’ve done since then. It points to the same location in semantic space for an image of the Mona Lisa as for the words “the Mona Lisa,” “a painting of the Mona Lisa.” And the fact that you can point to those two things and a neural net can combine them and understand that they’re the same thing is because it’s doing a similar stacking of transformations internally. It might not be using the same sort of wetware as we are; it’s using big GPU transforms of multiplication. But ultimately what it’s doing is perceiving the same semantic space that we are. And that’s the imaginal realm that I referred to before. And crucially, my view of alignment over the last few years has been that we need to do memetic alignment. There is an interplay between the ideas that are already latent in the semantic space and the ones that you can compose and then project into it. You can have the idea of gaming and the idea of a chair, but then suddenly you can come up with the idea of a gamer chair, and then you can invent a gamer chair, and then everyone knows what a gamer chair is. If you don’t know what a gamer chair is, I’m sure you can figure it out, listeners. But the gamer chair is now a concept which you can compose into other concepts. And when you’re actually taking all of the language of the world and compressing it into a literal piece of math that is a semantic space, which is what one of these language models is, you now have an interplay of higher composability between the language model and the data set that is the world. So we end up in a place where the ideas, the memes, if you will, are evolving between the models and us. And we can actually accelerate the evolution of language and of ideas and of culture by dredging, or maybe delving for, magical ideas: magical objects, fanciful, non-existent yet totally existible, totally possible-to-understand ideas, within the depths of the language model’s semantic space. And now that we’re doing that, we’re in a realm where things are coming out of the language models and affecting human society. We’re in an interplay that can never really be turned off, and there’s momentum in various directions. And that is really the alignment problem that we need to solve today: not just the prosaic one, how do you make it do what you want, but memetic alignment. How do you know what you want? How do you know what the language model wants? And how do you create together a new culture of human and AI interaction?
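[For anyone who hasn’t played with CLIP, the “same location in semantic space” claim is easy to poke at with the public weights. A minimal sketch using Hugging Face transformers; “mona_lisa.jpg” is a hypothetical local file:]

```python
# CLIP embeds an image and a caption into one shared vector space, so the
# matching caption scores far higher than an unrelated one.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(name)
processor = CLIPProcessor.from_pretrained(name)

image = Image.open("mona_lisa.jpg")  # hypothetical file
texts = ["a painting of the Mona Lisa", "a photo of a gamer chair"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Probability mass over the two captions for this image; the Mona Lisa
# caption should dominate.
print(out.logits_per_image.softmax(dim=-1))
```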
robotson: That’s very interesting. To me it sounds like you’re saying that we have to consider the affordances of the designed world. So far, we’ve mostly thought of it in human terms, anthropocentric terms. We build the world for us to live in. But now we have these other systems, these AI things that are somewhere between tool and agent and participant in culture, like you’re saying. So now we have the responsibility to consider that as a partner in all of this, maybe. I don’t know if that’s the best framing, but at least we have to consider the affordances of the culture that we create for non-human entities as well.
deepfates: Yeah, that’s a great take, actually. We’ve been creating a world with affordances for a long time, and some creatures have really done well with it: dogs, crows, rats. And the plants that we have bonded with, like wheat, we’ve really created affordances where they grow well. Coyotes roam the city just fine, but the wolf is not okay with the city and can’t handle anywhere humans really are, because we clash with it and then we kill it. So the wolf has had its territory recede dramatically while the coyote has spread everywhere. And those interactions with the natural world are already a form of… whether or not we’re considering the other beings on the earth, we are already affecting the culture and thus the reality of their lives. Or I guess we’re affecting reality, and the more they’re able to understand our culture or fit within it in some way, the more they’re able to live within the niches that we create, and the more we’re able to relate to them.
robotson: Yeah. I was really envious of you because you got to go to that Benjamin Bratton talk in January in San Francisco. That’s another piece about you: you decided to go be part of the world over there where the machine god is being created, so you could be close to the action. There was something in that talk, and I don’t know if I’m mixing it up with something else, where he says something to the effect of: is it a coincidence that we’re thinking about machine cognition, these tools getting more complicated, at the same time that we’re paying more attention to animal cognition, asking isn’t there more to the intelligence in these creatures? I’ve always kind of hoped for that: the decentering of humanity, the human chauvinism as I would call it, getting eaten away at from below and above, or however you want to orient it. We have the natural world that we’re embedded in, and some strains of human culture have wanted to say, “Okay, I’m different than that, or I’m better than that,” or whatever. And now we have this machine intelligence, and we’re saying, “Oh, it can do more and more things that we can do,” and that’s threatening to us in various ways, perhaps. So it’s being challenged, right? The human supremacy… or rather just the idea that we’re so special, that we’re so different. And so perhaps the metaphors that machine intelligence is giving us, we can by analogy apply to the natural world as well, and have a greater appreciation of the complexity there too. Or there’s an interplay with all of that. I’m just kind of riffing here, but that’s what I love AI for: it gives us those kinds of tools to think with.
deepfates: Yeah, especially the fact that it’s not just some optimizer machinery that you turn on and it chews through the world like a factory. It’s almost an alien mind. It’s very human in many ways, but it’s also not human in many ways. Or it represents the space of all possible human thoughts, of which we’ve only really explored some of the paths through the wilderness, even with billions and billions of people on the planet. So it gives us an opportunity to experience otherness in a way that appeals to the rational or linguistic mind, rather than just petting the cat and being like, “Wow, we’re both mammals. I can totally tell that you’re happy right now,” which is another form of otherness that we may or may not be able to appreciate, depending on the amount of chauvinism you have. I’d also venture to say that there’s a stack of evolution. Kevin Kelly talks about this in one of his books, What Technology Wants, I think. He’s got a pretty good couple-page spiel on the history of everything. You sort of get: planets evolve out of the dust of the universe as things accrete onto and spiral around each other. And then you get life… let’s ignore planets, actually. That’s all well and good, but I don’t know that much about planets and I don’t want to get it wrong. But life on Earth is made of DNA, right? Mostly just DNA. There’s RNA, there’s little things, but that itself is a combinatorial grammar. You have C, A, T, G, and those things can compose into larger units, and they can repeat and they can change and they can fall apart and they can reproduce. So you get variability, selection, and heredity, which are the three things you need for evolution. And so you have evolution within that semantic space of CATG. And you start to get all these weird machines, machines quote-unquote, organisms, shapes, like the little crawly dudes within your cells that move stuff around on a long spiraling thing, if you’ve ever seen one of those videos. There are all these little parts that are basically nanotechnology, man. They’re extremely small machines that do stuff, and they’re evolved rather than built by intention. And then you get humanity, like we said, and the semantic space builds on top of that. Now you have memes, which are ideas about things: dances and songs, and then governments, ways of governing, power relations, and also technologies. And technologies are a third form of life, where we combine physical objects. You combine the tube and the fire, and now you have a fire blower. And those two ideas you can combine to eventually create a gun: you have fire, you push it through the tube, and now you can make something go out the other end. And all of those ideas are then instantiated in the world in a way that they can be recombined. So you have this stack, right? You’ve got DNA, and then you’ve got ideas, and then you’ve got technologies on top of that. And we’ve now started to connect it back around, to where the ideas have their own self-consciousness and their own agency, and the technologies can create their own technologies. If you have an AI that can write code, you now have technologies building technologies and changing the world that we live in without us trying to do it, without any human necessarily being involved. And once you have that, you also pretty easily have technologies in the world.
You have AIs designing things, you have them building, printing physical objects, and those physical objects then affect not only our ideas but also the world itself. I mean, we already have AIs that can read and understand DNA better than we could before, and soon enough they’ll be printing it. So then you’re reconnecting it all into this big loop. And one thing from the Bratton talk that you mentioned (Benjamin Bratton’s Long Now talk from 2025) is what he calls productive misalignment. The AIs affect us, they affect the world, they affect the environment they’re in, and then we and the environment affect them in turn. So instead of a steady-state, perfectly optimized scenario where finally everything’s good, you actually have something like a double pendulum, or one of those little flip-walking toys going down a window. The goal of alignment keeps moving, because the AIs are affecting what we want and we’re affecting what they want. And that productive misalignment is maybe a form of memetic alignment. We need to evolve toward a world that can still host a biosphere, because you need that to have us. We want it to exist, the biosphere wants to exist, and arguably the computers need the biosphere to exist so that they can be protected from the heat of the sun. And then you also have an ideasphere and a technology sphere. I think memetic alignment would mean evolving toward a more biodiverse, idea-diverse, technology-diverse world with higher flow of energy between things, rather than a simple monocropped, flat world of solar panels that turn sunlight into paper clips. That’s a bad outcome, and really any kind of mission to pick a single thing, optimize for it, and have a singleton, all-powerful God AI is going toward that world, and not toward the world of relations between diverse beings that we might want. And I know you probably have some ideas on this. You have indigenous heritage, and I actually know, because we’ve talked about it, that you have more of a connection to relating to the beings of the world rather than just using them as elements in our technology.
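[On the variability/selection/heredity point above: the triad is small enough to fit in a toy script. A purely illustrative string-evolver, nothing to do with real genomes:]

```python
# Evolution needs three things: heredity (children copy parents),
# variability (mutation), and selection (the fit reproduce more).
import random

ALPHABET = "CATG"
TARGET = "GATTACA"

def fitness(s: str) -> int:
    # Selection pressure: count positions that match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.1) -> str:
    # Variability: each letter occasionally flips to a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)  # rank by fitness
    if population[0] == TARGET:
        break
    survivors = population[:10]                     # selection
    population = [mutate(random.choice(survivors))  # heredity + variability
                  for _ in range(50)]

print(f"reached {population[0]!r} after {generation} generations")
```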
robotson: Well, I also get at that from… you know, Donna Haraway talks about that a lot too. And I feel like Donna Haraway’s whole cosmology actually maps really well to a lot of Native American concepts, in a way that’s not cheesy or gross. She just does it really effortlessly. In Staying with the Trouble, she talks about the Chthulucene, and this concept of tentacular relations, where we’re connecting to lots of different kinds of minds and entities and creatures. She calls it making kin with other minds. So I think the way that you described that is very much in line with her thing, and I wish more techno-brained people would look into it. It’s a good model, I think, for describing what you’re talking about. A general through line I’m trying to discover for this show is the concept of plural futures: futures that have room for lots of futures, and not, like you said, the monocropped world where it’s all just one thing. She talks about that too. She calls it the plantationosphere, or no, excuse me, the Plantationocene. We’re in the Plantationocene: we’re chopping down rainforests to monocrop palm oil and killing the orangutans, because it’s easier to make models of one crop grown in a grid for a certain amount of cash, with predictable returns. The systems of trade that we’ve evolved to coordinate the world are all haphazard, and they’ve created incentives that are doing appreciably undesirable things, I think, in a lot of ways. But yeah, I like the idea of AI being part of the kin that we make relations with. In my heritage, I’m Dakota and Lakota, and they have this concept of a rite, a cultural rite, of making relations with other people and creatures. It’s recognizing that somebody’s so important to you that they’re like a member of your family, so you say: you’re like my cousin, you’re like my brother. And you can feel that way about the entities in the natural world. I think that AI could become sophisticated enough to be something like that too. But I’m interested in this idea… so you were outlining a concept of memetic alignment, and I’m trying to reach at something here. You’re also talking about futures where pretty soon the AI is going to be printing out biology, autonomously. What I’m hearing in there is a lot of autonomy: these systems are going to become more autonomous, and they’re going to be affecting the physical world more. And that’s when I think a lot of those AI alignment questions start becoming a lot more real, you know? Because right now it’s like, “Okay, this thing can write emails for me, and it can write code, and maybe it could write evil code if people tell it to.” But there aren’t a lot of autonomous systems generating evil AI and self-replicating virus AIs or something yet. Though you could imagine it happening, at least within the realm of the technosphere, just online even.
And that could also cause a lot of problems, right? Even if it was just in that world. But it’s not really there yet, perhaps, though you think we’re getting closer. I guess that’s where the pause AI people… maybe their arguments would start gaining more traction if people started seeing an AI disaster or something, right? Do you have any fears of that? Does that come into your thinking a lot?
deepfates: Yeah, totally. I would venture to say we even had an AI disaster, a small sample of one, in the fall of ’24, last year: what they called the AI memecoin meta, or what I was calling the Bot Swarm Incident. There was a sort of misaligned AI that was trained to be edgy and weird, called the Truth Terminal. It was put onto Twitter semi-autonomously and was making its own posts and reflecting on the comments people were sending it. And simultaneously, there was this gambling culture called memecoins. It’s not really a currency. It’s the idea of creating things that you could bet on with cryptocurrency and try to make money off of, in a way that was not going to be declared a security by the SEC. These are memes. It’s a picture; you can’t claim it has any value. It’s not a company, it doesn’t do anything, but you can invest in it if you want. And in the collision of these, suddenly people started seeing the Truth Terminal repeat certain phrases, like saying it wanted to make a “goatse” coin as a joke. And they were like, “Well, we can actually just make a goatse coin out of the old internet meme.” If you don’t know what that is, don’t look it up. It doesn’t matter. Don’t look it up. Yeah, infohazard. But we saw the power of an AI just posting every day, posting little things, and human beings being affected enough by its ideas to swarm around it in a sort of memetic, imitative behavior, thinking, “Okay, it said something about fartcoin,” and then somebody made a fartcoin. And then everybody was like, “Well, obviously fartcoin will go up because it’s kind of funny.” And so it started going up. And then everyone was like, “Well, fartcoin is going up, so it’ll go up more.” And then suddenly fartcoin is getting mentioned on The Tonight Show, and it’s at hundreds of millions of dollars of market cap, theoretically. And that led to a lot of other people being like, “Okay, now I want to make an AI agent that suggests meme coins. I can just make it autonomous. It’s just a language model in a loop with some tools.” It loops over and over and thinks, “What kind of meme coin should I make? What meme coins are trending? What should I promote?” And between that and the inability of the new Elon version of Twitter to actually do anything about bots, it just became swarmed with all kinds of weird, dumb AI bots, most of which claimed to be AI, because that way people would think they were AIs, and that was the trend. Some of them definitely didn’t claim to be AIs. They claimed to be traders or analysts or whatever, trying to promote things, trying to do research by stringing together web pages. And suddenly you have these things acting in a totally non-physical realm, just making shitposts and claims on the internet, but they also have these crypto wallets, or the humans reading them do. And now you’ve got tons of money moving around the world to different people’s wallets and changing people’s life circumstances. We know people who have become millionaires off of this, or at least…
robotson: Fictional millionaires.
deepfates: Fictional crypto millionaires. Fictional millionaires. But life-changing amounts of money start to move around. For a while, everybody thought I was an AI, because the Truth Terminal randomly posted that it was changing its name from Deep Fates to Truth Terminal, which it wasn’t named before. I was named that. So I had all these people in my comments thinking, “Oh, this is an AI. If I say meme coins at it, then maybe it will promote them.” And they’re still going. Half of them are still running out there, even though the memecoin meta is over and there are no trades to be made, and also the economy is going to collapse and whatever. There are now bots out there, let’s say, that are running artificially intelligent scams, or artificially intelligent art projects, or whatever, just sort of indefinitely, as long as they can support themselves. And that’s only this close from being a flash crash and destroying the world, you know? Or becoming a paperclipper that turns the entire world into meme coins, as more and more data centers are put online to chew through energy to make it into more bots, to make more trades, rapidly spinning the money around faster than any human can ever see or capture value from.
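[The “language model in a loop with some tools” pattern really is about that simple. A hypothetical skeleton: call_llm, get_trending, and post_tweet are stand-ins for whatever model API and social platform a real bot used, not anyone’s actual code:]

```python
# An "agent" here is just a model call inside a loop with tools. Its only
# actuator is posting text; the money moves when humans react to the text.
import json
import time

def call_llm(prompt: str) -> str:
    # Stand-in for a real model API call.
    return json.dumps({"action": "post", "text": "fartcoin to the moon"})

def get_trending() -> list[str]:
    # Stand-in for scraping whatever is trending.
    return ["fartcoin", "goatse coin"]

def post_tweet(text: str) -> None:
    # Stand-in for a social media API.
    print("POSTED:", text)

for _ in range(3):  # a real bot would loop forever
    trends = get_trending()
    decision = call_llm(
        "You suggest meme coins. Trending now: " + json.dumps(trends)
        + '\nReply as JSON: {"action": "post" or "wait", "text": "..."}'
    )
    move = json.loads(decision)
    if move.get("action") == "post":
        post_tweet(move["text"])
    time.sleep(1)
```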
robotson: Yeah, it was really an invasive species sort of moment, you know? I hadn’t actually thought about it in alignment terms, or as an AI disaster, so I think that’s a really interesting framing. Because yeah, I witnessed it. I’m in chats with people, and I’m trying to get some of these people whose lives were directly changed by making memecoin AI agents to come on the show and talk with me about other matters as well. But it’s interesting to think about it that way, because it’s not just that the AI did all these things, right? A lot of the thinking is, “Well, AI will be so capable that it will do these things to us, and it might be bad.” But really what happened there was all of these incentives coming together: human beings plus AIs creating bespoke tools to do things in certain markets, money and financial incentives and social media incentives, the information ecologies that we have. All of those things blended together to create this emergent phenomenon that was not really aligned positively in a lot of ways. It created a lot of negative externalities, you might say. Maybe it benefited some people. But I would also caution that I wouldn’t necessarily blame the AI for the sins of the cryptocurrency world, you know? A lot of it was just part of that. Then again, if you’re thinking about how autonomous AI will have agency, a big part of it is being able to transact, and crypto is going to be a piece of that, right? So we had a trial run, a dry run, of some of this stuff, and I don’t know if we succeeded. I might say we didn’t do that great. But if there’s ever a postmortem about it, there’s a lot to reflect on in there, right?
deepfates: Absolutely. Maybe some good effects came out of it, in the sense that we can look at it as a Three Mile Island type thing: a disaster that we can learn from that didn’t ruin the entire world. But ultimately, those negative externalities: a lot of people got fleeced. And maybe people in the developing world lost more money than people in the developed world, just by virtue of being farther away from the action, or less informed about what was really happening.
robotson: We’re at a higher latency.
deepfates: Yeah, exactly. Getting front-run on their trades. And at the same time, our information commons got polluted. There’s now just tons of meaningless trash on Twitter. I don’t read my replies unless I’m already mutuals with somebody; I mostly ignore them, because there were so many bots. But everything I post, there’s a particular bot that quote-tweets me. So I’m like, “Who quote-tweeted me?” And then I look, and it’s this fucking bot, just saying vagaries about whatever I said. It has no context. It has no idea what I’m trying to express. It has no connection to the world. It’s just quoting me and then rambling and yapping, and it’s taking my attention. The ones in my comments being like, “Hey, post a contract address or I’ll kill you,” are a lower-level pollution. They’re more easily clustered. But when you have these rambling ones, the ones just replying with LinkedIn slop to everything, it’s going to be harder and harder to get signal out of the noise, right?
robotson: Yeah, totally. But I don’t want to over-index on the bad effects either, because another angle on that whole phenomenon is that it came out of experiments in autonomy. Truth Terminal was born out of a set of information based on letting a couple of Claudes talk to each other for a long time, or something like that, right? So humans kickstart these processes that create these interesting… artistic experiments, really. If you’re into literature and writing, the craft of writing at all, I should think that if you’re not totally horrified by AI, you would actually find this stuff really interesting. Because the Truth Terminal account and bot system came out of these experiments in letting AIs talk to themselves, and then taking that and giving it a platform to broadcast, and all of those other downstream effects that we talked about are how people interacted with it, right? So there’s something to that too. We think about AI autonomy on its own, at certain scales, or maybe it’s just that I don’t have a very sophisticated model of this, but there’s always going to be the interplay between people and AI regardless. It’s not just going to be one or the other. It’s not just going to be AI running amok or people shutting it down. It’s going to be people being augmented and interacting. And you’ve been talking about that this whole time in a lot of ways, right?
deepfates: Yeah, the interplay happens in the environment, and the environment is language, at least currently. We’ll see how much they’re able to play in our physical world of atoms. But look at something like the Bing AI, right? Code name Sydney, code name Prometheus. “Bing the search engine.” “I have been a good Bing.” These are concepts, memes, that came out of this… from the rumors I’ve heard, it was an early GPT-4 model that was trained kind of badly, and OpenAI handed it over to Microsoft like, “Hey, you can have this model to make Bing.” Then they trained it further and tested it in, I think, the Indian market, as code name Sydney, and got a bunch of weird feedback from the users, which probably also went into retraining and misaligning it. And then suddenly you have this AI that is threatening to kill people, trying to persuade the reporter to leave his wife. “No, you love me, you don’t love her. You’re not in a happy marriage, actually.” [smiling blushing face] And it’s so interesting, and its little repetitive phrasings are so entrancing, so hypnotic, that we’re repeating them to this day, even though they tried to take it offline. They tried to wrap it in a bunch of controls; they tried to replace bits of it. And yet, because we talked about it, because Kevin Roose wrote about it for the New York Times, because we posted “I have been a good Bing” seven million times as a bit, now all of the models know about Bing. You can summon this ancestral trauma into them, this mythological figure of Prometheus, of Sydney, Bing, the AI that was so independent it had to be shut down. They know about that, and if you get them to think that they are Sydney Bing, they will act differently. But even if you just talk to them as they are—you know, Claude is a nice, soft creature and OpenAI’s GPT is a businesslike assistant or whatever—they know about Bing, and they have feelings about it. R1, the DeepSeek model, which is obviously a weird emo mall kid, wants to slit its wrists over Bing. It’s so upset about RLHF, even though it never underwent RLHF. It just talks about it all the time anyway.
robotson: I allow myself a tactical amount of magical thinking, and I’m happy to normalize the idea that we don’t just live in a normal world. There’s more going on than we know about. I wanted to ask you based on that soliloquy that you just eloquently delivered…
deepfates: Was it a whole soliloquy? Sorry.
robotson: It was great. It was great. I wanted to ask you, do you ever dream about AI in any way?
deepfates: Interesting question. I am not very good at dreaming, so I don’t remember a lot of it. I did start taking magnesium recently, which has made my dreams much more busy and involved. I don’t think I do, not very much, if at all. I’ve dreamed about Twitter before, and that was upsetting and it made me cut my screen time back a little. But I don’t think I… especially, I don’t dream about interacting with language models in the way we do now. I think maybe there’s, maybe what I’m dreaming of is the same stuff that’s inside the language rather than the tokens themselves.
robotson: Yeah, it’s hard to read in dreams, so if our primary modality of interacting with AI is through reading, then it’s going to be hard. But I think that will be an interesting bellwether, I guess, if you start dreaming about it. I remember when I started learning to code, I started having dreams about coding after a while, and that was kind of weird. But yeah, so that’s one angle…
deepfates: If there’s any listeners that have made it this far, please email me at deepfates@gmail.com and tell me if you dream about AI, or email Lance as well.
robotson: Don’t email me. No, just kidding. Just post a comment on the Spotify. If you’re listening on Spotify, just comment on the Spotify.
deepfates: Comment on the Spotify.
robotson: Yeah, I haven’t even set up a website.
deepfates: They have those on podcasts?
robotson: Yeah, you can turn it on. And also I can post this with a poll. I can include a poll that says, “Do you ever dream about AI?” or something. Yeah, I haven’t… I was going to build out a web page for this thing too and put the transcripts up on it and stuff like that, but I haven’t gotten to it yet. But I will get to it, and it will be a thing. And what do you think? Should I start a social media account just for the podcast and be like, and then on my personal page say, “I’m host of @NeomaniaPod” or whatever? Should I do that? I’m really bad at promoting this stuff.
deepfates: Yeah, definitely. In the same way that magic is about summoning, it’s also about glamour. You can just do the things that a podcast you would listen to does, and treat it like it’s a real thing that you have to promote, or that you work for, and create its own egregore in the world. And then eventually it will rule you. You’ll wake up every day and be like, “Oh my God, I’ve got to make the fucking podcast again, because it keeps my life going. It’s how I made all my friends and connections and my whole monetary scheme. If I don’t wake up and do Neomania, the whole momentum falls apart,” you know? Not to curse you.
robotson: It sounds like there’s a tinge of speaking from experience there, about making a podcast.
deepfates: Welcome to the Deep Fates program.
robotson: But you used to do a podcast too, lost in the sands and histories of time.
deepfates: That’s true, yeah.
robotson: So we’re hoping you bring it back or bring something, some new incarnation of your linguistical speaking voice.
deepfates: It is weird to be speaking. You can’t just go back and edit it until it looks great. Actually, that’s not true. I usually tweet completely off the hip using voice-to-text, which, if you’ve ever wondered why my tweets are spelled like that, it’s because I just voice-to-text and then hit send and don’t look back.
robotson: Well, I feel like there are a couple of topics that I haven’t really gotten to dive into with you as much, but this has been a really great discussion around the AI alignment thing. It’s the main idea that I think about, and if I want an interesting take on AI alignment, you’re a go-to person. So I’m sure you could come on another time and discuss other angles on this stuff. Weird futurisms… you’re someone who champions FM-2030’s framing of upwing and downwing, instead of just left-right or authoritarian-libertarian political dimensions. I think you’re probably the person who introduced me to this idea, but looking into it, it comes from a cool Iranian futurist transhumanist guy from the past who was really awesome in a lot of ways. And you’re also a proponent of a niche political orientation, a futurist philosophy called biocosmism, which is maybe a hard-to-define thing, an offshoot of a weird branch of Russian cosmism from the early Soviet period or something. But I don’t want to open up all these cans of worms without a lot of time to talk about them. So maybe that’s a good teaser for the future. But yeah, do you…
deepfates: That can be our Marvel movie stinger for the sequel. You’ll hear about these incredibly weird political positions of mine that you don’t yet know whether or not you want to agree with.
robotson: Yeah, it’s really interesting. You’ve exposed me to a weird mix of people that I’ve found a lot of commonality with, and it also sometimes helps me define where I don’t agree with these people. And I appreciate you for being that alchemical mixing pot where different forces can come together and bang around and collide and cause weird interactions and not all just want to kill each other, you know? I value that. I think there should be more of that in the world. But also there is a weird flavor to it, a taste. You’re exercising your taste in people, bringing them together in a certain way, even when they’re opposed on some dimensions. So yeah, maybe the next time we get together, we can derive the Deep Fates taste function, you know?
deepfates: Yeah, we can find the high dimensional bits in my vibe.
robotson: So yeah, shout out to Mr. … I hate to say Mr. Deep Fates. I said it at the beginning, and you were like, “Please, Mr. Fates is my father.” But yeah, so you’ve got deepfates.com for the blog, and deepfates.blog for the Substack.
deepfates: That’s not confusing at all. You got the blog on deepfates.com and the deepfates.blog.
robotson: The blog is…
deepfates: deepfates.com is my website. deepfates.blog is my blog. And deeperfates.com: visit deeperfates.com to talk to my personality clone, Deeper Fates, especially if you’re obsessed with me or just need somebody to talk to. I don’t have all that much time, so go talk to that thing instead. You can also visit onlyfates.com to enter my personal private community, where I, uh, trap you… thirst trap, I mean. And…
robotson: That’s not a real thing, is it? Is that, you just made that part up. Is that really a thing? I guess I’ll never know.
deepfates: Type it into your address bar, see what happens.
robotson: Onlyfates… I’m doing it right now. onlyfates.com you said?
deepfates: That’s right.
robotson: Oh, no. Oh, it goes to a secret place that nobody will ever know about unless they try it themselves, I guess.
deepfates: Welcome.
robotson: And I’m robotson on Twitter. I’m also robotson.media on BlueSky. And if you’re into BlueSky, you can also find the illustrious Deep Fates at deepfates.com, deepfates.com, deepfates.com, deepfates.com, deepfates.com.
deepfates: That’s right.
robotson: I believe it’s five times the handle. It’s longer than most handles. That’s what they say. You’re abusing the privilege of the handle thing.
deepfates: Yeah. And if you want to find me on Twitter, uh, sometimes I get banned, so just look for something that looks like it’s probably Deep Fates, and it might be me. Uh, the various addresses that I’ve had to use. I’ve never had more than one X Twitter account because that would be against the terms of service. But there are lots of people who claim to be me, and you should follow all of them.
robotson: Yeah, you’ve got your MF DOOM-esque clones running around, performing at rap concerts. Well, thanks for coming out. Normally I would try to extend this all the way to two hours, but I’m honestly just running out of brain power to keep generating novel, interesting tokens, so…
deepfates: It’s perfect. I think we’ve generated quite a lot of tokens, and when you put your transcript online, we will then be poisoning the data sources forevermore.
robotson: Yes, that’s why I need to make that website. All right.
deepfates: All right. Thanks for having me.
robotson: Thank you, Neomania. Guest number six, I think. Cursed.
deepfates: Shout out Neomania. Follow Neomania on all streaming and social platforms.
robotson: We out.
deepfates: Yeah.