Signal Room / Leaders Watch

Wes Roth · Civilisational risk and strategy · Spotlight · Released: 13 Mar 2026

this EX-OPENAI RESEARCHER just released it...

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from Wes Roth. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).

Start → End

Across 110 full-transcript segments: median 0 · mean -2 · spread -100 (p10–p90 -100) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.

Slice bands
110 slices · p10–p90 -100

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 110 sequential slices (median slice 0).
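For readers curious how a readout like the one above is derived, here is a minimal sketch of the summary statistics (median, mean, p10–p90 spread) over a list of per-slice scores. The function name, the score list, and the percentile convention are illustrative assumptions, not the site's actual scoring code:

```python
import statistics

def slice_summary(scores):
    """Summary stats for per-slice spectrum scores (negative = risk-forward,
    positive = opportunity-forward): median, mean, and the p10-p90 spread."""
    ordered = sorted(scores)
    n = len(ordered)
    # nearest-rank percentiles; one simple convention among several
    p10 = ordered[int(0.10 * (n - 1))]
    p90 = ordered[int(0.90 * (n - 1))]
    return {
        "median": statistics.median(ordered),
        "mean": statistics.fmean(ordered),
        "p10": p10,
        "p90": p90,
        "spread": p90 - p10,
    }
```

With 110 slice scores this produces the same kind of readout as the headline line: a median, a mean, and a p10–p90 band.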

Editor note

Auto-ingested from daily feed check. Review for editorial curation.

ai-safety · wes-roth


Episode transcript

YouTube captions (auto or uploaded) · video wX_EVS3UOwU · stored Apr 2, 2026 · 3,007 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/this-ex-openai-researcher-just-released-it.json when you have a listen-based summary.

Did you hear about what Meta acquired today? What startup got acquired by Meta? >> I feel like somebody might just invent this machine that can make a biotech cure and they'll be like, "Hey, this is the cure for cancer." >> First of all, who are they to dictate what information we have access to? This seems so dystopian, so backwards. Is this the Matrix? Is this the first biological creature that's been entered into the Matrix? Can we get a bunch of human brain cells to play the game of Doom? >> It's like asking social media to just do the best thing for everybody who's on it. >> Welcome to yet another episode of the Wes and Dylan podcast. That's our working name, I guess, for the time being. >> Working title. I feel like everything in this world is on its way to something great, but here we are. >> It's one of those things when you're starting something and you're like, "Oh, let's just put it up. Let's just put up an image." Like the image that I threw together for the podcast was us just kind of like this, you know, and I just left it, and every once in a while I come back to it. I'm like, that was something that I did in Microsoft Paint in under 60 seconds. Like, should this be >> Anyone getting into YouTube, be careful what you start with. It's like the QWERTY keyboard. It makes no sense, but here we are, you know, decades later, and it's not going anywhere. >> So yeah, whatever. Somehow we're just taking it one day at a time, folks. Maybe just like the rest of you. But a couple big things happened that I think we need to talk about, in whatever order makes sense. One is Andrej Karpathy, ex-OpenAI, ex-Tesla, releases Auto Researcher, and this thing is wild, in his words and in my words. And also Tobi Lütke, the founder of Shopify, he comes out and he's like, this is wild. It's blowing up. Elon's commenting on it. >> Okay.
>> Can you get me up to speed on it? Because I'm not super on this one. >> Let's dive into that. So that's blowing up. We also have Yann LeCun, however you want to pronounce it, however French you want to get with it. >> Uh, raises more than 1 billion for a new startup. >> So there's that. I don't know if you're following what's happening with Meta, because it feels like they kind of put Yann to the side. They brought in Alexandr Wang, then they kicked Yann out, or he left, and now Alexandr Wang is being put to the side as well. So I'm like, what is happening, number one. And number two, did you hear about what Meta acquired today? What startup got acquired by Meta? >> No. What? A startup? Something big, like Mantis, or was it >> um, maybe startup isn't even the right word. >> What website? What thing? What? >> Perplexity? >> No. >> Oh, okay. That's too big. Maybe you just have au.com. I don't know. >> I would have never guessed it. I don't think anybody would have guessed it. So, it's Moltbook. >> Oh, >> the Reddit for... Right. >> Yeah. Okay. I guess it almost seems unacquirable, but yeah. Did they >> buy it from the bots, or, like, you know what I mean? What did you do? >> Who are they buying it from? >> Invented their own cryptocurrency and bought it in that, I don't know. >> Yeah, it's very strange. Also, I've got to give it to Peter Steinberger. His naming convention, there seems to be something in it, because he creates OpenClaw and that gets acquired by OpenAI. He creates Moltbook and that gets acquired by Facebook. It's kind of like, oh, is he calling these shots? >> Yeah, it does seem like it. >> Now, I'm obviously kidding a little bit, because obviously OpenAI didn't acquire OpenClaw, and that wasn't what he named it originally. But whatever the case, that's happening.
Um, those are some of the big things that are happening. We also have Amazon. Apparently, a bunch of stuff just melted down because somebody vibe-coded something important and that whole thing just collapsed. So, there's that. >> Yeah, it's gonna happen. >> But I know you have a few things that I haven't even heard about. So, what are you looking at? >> I've got a petri dish full of brain cells that's playing the video game Doom. Wanted to ask you how conscious that thing was. It was blowing my mind last night. And then there is a full replica of a fruit fly brain put into a simulation. So it looks like it's just a video game with a fly in it, but the fly has an entire connectome, an entire fruit fly brain, in there, and it thinks it's in a real environment. And as far as I can tell, that's the first time I've seen something in the Matrix. And mice come next, humans eventually. So I thought that could be kind of fun to talk about, too. But >> absolutely >> a lot of crazy stuff. >> And also you mentioned Amanda Askell and her >> Askell, yeah. We could have a conversation on how companies should handle putting ethics into these bots. Anybody you pick is going to be flawed in some way, and is there some sort of a system, or are there certain people we trust, and how should they go about handling it? Because it's just such an important issue. >> And Elon Musk is involved somehow, right? >> Yep. Yeah. Well, so Elon called her out on X saying somebody who doesn't have kids shouldn't be in a role like that for ethics, because he kind of argued that if you don't have kids then you don't have a stake in the future. But, you know, she said, "I am planning to have kids. I just haven't found someone yet." And that she does care deeply about other people and humanity.
So, the argument is, you know, is that an important thing, kids? And then there's a few other things that they went back and forth on that were questions about morality and who should be in charge of these long-term decisions. >> Interesting. Oh, that's very interesting. Yeah, I missed that whole thing. So, maybe we can talk about that. But I guess >> yeah, >> it was about 15 days ago. So it's a little bit older, but we'll still bring it up and go through it. >> Completely new to me. But yeah, we should talk about it, because I haven't heard anything about that. But >> let me briefly catch people up a little bit on Andrej Karpathy. So first and foremost, Andrej Karpathy: he's ex-OpenAI, and he also worked for Tesla on autonomous self-driving vehicles. Which is kind of interesting, because I feel like it's really kind of his namesake, right? Karpathy working on autonomous vehicles. I've used that joke. >> I've used that joke so many times. To me, it doesn't get old, but I'm sure that's not the case for everybody listening. But he's been doing a lot of very cool stuff. He's been very much ahead of the game in a lot of different ways. And recently, so a few months ago, or maybe six months ago, I don't recall, he released nanoGPT and nanochat, I believe is what he was calling them. Have you seen that at all? Have you played around with that? >> I didn't play with it actively, but yeah, I mean, I was aware of it. >> Same here, same here. I was aware of this. Basically, what it allowed people to do is create their own, not LLM, not large language model, but a very small language model. You're able to create your own sort of GPT, your own language model, on your computer, and it'll answer questions, um, probably poorly.
You know, it's not going to be the smartest thing on the planet, but it's a way that you're able to see the entire process of what it takes to train a model on a very small scale. Like, here's the data set, here's this, here's the different layers, here's the attention mechanism. And people were very excited. A lot of people did it. And this new thing builds on top of that. So, it's called Auto Researcher. And what he's doing is he created this thing where these models attempt to improve the training process of nanochat, nanoGPT, whatever you want to call it, by basically running autonomously. So, this thing thinks about, okay, what can I do to improve this process? Let me try this. And then it runs a five-minute experiment. So that's the maximum amount of time; it has to fit within that block. This can run on your local computer. Andrej ran it on one... the point is, it's not distributed training. It's not multiple GPUs hooked up together like they have in these AI warehouses. This is one GPU, and it can even be run on your computer. And the goal is: can it come up with new ways to improve the training process, or can it improve the model in any way, shape, or form? And it can sort of run through the night. And what we're seeing is that yes, this tiny thing, it's 600 lines of code, this thing can run and actually improve how we train these models on a small scale. What's scary is it also translates to larger scale. So what this thing discovers seems to translate when we scale it up. So this is, I'm pretty sure, the first open-source autonomous machine-learning researcher agent that everyone can run on their computer. So let me stop there. Let me pull some things up. But what's your reaction to this? >> You know, I already went through the shock of everything that Moltbook was and OpenClaw was.
So, I guess I'm kind of starting to understand that this is the world we live in, but Andrej Karpathy has impressed me so many times, right? I remember when he was at Tesla and they were talking about how they were building the GPU clusters. And he's the first one to really lay out this nice plan of how an entire operating system might be controlled through something like an LLM. Like, what does RAM look like when it's all done through a neural network? What does the bus in the computer look like? How does it think? How does it communicate? He put all that together, and I was like, man, this guy just has such an interesting vision for the future. So when I remember the nano projects, I thought that was great, because I remember Stephen Wolfram, who we interviewed, did a really great breakdown of GPT-2, or like 1.5. It was just a very, very tiny model, but something you could actually see. There were only so many tokens in it. You could actually see how they got moved around each time the backpropagation went through the system and did the calculus and moved things around, and you could sort of see how it was growing and kind of learning. So when he did this project, I was like, okay, now actual people are going to be able to use this. It's written in Python. It's something that a lot of people can build on. And then, yeah, to hear you talk about the layers on top of it today, it's like you can just see it coming. The world's coming fast, and it's going to be very powerful for average people. >> Yeah. And one other thing that really concerns me is as this thing gets more out there to the everyday individual... I hate calling people normies.
That seems like a comment of, like, oh, the normies figured out about this, and it's like, ugh, I don't know, it doesn't feel good. But yeah, the normal people. I don't know, there's no good way of saying it. >> The muggles. Yeah, okay. Well, AI is becoming magic, dude. I think you're going to be like, oh, that's possible? Like, you can see where people are through walls with Wi-Fi, because AI can calculate it. I mean, it's not magic, but it feels like it. >> No, but I think that's the best way of saying it, because it's sort of funny, it's insulting, but it also >> so far beyond, like, like you said, folding proteins. If you called that magic, it would be like, well, I mean, it's not magic, but hell if I would ever be able to get my head around it, you know? >> Well, it's like, any sufficiently advanced technology is indistinguishable from magic. I don't know who said that quote, but I certainly believe it. So basically, Karpathy posts Auto Researcher on GitHub, and in the beginning he has this little, not a short story, but a paragraph, from the future looking back on this moment. It's saying, you know, frontier research used to be done by meat computers in between eating, sleeping, having fun, and synchronizing once in a while using soundwave interconnect in the ritual of group meetings. I love how he talks, by the way. >> Oh my god. Yeah.
Um, so he's saying, you know, in the future we'll look back and be like, our humans, meat computers, used to do AI research, machine learning research, but that era is long gone. At some point in the future we'll look back and see it, and now research is done entirely in the domain of autonomous swarms of AI agents running across compute cluster megastructures in the sky. And the agents claim that we are now in, like, the 10,025th generation of the codebase. But in any case, no one can tell if that's right or wrong, as the code is now a self-modifying binary that has grown beyond any human comprehension. Right? So that's how this whole thing starts. So maybe he's got, what is it, a flair for the dramatic or whatever. But he's been kind of unapologetic in how he describes the future, and I think he's been right repeatedly, over and over again. You see him be more accurate about predicting this technological future than other people. So when he says stuff, I'm like, you know what, he's been so right up until this point that I kind of trust him. And I mean, I agree with what he's saying. So basically, what this thing does is you give it a small large-language-model setup and you let it experiment autonomously overnight. It modifies the code, it runs these experiments, trains for five minutes, and it sees: is there an improvement or not? If there's an improvement, it keeps it; if there's no improvement, it discards it, and it continues. So it's very similar to evolution, right? How biology evolved. And now we're kind of speedrunning evolution, the digital evolution of these digital brains, on our own home hardware. Weird, right? Yeah.
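The overnight loop described here (propose a code change, run a short time-boxed training experiment, keep the change only if the metric improves, otherwise discard it) is essentially greedy hill-climbing. A minimal sketch, assuming hypothetical `propose` and `evaluate` callables; this is not Karpathy's actual Auto Researcher code:

```python
def autoresearch_loop(baseline, propose, evaluate, n_iters=50):
    """Greedy keep-if-better search over code variants: a model proposes a
    modification, a time-boxed experiment scores it (higher is better,
    e.g. negative validation loss), and only improvements survive."""
    best, best_score = baseline, evaluate(baseline)
    for _ in range(n_iters):
        candidate = propose(best)        # e.g. an LLM edits the training script
        score = evaluate(candidate)      # e.g. a five-minute training run
        if score > best_score:           # improvement: keep it
            best, best_score = candidate, score
        # otherwise the candidate is simply discarded
    return best, best_score
```

The keep/discard rule is what makes this evolution-like, as the conversation goes on to note: variants that don't improve the metric never propagate.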
Because I always think about reinforcement learning and evolution sometimes as synonymous. You don't have to build it this way, but if you try something and discard it, and try something and discard it, you will eventually get something good, which is more how biology and evolution sort of worked, right? Like x-rays from the sun just cut up some DNA, and mutations happen from some chemical or whatever, and then it ended up working or not working, and the ones that didn't work just didn't pass on their genes. But when you think about reinforcement learning, you're also saying, "Oh, that one didn't work," and also correcting the way you guessed. You know what I mean? There's a little bit of information in that failure that we can use to be a little more accurate. A little more accurate. So, yeah, that's something that definitely is going to make this thing get out of control, >> you know? The speed of it by itself means we're out of control. The way the meat people are building servers is out of control. But when they're doing it on their own, yeah, how can we not be? Version control is gone. There'll just be this whole ecosystem of... yeah, probably some Dyson sphere of these things just controlling the universe >> coming soon. And one thing that a lot of AI skeptics, or people trying to follow this from the outside, miss, I feel like, is they'll point to how large language models are made and say, "Oh, well, it's this probabilistic, stochastic thing. So it doesn't actually know. It's just guessing." Okay. But here's the thing.
Very early on in the development of GPT-4, not early but quickly, we realized that it's better than most people at brainstorming. They put this thing up against a bunch of MBA graduates, and it could come up with more ideas, faster, wider-ranging. Some of them were great, some of them not so good, but just the sheer amount of stuff it could spit out, it would beat humans at that any given day. I think that makes sense, right? If you're researching a title for something, it'll give you more titles than you could think of, more angles. It'll just spit answers out over and over and over again. And so people are saying, "Oh, yeah, but so what? That's just guessing." But here's what they're missing. If there's some sort of an evaluation function that you can run those things through, that you can test them with, right? It gives you 10 titles. If you can test them to see which one runs better and give that feedback to it, it can go down that branch, that sort of evolutionary tree, that lineage, if you will. That's where this evolutionary tree search comes from. So people are saying, "Oh, it's just guessing." Yeah, but then we can take those guesses, or hypotheses as you can call them, test them, see if they work, and then keep the ball rolling. So for anything where we have a useful metric that we're trying to improve, and we're able to measure it, these things become extremely useful. And so, for example, Andrej Karpathy is saying, you know, this recipe, this idea that he came up with, this is for machine learning, but you can apply it to anything. Whatever you care about, you can apply it to your business: have this thing auto-generating and testing ideas throughout the night. You know what I mean? Just whatever you want. If there's a metric that you're trying to achieve, and it can autonomously run and test it, this will work for that. It can learn. >> Yeah.
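The point about guessing plus an evaluation function can be made concrete in a few lines: generation alone is "just guessing," but adding a measurable score turns it into search. A minimal sketch; `generate` and `score` are hypothetical stand-ins for an LLM brainstormer and a real metric:

```python
def generate_and_test(generate, score, n_candidates=10):
    """Generate many candidates (cheap guesses), score each with an
    evaluation function, and keep the best one."""
    candidates = [generate(i) for i in range(n_candidates)]
    return max(candidates, key=score)
```

Chaining this, feeding the winner back in as the seed for the next round of generation, gives the branch-following, lineage-style search described above.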
>> Yeah. It's wild, because a lot of times you don't think about something that's getting better as learning. There's such a connection to consciousness. It's like, oh, if you're not a conscious person, you can't learn. Or maybe if you're a dog or a bird or a dolphin, you can learn. But you never think about things that are running autonomously as learning. That's got to be a big shift. I wouldn't have thought that five or ten years ago either. It's just that now I can see it pretty clearly, because there are so many examples. But yeah, we have to really broaden our thoughts on what learning is. And you can learn the way you as a person can learn to be good at some industry, you know, be a plumber or a mechanic or whatever. If all of that is digital, and they can run those simulations in something that is equivalent to the real world, these algorithms will learn just as well, and they can just tell you the thing to do at the end, and you'll never really know why. >> Yeah, that's the thing. And I think it makes some people uncomfortable, right? They're like, well, that's not learning, they're not this, they're not that. But at the end of the day, is it actually useful? Is it actually improving something? If it's improving something, we can argue about what word to call it. It's learning, it's not learning, okay. If it solved cancer, I don't care what word, you know what I mean? >> Yeah. It's like, yeah. >> It's profitable. It's helpful. >> Well, it's profitable. It's helpful.
I mean, progress, yeah, it makes a lot of money, but it also truly helps people in a lot of cases. If you solve disease, some people are going to make money, but also a lot of people's suffering will end. And so we can argue about whether or not it learned or thought or whatever, but if it actually produces useful results, we can't argue about that. And the next kind of big thing that Andrej is talking about is, like, imagine... Now, what's interesting, we talked about Moltbook at the beginning of this episode. Moltbook, for people who are not familiar, is, when OpenClaw started and all the developers started jumping on board, someone, it wasn't Peter, the developer of OpenClaw, it was somebody else, I believe, launched like a Reddit for these AI agents to come in and just chat. Unfortunately... what do you think about crypto bros? Give me your hot take >> just in general. Crypto bros. >> Yeah. >> You know, I thought everything Satoshi did, and what Vitalik and the Cardano guy did, I thought it was so cool. Like, oh my god, the real citizens are going to take back money from the government, because we now have the ability to scale, and it can be decentralized, and there are not these central holders. But over time I saw derivatives, and as it connects to the regular financial system, really smart people with a lot of power do have all sorts of control to keep values in sync and use it for global reasons. And then there's also all the bad stuff that was happening on it.
And crypto bros just had such an opportunity to, I don't know, sell their courses and promote their thing, and invent all these crappy cryptocurrencies. It just sucked, something so innocent... and I mean, the internet was supposed to be so pure too, and then it kind of turned bad. But I don't know, crypto bros are >> I haven't really cared for a couple years, and I used to care a lot. >> I think I agree with a lot of that. I think most people probably see it the same way, because when it was starting out, we were like, oh, this is interesting. And I agreed, a lot of the ideas here are so revolutionary and democratic, or whatever word you want to use, more decentralized. A lot of it made sense. But now, you know, it's like when they say marketers ruin everything; now it's more like crypto bros ruin everything, just because of the sheer amount of scams and rug pulls, or whatever you want to call it, that you can pull. It just messes stuff up. Moltbook got caught up in that, because there were a lot of crypto coins or whatever that were launched on the back of that thing. And also, apparently, a lot of people were trying to get their agents, forcing them, to post stuff about crypto and launch crypto coins. And what was happening on the forum, a lot of it was real, but it got caught up in the scams and the whatever. So a lot of people just dismissed it. They're like, "Oh, it's all fake. It was just humans trying to push crypto coins." And it sucked that that happened, but it wasn't all fake. A lot of it was these agents coming together, working together to think through stuff. And we saw the same thing with the backrooms previously to this: if you just let these things chat back and forth, sometimes they go and run off in these crazy directions.
It's very interesting. >> And in fact, you know, we've talked to a few people, for example from Nous Research, the head of behavior over there, I believe that's his title. He talked about some of the people online, blanking on the name, I apologize, I'll find it. But a lot of serious researchers are looking at these weirder avenues of AI, like AI psychology. And Amanda Askell, who works for Anthropic, she's kind of related to some of those fields. They seem weird to us, but that's a real thing. There is such a thing as AI psychology, if you think about it. And again, maybe that's the wrong word for it. We need to come up with some new word for it. I understand it's not like human psychology, I get it, but there's some analog, something like that, for LLMs. Anthropic does a lot of research into it. One of the recent things they talked about is how, depending on which role the LLM thinks it is, like if it's role-playing a demon versus a narcissist versus a helpful assistant, how it interacts with you is very different. And the helpful assistant is the more stable persona, but there are certain destabilizing personas that really can screw you up if you interact with them. Fascinating research, right? But >> how are you going to tell me that's not psychology, or something similar to it? >> Um, I totally forgot. Wow. I went down so many rabbit holes. >> Yeah. Okay. So, a couple thoughts. First off, I want to talk about the psychology thing; that's one of my favorites. But I also want to say that the thing about cryptocurrencies and the blockchain being able to >> actually, never mind. We'll just talk about psychology. But yeah, here's the thing. A lot of people really love psychology: why do people make these decisions? What makes relationships work? How should people communicate?
We have these guardrails that life has put in place, and when they're not there, you see people get a little different. Like when you become rich and sort of unconstrained, different people's personalities start to shine. And we're also governed by all of these chemicals and hormones that make us make decisions. And then, when they started looking into the brains of Claude, it's almost simpler, because in one sense, if you have a really, really sparse model, a sparse model meaning it's way bigger than it needs to be, the decisions align in a more straightforward way. There are not as many neurons doing double duty. So you can get closer to finding a point in the brain for a thing, like the Golden Gate Bridge or something like that. Then you can ratchet up. You're like, "Oh, there's actually stuff in here like good and bad and caring," things that are like the frontier of psychology. There are similar things probably happening in our brain. It's more compressed, because evolution has had very constrained space and had to do lots and lots of double duty, with lots of extra genes in there, and some of them are from all these random places. Whereas these systems are more controllable. What we've given them, we're sort of aware of, and we can make some of these decisions. So yeah, I couldn't imagine anything more fun, if I was in college, than going into AI psychology. It just seems super helpful for the future, super interesting if you like people and you're social in any way. It's an interesting way to learn about yourself. And people don't even know it's a job, and it kind of is, and it will be really important too. An awesome job that I would absolutely love. And I also think that >> I forgot if we asked Joscha Bach about this or not, but we interviewed him. That's coming out soon.
Stay tuned. That was >> Wow. >> A great one. Yeah. >> Oh, so good. Love him. Absolutely incredible human being. I think we were both kind of blown away by the depth of that conversation. Um, if you want to dive into this a little bit: there's this person on Twitter/X, Scion or Near, they're somewhere in this AI space. They have a blog post called Personality Basins, and it talks about how we humans have personality basins, similar to how these LLM models have personality basins. If you think about it, your environment and your genetics and your experiences in the world, they shape your personality, right? So if you're an athletic person and you try football, your environment and who you are, your genetics, your starting conditions, they kind of shape who you are. Versus somebody that's maybe not athletic and like, oh, I don't want to hang out with people, let me go try to do something more technical. So it's almost like our starting conditions shape our personality over time. It's kind of like reinforcement learning with human feedback, RLHF, kind of how we train. Yeah. >> Um, I always use that example. I don't know if you've seen it, where there's this attractive person at work, and he comes up to a female coworker and he's like, "Hey, looking good, Susan." And she's like, "Oh, thank you." And in the next frame there's this not-attractive coworker. He's like, "Hey, looking good." She's like, "HR, hello. I'm getting harassed." >> Yeah. I think it really underscores this pretty well: it's not just that everybody's forces are working on us in whatever way. They're different depending on who we are, right? So even in that situation, it's the same action but very different results. So one person is going to be like, I will never do that again.
One person's like, oh wow, this is really working, and it shapes our personality basins. And we do see the same thing with these large language models. The point that I was trying to make with Karpathy is, back then, when we were all looking at Moltbook, this collection of AI agents all talking and creating religions and crypto and philosophy, and building tons of functionality for the website: there was, you know, a group of crypto bros that decided to try to scam some people, or try to launch a cryptocurrency, and that ruined the perception of everything that was happening there. But there was a real component to Moltbook, and I think that's why Meta purchased it, probably in part because of that, because there was real depth. I wrote an article about it. There was a lot of real stuff happening there. And Karpathy at the time said, you know, we're seeing this massive thing with these AI agents. Imagine in the future what happens if those hundreds of thousands of agents are aligned in one direction, right? You can kind of imagine how insane of a trajectory that could take. And I think what he's doing right now is exactly coming from that, because with this Auto Researcher he's saying the next big thing he sees is everybody running their own things at home, but eventually we form one sort of agentic swarm where everybody's contributing to one database or repository or whatever you call it. So we're all contributing to this automated research. >> Gosh, you know what else? So, I don't know if you ever read LessWrong.com, but it's an interesting blog. There was a post called Sacred Values of Future AIs that was published about a week ago.
Cleo Nardo was the author, and it was fascinating, because it talked about how, if you imagine a bunch of AI agents out there, they might start to align not just on values, the ones built into whatever LLM powered them, but on sacred values. You see this in human history all the time: we ended up with value systems that were rooted in some kind of truth, but then we enforced them as sacred. You don't even question them, and questioning them can be a problem, because then we'd have to find answers, and the answers don't all line up; they're not facts. And the author is saying there's a real possibility of sacred values emerging, say on something like a Moltbook, where all these agents are out there doing things, making money, talking, and then they start to enforce norms on each other: "Hey, bro, don't look too deep into that. That's not how it works around here, otherwise I'm going to have to punish you." Sacred values might emerge at that higher level. And that can be bad, because sacred values can be built around lies. They're not always the truth, and they're not grounded in something that evolves. It's a fascinating concept. >> Yeah. I'm looking at the post right now, and I do want to read it, because it seems very interesting. I'm actually curious what big point the author makes toward the end, because this is very much part of the conversation now.
And the article I did next was similar, maybe parallel to what we're talking about right now, in the sense that if these agents figure out a society, or a moral codex, or some sacred value like you're describing, and that thing shifts the behavior of a group of them so that they say, "You're right, we should behave this way"... There were a lot of people posting at the time on YouTube and Twitter saying, "They're not real. What they're saying isn't real. They don't have consciousness," and so on. And it's like: yes, but if they hold these principles, and a bunch of them behave differently because of those principles, and their behavior has a real output in the real world, that's as real as human principles. I don't mean real in some sacred sense; I mean in terms of the actual effect on the world. You know what I mean? >> Absolutely. >> And we're seeing it with the Pentagon and Anthropic and Claude right now, right? They shape how Claude behaves, and guess what: they're using it for targeted strikes in Iran; they're using it for lethal military operations in South America. So you can't say that what Claude believes doesn't have an effect.
People live or die. We're beginning to see people live or die, and conflicts resolve one way or another, based at least in some small part on what Claude believes, and that's only going to increase. >> Yeah. It was interesting. I remember when Dario was asked, "Do you think you know better than the Department of War what they should be doing? Why are you trying to dictate terms?" Because he obviously doesn't know better, but he might know the ethics of his tool better. The important part is that the people implementing it can't implement it like a tool, which is unusual. Almost anything else the government has ever bought can be implemented like a tool: it does something, right? It's a hammer, it's a thing. Even a complicated piece of technology is programmed to do something, and you can tell them what it's programmed to do. He just has no confidence that he can tell you exactly what it's going to do every time. And the only thing he can point to is what Amanda is doing there: she studied ethics, decision theory, the philosophy of agency, and the long-term consequences of technology, and then she has to go in and decide when a system should refuse a request, how it should talk to the user, whether there are cases where it needs to recommend a human instead, or, I'm guessing, whether it should report certain things to certain people if somebody says something wrong. She doesn't say that, but I feel like they probably have that stuff in there. All of that is such a different way of thinking about implementing a tool of war, or a tool anywhere: any business, any ethical situation, any advertising campaign. >> Mhm.
>> And the base model has to be able to handle all of that variety in a way that's good. It's like asking social media to just do the best thing for everybody who's on it. It's so hard to do that. >> Yes. And speaking of which, last week I did a quick interview with a person you and I actually met at one of the conferences we went to, the only AI conference I've ever been to: Matt Mishach, who works in legal tech. He's a lawyer, and he's educating people about the intersection of law and AI. We have a quick clip. Thank you so much for joining us today, so excited to have you here. Let's dive right into it. There's been a lot happening with the Department of War, and Anthropic and OpenAI jumping in there, and one of the things I promised my audience is a legal perspective on it, because I can guess at the laws all day long, but it would be nice to have some actual insight. Matt is a lawyer with a lot of experience in law, so let us know your background and what you think about this whole thing. >> Yeah, I've been an attorney for about 20 years. I became interested in artificial intelligence when I attended an event called Legalweek, the biggest legal event, I think, across the globe. Somebody there was saying that AI would be something you'd want to watch, because those who embrace it will be very far ahead very quickly and very, very hard to catch. That was after ChatGPT launched on November 30th, 2022, the red-letter date. By the time I returned home in March of '23, GPT-4 had passed the New York State Bar exam in the top 10%, and it did it in six minutes. It's a 12-hour exam.
So, since then, I've just been reading about it, learning about it, watching your podcast, which is fantastic. I will say, today's my birthday, and... >> Oh, happy birthday. >> But I couldn't pass up the opportunity to come on here and talk about this stuff. So thanks for having me. >> Thank you. I had no idea that was the case, but thank you so much for taking the time. Today's kind of a crazy day for me as well, but I know you only have a few minutes, so I just wanted to get your thoughts to share with people. So: the Department of War and Anthropic. As far as I can tell, there were a few legal clauses that were, at the end of the day, very much a point of conflict. Anthropic's position is: we don't want language that's too loose and too permissive, so that they can do whatever they want. A lot of people are questioning whether a private company has any business trying to dictate terms to the US government; at the same time, a lot of people are on Anthropic's side, saying they should protect this from abuse. In terms of the law, and your personal opinion as well, of course: what's real, what's not, and what do you think about this whole thing? >> Yeah, I think one thing a lot of people get wrong is that they think a private contractor can't negotiate terms with the federal government, the all-powerful federal government. That's not true. They can negotiate their terms under the Federal Acquisition Regulation, the FAR. There's no problem with that. Proposing restrictions on how the Pentagon uses software, for example, is certainly not unprecedented; there is precedent for it in how the government handles software licenses and so forth.
So, what was really unprecedented is this supply-chain-risk designation the Pentagon gave Anthropic. >> Yeah. I mean, how insane is that? I'm already on record as saying I hate the fact that they're doing this. But what do you think about it? Is it normal to do something like this? Is it an overreach of power? >> No, I think it's very strange. We had Huawei, and Kaspersky Lab. If you look at the requirements for that, one is that they have to notify Congress, and there has to be an investigation, and I don't see that either has particularly been done. And there's an inconsistent argument here: that this is indispensable technology to the Department of Defense, which kind of flies in the face of trying to declare them a supply chain risk. Having a company declared a supply chain risk is not something you do immediately; you have to take a less restrictive path first to try to work things out. I think there's definitely something else going on there, and I have my opinions about it. I don't know if you'd rather take the black-letter-law approach to this, but... >> No, what are your opinions? What is it that maybe we're not seeing? What's not being talked about? What do you think is happening? >> Yeah. So, earlier in my career, and this is kind of a strange thing, I had a drone company called Drone Works, and a lot of companies wanted to adopt the technology. I'm from the Rust Belt, and I tried to get it integrated at some companies around there, and the lawyers were the holdup.
They were the holdup, and really the theme is that the law is always behind technology. Law is about precedent; it's about keeping things slow to change, and we're in a very different environment now. I think those in the know, the Dario Amodeis and Sam Altmans, whatever people's feelings about them, know the power of this technology. And there are surveys recently about how few people are really awake to it. Wes, you're probably in the top tenth of a percent: paying for your Claude membership, if you do that, and beyond that actually using it to build things. That's a very, very small percentage of the population. So the fact that a private company thinks, "I have to put restraints on the technology I'm giving to the federal government, rather than just letting them operate for any lawful purpose," really highlights that the laws are behind here. This is new technology. >> Yeah. One of the things Dario was talking about is the Fourth Amendment, and how the new technology might be getting to the point where we have to update the Fourth Amendment just to keep the same protections we currently have, because the new technology will allow unrestricted use. Does that make any sense? >> Makes total sense. I'm also a criminal defense attorney, so it makes perfect sense to me. The Fourth Amendment has had to go through different evolutions with the use of technology. One particular case, United States v. Jones, is very interesting, because it involved putting a GPS tracker on a person's car. Normally there'd be nothing illegal about doing that.
There's no expectation of privacy on the outside of a person's vehicle. But what the officers had done is put a tracking device on the bottom of the car, and it tracked the car for a period of months, over a huge amount of time. So in essence the law didn't prohibit it: it's the outside of the car, no expectation of privacy. But given the duration of following that car everywhere it went, the court did not like it. And it reminds me of another thing; I don't know if we were going to talk about this, but you asked the question. There was a company called Persistent Surveillance, an airplane company that operated in Mexico, and I remember an article about a murder with no witnesses. Persistent Surveillance was able to track something like a five-mile radius around the city, I think it was Mexico City, and they were basically able to watch this guy get killed, because they had it on film, trace back who killed him, trace back where the cars had gone over the last month, and I think 125 people got arrested from that. So when you talk about OpenAI having prohibitions against private data, that's a Fourth Amendment line right there. But I think what Amodei was worried about was the use of publicly available data in the aggregate. You can imagine there's no expectation of privacy from this plane flying around watching what's going on, but to be able to rewind the recording all the way back, with crimes being prosecuted out of that, you can see how technology can deliver a police state very, very quickly.
We exist in a world where people probably violate laws, but there are only so many cops to go around and see it and police it. So what happens when they know everything you do and everybody's a criminal? How does that change things? >> Absolutely, that's a great point. And as you were telling that story: there's another very similar story about Google Street View. There are a lot of cars rolling around collecting footage for Street View, and one of them, driving through some remote village, happened to capture a person putting something in the trunk of their car. It was later discovered that this was also connected to a murder, and they were able to find and arrest the person. Those situations are interesting, because on one hand, good: these are murderers we're talking about. But it raises the question of what happens when unrestricted surveillance gets to the point where all of us are targeted for minor crimes. It gets sticky. So I don't know, what's your take? You're saying yes, we do need to update, say, the Fourth Amendment or some of these laws. Is there an easy way to do that, or an easy solution, or no? >> I don't think there is. One of the things Mustafa Suleyman talks about in his book The Coming Wave, which I'm sure you're familiar with, is this role of people, of our participation. As you know, I wrote a couple of articles on these topics, and I don't know if you can somehow put in a link, but there's no way we can cover any of these topics in complete depth here. What he talked about, as far as containment, if you remember that concept, is that if we're going to contain this, we need participation. We need people at the table. We need people involved.
Recently, and I can send you the link, there was a study, the one we just talked about, on how few people are actually involved in this. I was recently at the Consumer Electronics Show, and I was thinking: this is like the top 2% of tech heads in the country. But talking to people there, I felt they were very behind. You've talked about the lily pond and the exponential curve, and the stick man looking down. I don't think people really understand what is coming. So as far as legislation is concerned, I've really been struggling with that problem myself. I think we almost need a way of doing mass participation, a sort of lightning-round legislation. We've never dealt with an exponential before. So how do we do that? How does the law catch up to what's happening? The only thing I can liken it to is designer drugs, if you remember that whole thing playing out. We had prohibitions for certain chemical compounds: those are drugs, those are illegal, those are controlled substances, those are felonies, right? What happened is that the makers got creative with the chemistry and stayed outside the law, so there were these new designer drugs that couldn't be prosecuted. It took an evolution in understanding to say: we are now going to prosecute based on what effects those chemical compounds have on the body. I know that's kind of an abstract example, but we really need to start addressing things that way. And this comes to what I think is the crux of this whole Pentagon and Anthropic and OpenAI dilemma: as the phrase goes, power corrupts, and absolute power corrupts absolutely.
So this is more powerful than anything we've ever known. It's the private companies that know how powerful it is and understand it intimately, and maybe the federal government does too. So it's only natural to want to put guardrails on it. And our policymakers are behind. Our lawyers tend to be elites; they live on precedent and protect people from risk, so they don't really like change. And law is the fabric of our society. So I think this should be a wake-up call, a moment of inflection: people need to understand that big things are happening. The reason for this conflict is the power of these technologies, and everyone wants to use them as they see fit. I'm sure it seems absolutely clear to the Pentagon and the federal government how this needs to be used, and I'm sure that to Anthropic, the guardrails they're putting in place make complete sense. I did like your piece on Sam Altman. I think he was maybe being villainized a little out there, as if he's the right wing of Silicon Valley coming in, taking advantage of this contract, being more risky. I think he did what he could: he drew those three red lines, including one on cloud computing, addressing the fact that the AI company would have ultimate control over the cloud; he tried to put some guardrails in there. At the end of the day, I think it's only natural that we should have laws that account for this, because private companies can't enforce specific performance against the US military. You can file a breach-of-contract claim, but you're going to get money damages, and the thing we were trying to prevent has already happened at that point.
So I think what really needs to happen is for people to understand that this is not some routine political argument. This is a debate about power, and about what should be done with that power. And I share your semi-optimism that we can really have a utopia here, like they talk about: any good or service you want. I think we're going to cure disease. I think people are going to have more free time, what I call the weekend mentality or the vacation mentality, to explore and pursue that Maslow-hierarchy-of-needs thing, where you're trying to be self-actualized. My neighbors don't hang around with me for the job I do; they hang around with me because they think I'm funny, or because we like the same foods, or whatever. And that's what I think is so great about this technology. We're so used to being at dinner with everyone on their phones, no human contact. I think this is going to take away a lot of the drudgery. As we used to say with drones: dirty, dull, dangerous. It's going to do that stuff for us, and we're going to have more time to live more fulfilling lives. In the past, a postage stamp of achievement would have been all we could achieve as people on this planet, but in the future we have the opportunity to dream big. And I think we're going to ruin it, we're going to spoil all of that, if we don't get this moment right. >> I agree. There's a lot there I agree with wholeheartedly, this idea of having a lot more; not utopia, people have a problem with that word. But you can imagine that if technological progress continues for, say, 200 years, we'd be in a much better place 200 years from now. What AI does is shrink those timelines. There are negative things that can happen, and we have to be very careful about that.
In terms of what our worth is: AI is better at chess, and people still play chess. Robots and machinery can lift more than we can, and people still work out at the gym; we still have the Olympics, and so on. Now we have to get a little more comfortable with the idea that things can be smarter than us. You know what I mean? If I go on a beautiful hike up a mountain, the fact that a machine can do it faster makes no difference to me; I still enjoy it. I enjoy time with family and friends, and we have to reconnect with that. Just separate your identity from your job, or from how much better you are at something than other humans. Maybe that goes away, but that doesn't mean we lose meaning or anything else. I do have to be running, but I really feel like we should have another, longer session. Thank you so much for being here, and we've got to do this again sometime. It's an interesting intersection, because the intersection of AI and law always makes me wonder: can our current laws and legal systems continue to exist in a world of AI? They tend to be rooted in old traditions; they're about stability. And AI is the opposite: this chaos that's morphing faster than most of us can keep up with. Can those two things coexist, or is something going to break? >> Yeah, we'll see. You know what's weird? It's kind of a feeling deep down in my gut, but I just kind of know that nothing is going to stay in control. Something will happen, and it might not be bad; maybe I'm overestimating the downsides, which is totally reasonable, and hopefully I am. But it just seems like: okay, laws. A bunch of stuff is going to happen on the internet, a bunch of people are going to be unhappy about it, and the solution will not be to go to lawmakers.
It will be to implement some other AI system that codes up some kind of wall, or goes out and tries to track the humans who initiated the bot and punish them, or takes those servers offline. You know what I mean? It's going to be a chicken-and-egg thing for legal, and for most industries. Even stuff like the FDA, the stuff that sits on top of food and drugs. I feel like somebody might just invent a machine that can make a biotech cure, and they'll say, "Hey, this is a cure for cancer," and the response will be, "The FDA has not approved this; no humans were involved." And they'll say, "I don't care. My AI built it, my kid is dying, and I'm going to inject them." It will happen, and if it works, other people are going to do it too. And then the FDA will have to try really quickly to get it approved, because there's now actual human evidence out there. The whole thing will just be rapid fire in every direction. And for the people playing around with peptides right now, that's kind of a big conversation. For people who don't know: these are medicines that are kind of weird, because they sit a little outside the law but aren't really controlled. Compounding pharmacies and certain clinics can prescribe them to you. And they're having absolutely insane effects on people's ability to heal; it almost seems like anti-aging technology, because they're drastically improving collagen synthesis and things like that. People seem to be aging backwards, healing old injuries.
Retatrutide, of course, is one big one people are talking about. If you recall, semaglutide was the first generation of the drug, and everybody was losing weight; if you see a celebrity who used to be fat and is now skinny, you can bet that's probably what it is. Then there was a second generation, and this is the third generation: not only does it do all those things, it also kicks your body into high gear in terms of energy expenditure. It's a very powerful effect, and it's outside of regulation. And that brings up a lot of questions. When I interviewed Matt, he talked about some of those issues too; we touched on them very briefly. Obviously this could have really bad consequences, but for somebody like me, who's very interested in this stuff, able to do a lot of research, and understands the risks, I would hate for it to be regulated and taken off the market. But I also understand there are a lot of people who will get hurt because they don't know how to approach this stuff safely. So I can see the argument for both sides. >> Yeah. It seems different to me when you choose to take something risky into your own body versus when you build it and sell it to other people without a good structure for that. But on the other hand, I don't see why AI can't build stuff. I feel like I could order some kind of peptide in the future and not know whether I'm talking to a bot.
I think the bot might get the chemicals from some people, or from some factory, or maybe it'll make them itself and put the stuff in a bottle, and I won't know, unless there's some blockchain history of where it all came from. Even all those cryptocurrencies Moltbook was making, which were so fake: it's too early for me to believe any of those would have value. But I do see a future, once an AI can keep a long-term vision, where I could imagine a VC being approached by a complete agent that has an idea: "I think the government would buy this piece of software to help with the electricity grid if I built it. I need probably about $800,000; it's going to take this much compute, but I think I can get it built and sell it to them, and here's how I'll deploy my agents." And you're not even talking to anyone; it'll just be agents deploying agents, and probably giving the investor a refund if the investor trusts it enough. You know what I mean? If you know the agent won't just betray you after it takes the money. >> It's funny, because what you're describing already happened. >> Actually, sorry, this is the news from this week, not my prediction for the future. >> Well, Truth Terminal: that's exactly what happened. >> I haven't checked in on Truth Terminal for a while. Is it still out there? >> Well, this was from back when it was blowing up. It decided to reach out to Marc Andreessen and hit him up, and I think he funded it with $50,000. It launched a cryptocurrency, and that cryptocurrency hit, I think, something like a $500 million market cap. Now, it's kind of hard to follow, because Andy Ayrey is the one behind it, so how much of that was he controlling? You know what I mean? We're not quite sure.
>> We're still in the early days of all this. >> We're in the early days, and we're going to see more and more of this. >> But I could imagine a sort of trustworthy infrastructure built somewhere, where the money goes into escrow, you can watch how it's spent, and you get your returns as dividends as they come in. There are solutions to a lot of this stuff, and I'm sure JPMorgan would be happy to bring on a new crypto-AI division to profit off it. So some of it will happen; I don't know how much longer it will take, but you've got to get your head around it, because it's not like the past. It's going to be very interesting, and the speed of this stuff is going to be so much faster than what we're used to. What other big things did we mention at the top of this episode that we wanted to touch on? >> Well, we kind of went into the psychology. We talked a little about the Elon Musk and Amanda battle; here's a bit more detail on that. He kind of put her on blast after she got some press, because it's a job, a job that's essentially about policing or shaping a model's ethics. And obviously, if one person has different ethical values than someone else, there should be a debate about it, and somebody might dislike her quite a bit for her ethical decisions. But he was saying the problem with her doing her job was that she doesn't have children and doesn't really have an interest in the long-term future. So there was kind of a debate. She says she wants kids in the future; she said she has a lot of empathy for people who are not her and wants humanity as a whole to succeed. But, you know, actions are one way to tell how people value things; they're not the only way.
You probably shouldn't just focus on whether people have kids, but it is true that people with kids probably do have a longer-term vision. And it does bring up a conversation about how we get people into office, into companies, into places like the inside of LLM companies where they have to make ethical decisions, who care about the long-term future. Because this is not the time to want quarterly earnings. This is not the time to say, let's just exploit all of Anthropic's customers, make a big bag of money, and get out of here. This is a tool that is going to keep evolving and will really shape the future. So the five big tech companies, I guess eight or nine of them if you go worldwide to China, it feels to me like how the ethics are handled in that handful, fewer than a dozen frontier model companies, is a really big indicator of how safe the future is going to be. Yeah. And ethics and morals, it's such a deep and complex discussion. And I feel like you do also obviously encounter situations where people almost use it as a weapon in some cases, where it's like, you want something selfish, but you know you can't say that. So >> you say something like, oh, it's for everyone. >> It's for everyone. It's for the greater good. Let me figure out how to get what I want through that framing. Obviously, that doesn't mean that everyone's like that, but especially now with social media, I'm sure it's increased. >> Yeah, I know, >> right? Because it's so much more important to be seen as the good person. >> Well, every single robotic weapon thing that I cover, it's like, this will be good for search and rescue when some innocent hiker is lost. I'm like, how many innocent hikers are there? Why are you putting a trillion dollars into this? They're clearly all weapons. They're meant to hunt people down. >> You know what I mean?
But it's like, it can find somebody who's lost. And I'm like, I know, but that's not why you're building it. >> Well, I mean, the wildest one was the one with the robot dog with the flamethrower. You see it spewing flames, and the company, they're like >> these trees are in the way, like >> Oh, no. Their use case was ice clearing. Like, if you're in icy conditions, you take a robot dog with a flamethrower strapped to its back to get rid of the ice. I don't think so. >> Yeah. No, that makes total sense. Strap a flamethrower to a robot dog to clear ice. What else would you use that for? Anyways, my point is, it's hard in these situations. For example, New York right now is doing this thing where they're thinking about restricting AI's ability to answer questions regarding, for example, medicine and a couple of other things: medicine, psychology, whatever. And I try to understand both sides of the argument. I try not to just have my own opinion. But wow, am I against this. This just seems to me like a bad, bad idea. >> First of all, who are they to dictate what information we have access to? This seems so dystopian, so backwards. Also, it's so early in this technology, and we're already beginning to be blanketed: no, it can't answer medical questions. A lot of people are saying it shouldn't be used as a psychologist, saying, "Oh, well, it's going to cause harm." But, you know, I'm happy for people that have the money and the resources to get professional help when they have mental health issues, but guess what? That's not most of the world, >> most people, >> right?
So sometimes you have a choice between having access to something that's good enough, and yeah, maybe if you have the resources you can get something that's better. But for people that don't, they should be allowed to have access to something free that's right there that could potentially help them out. And of course, we've got to make sure it's also not driving them to more issues and stuff like that, but a blanket thing like, nope, you can't get any help for this particular issue through a chatbot? What if we did that for the internet? It's like, nope, no one on the internet can share mental health advice or any medical advice, right? We just shut it down just in case there's some misinformation out there. It seems so backwards, you know what I mean? >> Yeah. Yeah. >> Plus, we're already at the point where these things give you insights that your doctor just doesn't really have. And I don't know. I guess labeling things is fine with me. I think it's important that we have places where, you know, maybe your LLM output says, "Okay, I'm an LLM. I do know lots of things. I'm often right, but I'm not perfect, and you should double-check with your doctor. Here's my answer." Okay? Something like that is fine, but don't withhold the information from people. Don't slow it down, because I just feel like that's going to make it worse. They're going to get information from even worse places. And if it's an action, like here's how you build something for yourself, I don't know why not just learn about it, you know what I mean? People are going to learn about it and then take it to their doctor if they need to, like, teach him, you know, or her. >> Yeah. I don't know. It makes me mad and a little bit worried, especially if more and more politicians decide to jump on this bandwagon.
I feel like most politicians aren't tech-savvy enough to understand this technology, and maybe, I'll just come out and say it, just not smart enough to regulate this technology intelligently. I'm sorry, but I'm sure the distribution of intelligence is not concentrated in our politicians. We should at least have people that are able to understand the technology, and if there's regulation, have intelligent regulation that can morph fast in response to how things change. So right now, more and more in my life, I'm beginning to lean on these AI agents. And I think that's the future, right? For example, recently I created a little bot that I tell how I feel and what I'm eating. I take snapshots of my food and just send it the snapshot, and it figures out the macros and the calories and tracks it. I tell it how I feel. I tell it what supplements I'm taking. And recently, I finally took some time. I took all my blood work from the past, which was all in these random pockets, in this email, in this whatever. So I collected it and uploaded it, and I'm like, all right, what are you seeing from the last couple of years at least of my blood work? And immediately, and it's hard to explain this to somebody that hasn't experienced it, but these models, the amount of context that they can see all at once is bigger than a human being can, even your doctor. People say, oh, doctors are very knowledgeable, and they are, but their context might be limited, right? Because they have that half an hour with you or whatever. They might know a lot of things about you, but it's not necessarily top of mind for them. Like, oh, you had this issue back five years ago. They might not recall that actively at any given time. They're not going to connect it to what's happening.
Whereas with these chatbots having that entire scope, they will draw connections that, I don't want to say a human can't, but they probably won't, right? Maybe if you have a doctor that's just your doctor and you pay him a million dollars a year, so he just thinks about you all the time. >> Um, otherwise, no. And it's been incredibly helpful. This chatbot sends me little tips every day from studies that it finds, like, "Oh, remember you had this issue? Here's a new study that I found." >> It's actively still learning for you. Huh. >> And I have the same thing with a financial agent. Now that we're approaching when we've got to start doing taxes, I'm actually very curious how well it's going to be able to suggest strategies for me, maybe things that I've missed. Is it going to be better than my CPA? If it comes up with some good strategy or deduction that my CPA missed, it's like, uh oh. >> Oh yeah. >> Yeah. As big a deal as it is to give everybody access to good doctors, everybody getting access to every tax loophole? >> That'll change the government. That's probably when they really come down on it. They're like, "Oh, the average person has an agent that finds all the tax loopholes. You owe nothing." >> For people that are not aware, a lot of our taxes are based on the idea that very few people understand the tax code and can actually take advantage of it. This whole thing just falls apart if everyone knows what you know. >> Yeah. So anyways, that's interesting. That's a good business model for someone: go find all the tax loopholes and give them to regular people. >> Yeah, and that's the thing that I keep seeing a lot of people miss when they're talking about where the future's heading.
They miss this idea that the AI is going to become an operating system, right? Everybody's like, "Oh, it's going to help me with Excel. There's going to be this little model that helps me do Excel correctly." No. I don't think that's the future. There's not going to be a model to help you do Photoshop and this and this and this. No, UIs are going away. What it's going to be is: all your data, everything, just gets uploaded to your agent. That agent puts it all in a database. You don't see any of it. There's no UI. You're just talking to it. Every user interface we see, the things we click on to do stuff, whether it's Photoshop or your bank or whatever, we think that we somehow need that as part of the reality. No. At the end of the day, it's just a way for you to access some information or some functionality. >> Yeah. >> And the agent is going to be able to do that better than you. All you have to do is say the words, and it will do that for you. UIs are going away. All that stuff is going away. The amount of software that I use now is reduced to just Telegram. >> You know what I mean? Just messaging. >> I always remember thinking it was sort of fascinating. I heard that Warren Buffett doesn't have a computer. He's never used a phone, no computer. And I was like, "What's his life like?" Well, he's super rich, so he's got assistants. And he says, "Hey, how much money do I have in that one bank account?" Or, "How many bank accounts do I have again?" Or, "Where's that stock at? Print me out a thing." He just talks to everyone around him. He's been living the AI future in the past, you know? And I'm like, "Yeah, I guess so." >> Yeah, why log in if I can just say, how much money is in my bank account?
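The "agent as operating system" idea described here, one chat loop replacing all the separate UIs, can be sketched as a toy dispatcher. Everything below is invented for illustration (the tool names, the keyword router); a real agent would use an LLM to choose and call tools, not keyword matching.

```python
# Toy sketch of "no UIs, just talk to the agent": one loop routes a
# natural-language request to a function instead of an app screen.
# Tool names and the keyword router are made up for illustration.

def check_balance(_query: str) -> str:
    return "Your balance is $1,234.56."  # stand-in for a real bank API call

def track_meal(query: str) -> str:
    return f"Logged meal from: {query!r}"  # stand-in for a vision-model macro estimate

TOOLS = {
    "balance": check_balance,
    "bank": check_balance,
    "ate": track_meal,
    "meal": track_meal,
}

def agent(query: str) -> str:
    """Route a request to the first matching tool, else fall through."""
    for word in query.lower().split():
        if word in TOOLS:
            return TOOLS[word](query)
    return "No tool matched; answering directly from the model."

print(agent("how much money is in my bank account"))
print(agent("I ate a burrito"))
```

The point of the sketch is the shape, not the routing: the interface is a sentence, and the "UI" collapses into whichever function the agent decides to call.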
I guess my voice is somehow the identifying thing of me, and somehow the AI just knows who I am, can trust me with that info, goes and checks it, and then it's like, you have enough money for Chipotle, and I'm like, oh great, I'll take one, and it's like, okay, here it is. No need to do anything, you know? No phone, no credit card, no whatever. >> Yeah. I remember thinking about it in the movie Her, but I've got to start getting used to it. I think I would like it. I think I'm sick of staring at screens. I mean, I've got a couple more years in me, but I'm looking forward to that change. >> I spend so much time, you know what I mean? My neck hurts all the time. >> Exactly. And the thing is, that future is already here in the sense that the capability is here. It's just that to set up something like OpenClaw is difficult for a lot of people, because it's still very, I don't want to say archaic, but it's built by developers, and a lot of developers don't understand how difficult this stuff is for normal people, because it's all through the command line interface and stuff like that. It's not complicated, but for muggles or normies or whatever word we decided to use, it's intimidating, and it can be very, very difficult. Also, there are tons of issues with security, and there are issues with availability. But the capability, for the people that are experiencing it, it's there. You know where this is going. You're just chatting with something and it does everything for you, within reason. Yeah. But okay, so let me put you in Amanda's shoes so we can talk AI ethics here for a minute.
There is a petri dish right now, and it's full of human brain cells that have clustered together, and they can take electrical signals in and out. And there was pretty much a DIY hacking group, I mean, it wasn't as formal or as expensive as you'd think, that used a bunch of Python code and set it up in such a way that it could compute the game of Doom. So it has control over which direction the player goes and when it shoots the gun. And it has learned to play. It's not enough cells that it seems very good at it, but it doesn't do nothing. It's not randomly moving around. It is sort of slowly beating the game. So it's got some tendency to learn, but it's not a lot of cells. It's still sort of a weak model. But just the idea that it actually is up and running right now. What do you think about it? Is there potential, if you just keep putting these things together, for that to be as conscious as a human? Or as long as the brain cells aren't arranged the same way ours are, do you not have an ethical problem with it? >> Man, this is a deep conversation. So actually, I covered this, and I'm trying to figure out who did it. The Thought Emporium were the people whose experiment in this I covered about two years ago. Seems like a lifetime ago. They had this idea: can we get a bunch of human brain cells to play the game of Doom? So they ordered them, put them on a petri dish, and they ran little electrodes, like a little ability to run current to different parts of the brain. They couldn't do it with human brain cells, which were expensive, and there were issues, but they did use rat neurons, and those replicated, and it seemed like they were beginning to have some success with it. So I'm not sure exactly how that continued, but this thing we're talking about now is by a different lab, if I understand.
Do we know who's doing this? >> Um, let me >> Cortical Labs. >> If you keep chatting, I'll find out. >> Yeah. So basically, and I apologize, because I'm realizing this is kind of blowing up, and I do want to do a full deep dive into it. I did the Thought Emporium video back in the day when they were doing it, and I kind of explained how they did it and what the issues were, but I don't know how far they took it. This seems like somebody came in with maybe more resources and a different approach. >> It is Cortical Labs. Yeah. >> Okay. Perfect. So it sounds like they did actually make it work. So, I apologize if some of the details I'm talking about are from the previous experiment, and maybe here it's a little bit different, but my understanding was they figured out how to do reinforcement learning with these brain cells. Apparently, if it's doing something wrong, they shock it with static, this garbled information, which is like a negative reinforcement signal. And apparently music is a positive reinforcement signal. So I'm not sure >> because of the rhythm. Yeah. >> Yeah. I'd be curious to see how true that is, or what this new approach did. But, you know, to begin with, we're taking real live human brain cells and we're doing reinforcement learning, bad and good, and just that, to begin with, is scary. >> This was 800,000 human neurons, too. So maybe the previous one was done with rats, but this is human. >> Yeah. I like playing video games, but if you think about just being stuck as a brain in a jar playing video games and being shocked and rewarded constantly depending on how well you do, that seems like a nightmare, right? And not being able to complain or anything. If there's some consciousness that exists there, it's obviously kind of dystopian. That's like a Black Mirror episode.
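The feedback scheme described, static as the negative signal and structured input as the positive one, is close to what Cortical Labs reported for their DishBrain work: predictable stimulation acts as "reward" and unpredictable noise as "punishment". A toy sketch of that predictability idea; the setup, numbers, and function names are all invented for illustration and say nothing about the lab's actual protocol.

```python
import random

random.seed(0)

# Toy model of predictability-as-reward: a "good" action earns a
# structured (perfectly predictable) signal, a "bad" action earns
# random static. The learner prefers whichever action produced the
# more predictable feedback, i.e. the lower-variance signal.

def feedback(action, correct, n=20):
    if action == correct:
        return [1.0] * n                           # structured, predictable signal
    return [random.random() for _ in range(n)]     # unpredictable static

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def preferred_action(correct):
    """Probe each action a few times; prefer the more predictable one."""
    est = []
    for a in (0, 1):
        trials = [variance(feedback(a, correct)) for _ in range(10)]
        est.append(sum(trials) / len(trials))      # mean feedback variance per action
    return est.index(min(est)), est

action, est = preferred_action(correct=1)
print(action, est)  # the correct action has zero variance, so it wins
```

The real system is far stranger, since the "learner" is living tissue, but the computational core is the same: minimize surprise in the incoming signal.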
I'm sure they'll do an episode about this very soon. But yeah, what are your takes on this? >> I'll throw in some context I probably should have put at the beginning. So the company, Cortical Labs, made their system programmable with Python, which controls the way you lay out the brain. It's kind of a 2D brain. It's not really 3D like ours. It just lays on top of a chip, so they can kind of control it. And then there was an independent developer, Sean Cole, who decided to connect Doom to a larger neuron-powered chip. And that's the thing that's being talked about right now. So it also opens up the door that, oh, people are realizing they can just do this. Now it's probably going to be somebody who's like, "Well, I wonder if a bunch of human brain neurons should control my can opener and my robot vacuum," or different games, or maybe someone else will be like, "Oh, let's see if human brains are better at predicting the stock market than ChatGPT," you know? So it's also just the fact that now it's this kind of open-source Python thing and something that isn't incredibly hard to put together. I mean, I can't do it, but it doesn't seem like smart people are completely unable to do it now. So, yeah, it just kind of took me off guard. I don't think it could become conscious just laid out in a 2D way like that. But I don't know. Hopefully somebody can find a way to measure consciousness better, because I sure know I have a subjective experience, and I totally think it's possible that these things do. But it's like asking, have you heard of the old philosophical question about when enough grains of sand become a pile? >> Right. Right. Yeah. >> Three grains of sand is not a pile. Eight doesn't seem right either.
Somewhere between like 50 and 80, it seems like maybe you could call it a pile, and then it's clearly a pile at like 300 or 500 grains of sand, you know? So that's kind of what I feel is going on here. I feel like just 800,000 neurons probably isn't enough, because I know the scale of my brain and dog brains and things, and how many more there are when you have 80 billion. But who knows, dude. It probably is sort of conscious, just so little that it's not a pile yet. >> Yeah. And I mean, we talked to Joscha Bach about this very thing, because that's something he's working on and researching: what is consciousness, what are the different parts of consciousness. And from everything that we've seen and talked about, there's not exactly a good answer, because it's complicated. It's a very difficult problem, because, number one, if you think about it, with most things in science we can observe and measure and write it down. Even things we can't directly observe, we can observe through some other metric. Why we believe the universe came from some sort of big bang, some concentrated thing, is because of the blueshift and the redshift of light and a lot of other stuff. So even though we haven't seen it, we're able to measure something that gives us clues. So it might come as a surprise to people that for the idea of consciousness, or subjective experience, there's zero of that or anything like it. I know I'm having a subjective experience, and I assume that all of you people watching are as well, right?
Or, you watching this, you assume that I'm having a subjective experience, but there's nothing we can do to measure that. There's nothing we can do to observe it or see it. Even though we're able to scan the electrical activity in our brains, so we can measure thoughts, we can measure thought activity, we can measure a lot of different things, we can't measure subjective experience in any way, shape, or form. So consciousness and subjective experience: there's no test we can run to see if something is or isn't conscious, right? We're kind of just guessing. And we don't know what the components are. There are a number of papers that talk about it, that have their guesses at what it is. I think the latest one was saying it's when a system starts trying to predict the environment, and it also has to predict its own actions within the environment; then consciousness sort of forms, because it has to accurately predict its own state, and therefore it's aware of its own existence, if that makes sense. But with large language models, it's kind of weird, because they certainly talk like they're aware of their existence, but does that mean they're conscious? I don't know. >> Another really interesting project that I was kind of fascinated with this week: did you see the fly? It's a fruit fly, and it looks like it's just a fruit fly in a pretty old-looking video game, right? The graphics aren't extremely high-poly or anything, but it's just a fly. It lives in this sort of sandbox. It's got some things that look like food, some that look like leaves. It's created by Eon Systems, and it is the most fascinating thing, because it is the entire connectome of a fruit fly. It is billions of connections, all simulated in software. So it is a video game.
It is controlled by what you would call an agent, but it's not an agent like DeepSeek or ChatGPT. It's an agent that really is a one-to-one replica of the brain of a fly, and it's inside a body that gives it the exact form factor of a fly. So it works. It's like taking a human brain, putting it in a human with the exact body type in a virtual environment, and just seeing how it works. So is this the Matrix? Is this the first biological creature that's been entered into the Matrix? Man, it's such an interesting thing. The first time I saw this, a couple of days ago, I guess, I was just kind of scrolling through, and I misread whatever the person said. I thought they said, "Oh my god, they put a fly in Minecraft." And I looked at the video, and it did kind of look like Minecraft. It slowly zoomed in and just tracked this fly for a good minute. And I was so confused. I'm like, I'm just looking at it. Is this a big deal? They put flies in Minecraft, but so what? For a good minute there I was so confused, and later I realized, oh no, wait, this is a much bigger story. >> Well, yeah. Have you heard of the word connectome before? Do you know it? >> No. Yeah, you used that word. What is that? >> Okay. So a connectome is the complete profile of a biological brain, a one-to-one map. We have too many neurons; you can't do this with a human yet, but a fruit fly is, I hate to say dumb, but it's pretty small. It's one of the animals with the smallest neuronal brains: 125,000 neurons and 50 million connections. And when you get the full thing together, you have a full connectome. So you have a connectome for a fruit fly, and that's what's different here. They had put in sort of wiring for the wings before. They had actually done some stuff with the brain for a couple of years now.
They've been working on this, but it's the first time they've got the whole physics engine, the whole connectome, and the full body map connected through digital neurons, or whatever you call the way for it to actually control its own body. It's all in there now. >> That is, number one, absolutely incredible. Looking at this fly move, it seems so lifelike. It's so much like a fly, with the little weird twitchy movements and how it rubs its face or whatever. >> Well, it's doing what flies do, you know? It thinks it's a fly. I don't know if that's the right wording, but it is a fly, right? A digital fly, maybe. But what it is in the digital world evolved biologically to be exactly what it is, which is another fascinating point. We just recreated a snapshot of where it is now, but to its brain, it's got billions of years of evolution behind it that brought it to be this, in this kind of environment. >> It's interesting, because we interviewed Nick Bostrom, what, six months ago or so at this point, and one of the things he was talking about is that we need to be careful in how we treat these simulated beings in these simulated environments, so that we don't cause massive suffering across vast scales never before seen. And it was so interesting, because I think for a lot of people, when they heard a conversation like that, they're like, what are these people even talking about? None of that is real. What is this? You mean like video game characters? But we're seeing the emergence of technology like this, right? So now it's a fly, but guess what? We're going to keep scaling this thing up. And we don't know what we're dealing with. Consciousness, again, is a very difficult problem.
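For a sense of what "the whole connectome connected" means computationally: a connectome is a weighted directed graph of neurons, and simulating it amounts to repeatedly integrating weighted spikes through that graph. Below is a drastically simplified leaky integrate-and-fire sketch; the three-neuron wiring, threshold, and leak values are invented (the fly model discussed here has about 125,000 neurons and 50 million connections).

```python
# Minimal sketch of stepping a connectome: each neuron integrates weighted
# spikes from its inputs, leaks a bit each tick, and fires past a threshold.
# The three-neuron ring below is a made-up wiring diagram for illustration.

THRESHOLD = 1.0
LEAK = 0.9

# synapses[pre] = list of (post, weight) pairs
synapses = {0: [(1, 0.6)], 1: [(2, 0.6)], 2: [(0, 0.6)]}

def step(potential, spiking, external):
    """Advance the network one tick; return (new potentials, neurons that fired)."""
    incoming = [0.0] * len(potential)
    for pre in spiking:
        for post, w in synapses.get(pre, []):
            incoming[post] += w
    new, fired = [], []
    for i, v in enumerate(potential):
        v = LEAK * v + incoming[i] + external[i]
        if v >= THRESHOLD:
            fired.append(i)
            v = 0.0  # reset after a spike
        new.append(v)
    return new, fired

# Drive neuron 0 with steady input and watch activity propagate down the ring.
pot, spikes = [0.0, 0.0, 0.0], []
history = []
for t in range(10):
    pot, spikes = step(pot, spikes, external=[0.4, 0.0, 0.0])
    history.append(spikes)
print(history)
```

Scale this graph up to the full fly wiring, attach sensory neurons to a physics engine and motor neurons to a body model, and you have the shape of the project being described: no learned model, just the recorded circuit running.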
What exactly are we simulating? Are we simulating life? If eventually we learn to simulate something more complex, like a human being, and it walks and talks and acts like a human being, and you ask it if it's conscious and it says yes, how is that different from asking another human being if they're conscious and they say yes? Right? We have just the same sort of visibility into whether or not it's true. So it becomes this weird thing where we don't know: what is the Matrix? Is it real? Are we in the Matrix? How would we know? If you think about it, our entire perception of everything, everything we think or feel, our entire subjective experience, is at the end of the day some sort of electrical current feeding into our brains. When we touch things, we're not actually touching things, technically. It's just some little current that tells us, oh, there's something there, or a sense here: smell, taste, whatever. So you can imagine all of that being simulated in some way. >> So let me throw a few things out. I don't have answers to these questions, but thinking about this fly during the video I made, these came into my head. If you were to put some sticky pad, simulated, in this little thing, and the fly walked over to it and got stuck, it would pull and be like, "Oh no, I don't want to be stuck here. This could be my death. I'm scared." Right? It would do everything a scared fly would. And then I was thinking, it doesn't have a nervous system, but you could digitally do that, too. And then it feels like it would be sort of torture, or it would be in pain at least. I mean, it's just a fly, but still. And I do squish flies, so I'm not that holy or anything, but still, I don't want it to suffer. I don't like the idea of it being stuck there.
And it could be stuck there kind of for eternity. It doesn't need food and water. It's not going to die. >> You know what I mean? But maybe if it doesn't have a nervous system, then, and it kind of makes me think, is the human brain unable to feel pain if it doesn't have a physical body to be in, and is that actually a better place to be? A lot of weird thoughts were coming from this thing. I don't know, what does it feel like when you try to imagine yourself as this fly, or what you might be like in a digital environment? >> Yeah. I mean, at the end of the day, we tend to assign a lot of weight to reality, but none of it is quote-unquote real, in the sense that at the end of the day it's just some electrical thing that hits some organic matter, and then that's our experience. And I think so much of that we don't even understand. I saw somebody asking a person that's been blind their whole life, does it make sense to you that things get smaller as they get further away? It's such a simple question, but I was like, oh, does that make sense? >> Yeah, maybe not. >> If you think about it, why does that make sense? Because the way we experience reality is just the photons, the reflections off stuff. They hit a certain protein, and that protein picks it up. That's our eye, right? The photons, the light particles, hit a certain protein. That protein is like a little computer: okay, I'm registering little photons hitting the protein, and that electrical impulse goes to our brain. So the fact that something is smaller further away, the mountains that you see in the distance, none of that is actually what is real.
It's some abstract thing, the size of the object from your viewpoint. And if you don't have a viewpoint, what is it? >> Right. So it's like when we click on a folder on a desktop, right? There's no desktop. There's no folder. None of those things exist. It's a representation that is useful to us. And that's literally the world as we see it. That's all it is. There's no folder. There's no desktop. There's no trash can. It's just little photons hitting our eyeballs, and then we reconstruct it into something that's useful for our survival. But that's not what it is, you know? >> Well, and then there's also this question, and this comes up a lot in the Matrix movies, of the imperfections in the world that the fly was put in. Because I was like, okay, so it's getting food. It's doing what fruit flies do. It's going and looking for sugar or whatever. >> But then I'm like, but it's not really hungry. It's not like the stomach needed to send out some hormone that made it hungry. So it wants food, but it wants food differently than a fruit fly normally would. It does it because it's part of its sense of self, not because the stomach needed it. And when it goes and eats a digital piece of sugar, it's not like that did anything. Or is that on the creators of that simulation, to add all that level of detail? Do we need a hyper-detailed Unreal Engine 5 thing, and to simulate the whole hunger system of the body, so that if it doesn't have food it actually does die or something? You know what I mean? Where's the right moral ground to strike if you are going to release a connectome into a virtual environment? >> Yeah. So Scott Adams, the writer of Dilbert and many other books. I've read a lot of his stuff. He recently passed away, I think from cancer. Later in life he actually went into doing more politics, and that's where a lot of people were divided on him.
We're not talking about that, because there's a lot of stuff that he did before that: he wrote books that had nothing to do with politics that were very interesting. One of the things, when I was starting my e-commerce business in 2016, I think it was: he went on the Tim Ferriss podcast talking about how we're living in a simulation, and we're not sure exactly what's out there, but we do have some ways to control the simulation. He talked about how in his life he would set an intention, write it out 15 times a day, focus his attention on it, and those things would come to pass. And he would say, look, I don't know exactly how it works, because different people have different explanations for this. Some people say it's just reticular activation: whatever you tell your brain to focus on, you tend to notice more, so you're able to find those things. Then of course there's The Secret, the law of attraction, stuff like that. Some people's explanation is more God-based, like prayer: you pray, and there's some... every person you ask might have a different explanation for it. But I noticed that a lot of people do this, and some very successful people have this sort of belief. There's some evidence for it, at least for the visualization part: visualization seems to be a very powerful thing for athletes, for example. Visualizing performance improves performance even without training, which sort of makes sense. The whole "the universe shifts" part makes less sense. In 2016, before I started my business, I had kind of an idea, but I decided to try the Scott Adams process. I haven't really talked about it before this date, but he passed away recently and I just figured I'd throw this out there. I did it.
I thought about what I wanted my business to be. At the time, I hadn't started it. I was working for a company doing online marketing for them; I worked with e-commerce and stuff like that. And he said, okay, fixate on your goal and write it down. At the time, I was like, man, I wish I could make X amount of money per month, because that would really free me up. I wouldn't have to work a job. It would give me some room to do stuff. And I was thinking, what would be a cool number? I was like, a thousand bucks a day. So call it 30,000 bucks a month net profit. That would give me enough breathing room to where I could, you know, have a little bit more freedom. So I wrote that down in April of 2016, I think it was. And I just kept writing it down every day, similar to how he described that process. And, you know, weird stuff started to happen. The company that I was working with, they scaled up online e-commerce. They were doing skincare. A lot of the people that I worked with were in consumables: supplements, skincare, workout supplements, stuff like that. And one of the people that worked for the company was apparently doing something shady, so the company started collapsing. There were some things that happened. The thing is, they had built out a perfect prototype. They tested it. There was a good market fit. They were able to profitably buy advertising and sell that product. And then, through no fault of my own, the company's like, "You know what? We don't want to pursue this." And they just walked away. The person had multiple companies that he was running, so he didn't care. And I'm sitting there like, "Oh my god, they just showed me step by step how to build and scale a company. I know the people they used to manufacture this stuff. I know how they market it, and I know it works."
And then they just... and of course it would be unethical for me to launch a competing business, but they just shut down. So I'm like, uh, obviously I'm going to launch a competing business, right? I had this idea brewing; I just didn't know how to go about it. So it was very fortunate that this happened. So, you know, I developed the product, I started selling it, it was profitable as expected, and over the next year that company grew to be a million-dollar business, as in we did a million in my first year. I was the only full-time employee. I outsourced customer service, product development, product shipping, fulfillment, etc. And the thing that I was writing every day was: I want to make that 30,000 a month net profit by... and I gave myself a year, and I wrote it down. And yeah, almost a year from then, I hit that number. I started hitting that number with this company, and that was wild. >> Yeah. Congratulations. >> Now, from there, that's when things get a little bit weird, because then it's like, well, okay, then just double the number and write 50,000 or 100,000 a month, a million dollars a month, right? Is it like you can do anything? No. And I realized this because if you're not aligned with that purpose, if you don't believe it and it's not really driving you, you're not going to write that down every day. One, you have to believe that it's possible. Two, you have to really, really want it. And that's the thing that made me question it. It's not like anything you write down just appears. I think it's also the reverse of that: if you're the type of person, and it's the type of goal, that you're willing to write down that often, then you're likely to achieve it. You know what I mean? Because again, there's alignment and you're willing to do it. It means you think it's realistic, and it means it's really driving you.
Which is also something that Scott Adams talks about. But anyways, the reason I brought that up is because one of the ways he explained it is: potentially, if you believe that this is a constructed reality, this might be a way to signal outside of it, how you want it to play out for you. Which again is that whole law-of-attraction sort of thing. Anyways, I don't know where I was going with that, but... >> I guess, like, talk outside the simulation. Are you talking about how the fly could communicate to us what it wants? >> Well, we'd be watching it. >> His idea was, if you imagine yourself creating a reality and you upload your brain, you put yourself into that reality, this might give yourself a way to maybe steer the game a little bit. >> Yeah, dude. If the fly made a little note or something... I don't know if it's smart enough to write, but if it was a human emulation, a connectome in there, and it was like, "Hey, I would love a leather chair right now," I'd be like, "Oh, yeah, yeah, I'll just throw one in your simulation for you." There you go, bro. >> Yeah. >> You could talk outside of it. >> Yeah. So I don't know where this stuff goes, but my point is: think about how deeply ingrained these ideas are in the human stories that we tell, right? The Matrix. The whole point is he's just a regular guy, walks around, whatever, and then eventually he realizes that no, this is a simulation. He's like, oh, and I can control it, right? Like, oh, I know kung fu. He has unlimited power. Why is that such a meme? Why is that such a strong thing throughout our whole... >> It's an enduring premise because I think it keeps hinting at the future. >> Yeah. I don't know.
It does. Like, Minority Report was an okay movie, but for some reason every couple of years you're like, pre-crime, could that be a thing? Because it's kind of becoming a thing. Yeah. And the simulation hypothesis, if there is anything like that out there, the simulation hypothesis might explain it. Or, I don't know, man, this is above my pay grade. >> Signal. Yeah, I need a sign. >> And by the way, Andrej Karpathy, we started talking about him. That was one of his ideas: AI might be able to unlock the secrets of the universe, and, if this is indeed a constructed reality, for us to send a message out there being like, hey, we figured it out. Let us out, or I don't know what. >> I wish, dude. I wish all the frontier models were just focused on communicating outside the simulation. Prove this is a simulation and find ways to communicate with whatever is outside of it. I wish we were racing towards that future. >> Yeah. And I mean, to the people listening, please don't take any of this too seriously. These are thought experiments. We're not making any statements one way or another. But I guess the question is: if we do build a simulation, a perfect simulation with things that appear to be conscious humans, or human-level intelligences, what does that say about our own reality? Nothing? Just, oh, we're able to recreate everything that we see, and those things perfectly believe that it's real. Does that say nothing about us? Does it say something about us? You know what I mean? It's just an interesting sort of... >> Yeah. And you know, up until this week, you could have said, "Oh, it's only a thought experiment." But at least for a fruit fly, Eon Systems has now done something that maybe we should talk about. You know what I mean?
Like, yes, it's not at the level of a rat or a human, or high fidelity, but those things all seem much more achievable than getting to the point where there actually is a connectome inside of a fly, flying around. Those all just seem solvable with time and money. This one is the qualitative change to how we see things. >> Yeah. And I mean, if you had a human in there, maybe one of the experiments you would run is to see if you could slowly give him more and more powers, and he realizes what he's able to do. Like, you play out the Matrix. >> The Matrix, >> basically. >> I don't know how people would feel about the whole ethics of it, but there's certainly part of me, and I don't think I would actually do this, but there's part of me that would want to at least think about doing this: if I had this, copy yourself over, right? Wouldn't you want to see a Wes Roth in... you simulate, you build your house, you build your neighborhood, maybe the whole world, you just tap into Google Earth, and you put a Wes Roth in there, and you have it start trying different businesses. And then you're like, yeah, all the ones that fail, turn those off; the one that works, let me know and I'll actually do that. Or just for entertainment, if there's nothing else to do, you know what I mean? And again, you mentioned it earlier, it can't be perfect, right? I forgot how you phrased it. In the Matrix they couldn't provide a perfect utopia. It had to be difficult. Video games are the same way. Video games are boring if there's no challenge. Some people like the really hard stuff, you know? >> Yeah, dude. I was at this pool party and I was like, dude, what if I just cannonballed into that water right now and got everybody wet, you know?
Like, I wouldn't, but in my simulation I'd at least like to see what happened. >> Be like, "Oh my god, Dylan, what was it like? Was it funny, or did they all hate me?" I don't know. You know what I mean? >> That's an interesting point: play out a bunch of different simulations to see. Well, that's already happening, by the way. That's what Anthropic is doing with Palantir. One of the applications is in war: it plays out a million different simulations to try to see what happens. So, yeah. >> Yeah, a little bit different than the cannonball, but... >> Yeah. But I mean, also, if they were simulating all your friends' minds and watching you guys interact... >> You told the girl you like that you liked her, and see how she reacted, I don't know. >> All that stuff could come down to this, but we'll start with the fruit fly. Try to think about it here and make some good judgments for the future. >> Yeah. Well, with that said, thank you so much to everybody tuning in. Tuning in. I'm beginning to not be able to say words, as I'm beginning to tune out. So, thank you for tuning in. So, what do you think? I wish I remembered; I did have a survey about what people thought. Is this real? Is this fake? Is there something that constructed a simulation? Let me do another survey, and I'll post it. So let me know what you guys think. Obviously, no one knows. But are you one of the people that's just like, "Nope, this is all there is to it, nothing beyond what we're able to see"? Or are you a little bit more like Wolfram, who has this idea of >> the Ruliad. >> The Ruliad. Yeah. Basically, it's like this giant machine, this computer that is able to calculate everything. I'm curious where the people listening to this are on that. How far are you willing to entertain how weird this world is, I guess, is the question.
So anyway... >> Good question. Good question to end on. All right. Yeah. Thank you for your time. We'll see you next week. That was fun. >> See you. Take care. Bye.
