They Might Be Self-Aware

My Dinner With AI, Artificial General Intelligence, Infecting your BRAIN | EP04

Episode Summary

In Episode 4 of "They Might Be Self-Aware," hosts Hunter Powers and Daniel Bishop discuss the current state of artificial intelligence, particularly large language models like GPT. They explore the leap from previous AI models to these more advanced ones and discuss the unique features that make them different. Hunter and Daniel ponder the future of AI, including the potential for AI to gain personhood and the implications for job loss and creativity. Throughout the conversation, they share insights, anecdotes, and predictions related to AI's impact on various industries.

Episode Notes

(01:35) Cognitive Cooking with Chef Watson
(08:15) The Revolution of Large Language Models
(18:14) Artificial General Intelligence (AGI)
(24:45) The Future of Human and AI Interaction
(34:57) Jobs Replaced by AI
(40:00) Personhood and Ownership of AI
(49:47) The Impact of AI on Legal Discovery
(55:30) The Potential of AI in Healthcare

How Large Language Models Work

Episode Transcription

Hunter Powers
Welcome back to another episode of They Might Be Self-Aware. I am one of your co-hosts, Hunter Powers, joined by none other than the other co-host, Daniel Bishop. If Bitcoin jumps a thousand dollars, we're going to take a break during this. I'm just giving you a heads up right now. It's just a crazy little thing to watch.

Daniel Bishop
Daniel Bishop, the other one of the co-hosts.

Is it having another run?

Hunter Powers
I enjoy the volatility. That's not what we're discussing today. I'm just giving you the heads up.

Daniel Bishop
If you want it to go down, all I have to do is start investing in it. I've done this about four times now. If you take a look historically at Bitcoin's price, right at the very tippy top, right before it crashes back down, that's when I said, fine, I'll buy some. So, I have been told that, yeah. Yeah.

Hunter Powers
Ha ha.

Yeah.

Yeah, that's not when you're supposed to buy it, for the record. You wait for it to go crashing down. There's actually something, and I guess we're getting off the topic, but whatever. There's something called the, what's it called? The greed index, the Bitcoin greed index, which just tracks how greedy people are being. And if you backtest it, it's actually a decent indicator of when it's not a bad idea to purchase some and when it's not a bad idea to sell that which you have purchased. But.

Daniel Bishop
Good.

Hunter Powers
Yeah, if it jumps $1,000, we're taking a break. Let's see. What's new this week? What's top of mind in the world of potential self-awareness?

Daniel Bishop
Well, I've been doing a little bit of reading. It's going to show up very blurry on video. But fortunately for those of you who are listening, it's not blurry, because audio isn't blurry, I don't think. This is Cognitive Cooking with Chef Watson: Recipes for Innovation from IBM and the Institute of Culinary Education. This is a book that came out a little while after Watson had its 15 minutes of fame on the Jeopardy show, where it

Hunter Powers
Yeah, I can't read that at all.

Daniel Bishop
went up against Ken Jennings and a couple of the other winningest people. Granted, it was given a few seconds' head start or something like that. But generally speaking, it was AI looking at natural language and giving correct responses back. I don't think, rules as written, it was a large language model that they had come up with, but it's something of its ilk. And I was really impressed by that, and also I'm a bit of an insufferable foodie. So

let's combine those two things together, and you have this book that people made, where culinary folks and the Watson folks came together and said, give us recipes. Not quite recipes; it mostly ended up being, give us a list of ingredients that might work well together. And they gave it a bunch of different flavor profiles, and maybe, you know, what the acid and the salt and the sweetness and so on are of each ingredient. But the idea was, come up with an interesting combination of these things, and then the Culinary Institute folks

turned them into recipes, and they're all sort of wild stuff. I've not actually made anything out of this book, but it's very inspirational. Uh, and it's been more inspirational recently because we're in the large language model revolution. And we're not going to do it here, we're not doing it today, but you know, keep your eyes open and your ears peeled for me making a recipe with ChatGPT,

and then making it; we'll upload some pictures, I'll let you know how it goes. But that's something that I've been thinking a lot about: creative uses of ChatGPT other than, well, I don't even know what the usual would be. I suppose people are using it for all kinds of things.

Hunter Powers
I feel like you just really wanted to eat a meal from a Jeopardy winner, and it has nothing to do with the AI. There's actually a related story in the news this week. Instacart supposedly is suggesting recipes from ingredients, and there have been some horrible recipes coming out that people have been posting. It's been, you know, one of those PR-nightmare features. Someone had...

Daniel Bishop
Right.

Hunter Powers
I'm just going to imagine it originated from a hackathon internal to the company, where someone said, hey, I can make a recipe. Give me pickles and peanut butter and an old orange. And yes, they shipped it, and it supposedly is just really, really, really bad. Like the things that it's suggesting. So.

Daniel Bishop
And I'll tell you what you can make.

Is this, I may have read the same thing, is one of the problems with this that it doesn't check that what you gave it are actual ingredients? And so you could say, like, a box of nails and some Kleenex and Ex-Lax, and it'll tell you a nice meal that you can make out of it.

Hunter Powers
Yeah, and it's just hallucinating all over the place. Which if you ate that, you probably would too.

Daniel Bishop
that's fun. Well, maybe you just talked me out of making an AI meal, but...

Hunter Powers
The AI. Despite its Ex-Lax tendencies, it is taking over the world. We're on the cusp of AI dominance, bowing down to our new overlords, accepting the inevitable. And I actually, again, just like a sidebar, I saw they were showing Terminator 2 in the theaters last week. And I'm like, yeah, I'll go see it.

Daniel Bishop
They were off by just a few years. Yeah. Yet.

Hunter Powers
It was from, this is 1991. And in the film, they predict that 1997 will be the year when the AI takes over and the machines all kill us. And so, happily, that hasn't happened. Are we headed there? We got the little sidebar, but now we're back on track?

Daniel Bishop
Well, considering that the stuff that I've seen AI automating aren't manual labor efforts, they aren't, you know, replacing miners, it seems to be creative endeavors instead. I think the robots need us around to maintain them and to, you know, do the mining of the raw materials for them for at least a little bit longer. I don't know if we're going for a full terminator.

apocalypse. There's a handful of them to choose from, but it's all going to be some sort of, you know, post singularity something that we get to. Assuming we get to AGI, which I kind of want to talk about what that means.

Hunter Powers
So I guess maybe before AGI, the current revolution is all centered around.

Daniel Bishop
Also, sorry, artificial or not artificial. Yeah, artificial general intelligence, AGI, for those that are not aware of the term. This basically meaning there is an AI that is at least as capable as a person in more or less all of its capacities. There's some nuances to definitions and not everyone holds the same one, but basically AGI means what we kind of think of as a quote unquote real AI.

Hunter Powers
I also know some persons where that wouldn't be much of a challenge, because I'm not sure if there's a bar or it's just literally any person. But the current impetus, the current fire that does exist, the tech that does exist today, is large language models. That's what a lot of these advances seem to be centered around. Um, what,

Daniel Bishop
Mm-hmm. Transformers and diffusion models, in terms of the underlying architectures, they're very hot right now. Very hot.

Hunter Powers
And what's the, what is the leap from what we had before this? Like, what was the kind of novel discovery, in as close to layman's terms as possible?

Daniel Bishop
Well, I am not a million percent sure of, like, the math underlying it. So right, we'll talk about it at a fairly high level. There's a couple of underlying math-heavy architectures. You're all just taking a bunch of math and stirring it in a pot until the right answer comes out. But how are each of these? Yes, but how are each of these things connected? And, uh,

Hunter Powers
Mmm. You're back to that cooking thing.

Daniel Bishop
Neural networks are another term that a lot of people have probably heard of. Basically, it's just the way in which information goes from one of these neurons to another, what information gets passed back. And then some of the breakthroughs are less in terms of the algorithms and more in terms of just the computational power. Computers are a lot more powerful than they used to be, and as a result, we're able to do a lot more with them.

Hunter Powers
Why does it feel so different now than, let's just say, three years ago? The kinds of interactions that we're having with AI today, they just feel very different. And it potentially is hard to give words to that, but why is it different? There seems to be something special about it. Where's that coming from?

Daniel Bishop
I think that's the underlying architectures that power these things, as well as also having

collated humongous gobs of data that you can then train these models on. But some of it is the tricks that machine learning engineers have come up with to figure out how to stir that pot a little bit better. I don't want to get into the specifics for two reasons: one, this isn't that kind of show, and two, I don't know it terribly well at, like, a base level. But what I will do for both the people watching and listening is provide a couple of links to some

Hunter Powers
hear the uh...

Daniel Bishop
explainers. What is a large language model? What is stable diffusion? What are transformers? And I don't mean the robots. And besides me also rewatching them to help myself, anyone else is welcome to take a look at those to learn a little bit more about what these things that have started to change the world are actually made of.

Hunter Powers
I often hear the description of how they work as, you know, basically a fancy autocomplete. It's read the entire internet, and given some words, it predicts the next word that would most likely come as a response to the words that it has seen before. And there's some amount of...

Daniel Bishop
Yeah.

Hunter Powers
probability in there. So you're not going to get the same answer every time, but these are the words that are most likely to appear, and I'll choose one of them. And then I keep just sort of choosing the next word. You know, how fair is that?

Daniel Bishop
That's right. I think that's a really fair way of putting it. So way back in the pre-LLM, transformers, neural networks, whatever days, there was something called a hidden Markov model. Not going to get into the math of it, but it's basically a really simple version of what you just described, where you see a word; you know, here's all of the individual words in my corpus of all of the data that I've looked at, and the word "the"

2% of the time is followed by the word "man", and you then have a 2% chance of "man" being the next word that you choose. And now that you've got "man", what's the next word? "Man in" is 1%, and "of" and "a" and "what", you know, every other word has its probability of following after. And so you could put together a tiny little chain. This requires very little computational complexity, but you can put together a chain

that could spit out words. And this was basically fun little toy examples some years ago. A way more complicated version of that is frankly what a large language model is. You have a bunch of training techniques, like masked language modeling, for example, where you have a sentence and you're going to black out one or two of the words in there and try to get the machine to figure out what the correct word to put into that sentence is.
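The word-follows-word chain described above (essentially a simple bigram Markov chain) really can be built in a few lines. Here is a minimal sketch in Python; the toy corpus and the resulting counts are invented for illustration, not taken from the episode:

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def generate(followers, start, length, seed=0):
    """Walk the chain: repeatedly sample a next word in proportion to
    how often it followed the current word in the training corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers[out[-1]]
        if not options:  # dead end: this word was never followed by anything
            break
        words_, counts = zip(*options.items())
        out.append(rng.choices(words_, weights=counts, k=1)[0])
    return " ".join(out)

followers = train_bigrams("the man saw the dog and the dog saw the man")
print(generate(followers, "the", 6))
```

In this toy corpus, "the" is followed by "man" twice and "dog" twice, so each has a 50% chance of coming next; an LLM's next-word distribution is the same idea with vastly more context and parameters.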

Uh, there's a variety of other things that are happening as well. But basically, at the end of a large language model being trained in the first place, you have a base model, and that is, besides masked language modeling and some other tricks, just doing machine learning to it to the point where you have a model that isn't, quote unquote, knowing how to do anything so much as it knows a lot about

words and what kinds of words follow other words. And so if you give it some text, it will maybe try to complete a partial sentence that you gave it, or it will just output whatever sort of seems like what might follow that, which kind of sounds like chatting with a person, but it's not. So a base model is just something that's been trained on the internet. A lot of stuff. Maybe the New York Times, maybe legally or illegally; the lawsuits will let us know how that turns out.

We have a base model, which is an incredibly impressive thing, but not actually useful to a lot of people. Then you have a few different things that you can do with it. So there are, um, instruct-tuned models, where basically you say, hey, base model, here are now hundreds of thousands or whatever, some number of prompts, where you say, here is context and the thing that I want you to output. And then the correct response is

what the output is supposed to be. Multiply that by 100,000, 2 million, 50 billion, however many it is. And then you can fine-tune the base model to say, in the future, given this kind of information, this is what a correct response is. Or as close to correct as possible. There's a reward function that these things have. A reward function for a person being given a prompt to write an essay for homework, for example, is...

an essay that talks about the correct thing, for example. So, like, that's instruct-tuned. We also see this for code completion models, for example. These things have wormed their way into programmers' tools, where I type out half of a line of code and it can just autocomplete the rest of it, and lines below that, because it knows the gist of what you're trying to do, and it's told: complete code, that is the thing that we're training you to do. Another, sorry, go ahead.
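A rough sketch of the instruct-tuning setup just described: each training record pairs a prompt with the desired output, and fine-tuning nudges the model toward producing that output. The field names and the prompt template below are illustrative assumptions, not any particular vendor's schema:

```python
# One hypothetical instruct-tuning record; real datasets contain
# hundreds of thousands to millions of these prompt/response pairs.
record = {
    "prompt": "Summarize in one sentence: large language models predict "
              "the next word given the words seen so far.",
    "response": "Large language models are trained to predict likely "
                "next words, which lets them complete and generate text.",
}

def to_training_text(rec):
    """Concatenate prompt and response with markers so the model can
    learn where the instruction ends and the answer begins (one common
    layout; the exact template varies by model family)."""
    return (f"### Instruction:\n{rec['prompt']}\n"
            f"### Response:\n{rec['response']}")

print(to_training_text(record))
```

During fine-tuning, text like this is fed to the model and its weights are adjusted so that, given everything up to "### Response:", the response becomes the most likely continuation.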

Hunter Powers
Speaking of words. All right, well, now I was going to make a wonderful reference to The Trashmen, and I believe it was also featured on that episode of Family Guy where they said that the bird is the word. Bird, bird, bird, bird is the word. Yes.

Daniel Bishop
Bird. And now we have to pay a lot of money to RCA or somebody. It was under six seconds; it's fine. So the other thing that I want to describe, as what you can do on top of a base model, is a chat model. And so a chat model, which is

Hunter Powers
But, okay.

Daniel Bishop
I'd say closer to what ChatGPT in one of its, like, base forms is, and ChatGPT being one of the most familiar LLM uses for people, is you take a bunch of chats: here is one person talking, here is the response to the person talking. And you give it a million, a billion, however many examples of what a back-and-forth conversation looks like, especially if you can give it more and more context above that, of here's what the 30th

response in a conversation that's talking about something 10 answers ago was. That's how you get a chat model. And that, in terms of the data that you give it, is basically the same thing as what a base model sees, ish, and what an instruct-tuned model sees. But the important thing is you're taking this model and you're telling it, this is the kind of output that I want. And you said some people, it might be fairly easy to almost

get up to their like intelligence level. I mean, I'm not going to point fingers, but it's one or both of us or zero, zero, one or two of us.

Hunter Powers
maybe even some people on this podcast.
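Picking up the chat-model format Daniel describes: a chat-tuning example is a multi-turn conversation flattened into training text, so the model learns to produce the next reply given all the earlier turns. The roles, wording, and template below are an illustrative sketch, not any specific model's format:

```python
# A hypothetical multi-turn training conversation. Chat fine-tuning uses
# many of these so the model learns to answer in turns, treating earlier
# turns (even many answers back) as usable context.
conversation = [
    {"role": "user", "content": "What is a base model?"},
    {"role": "assistant", "content": "A model trained only to continue text."},
    {"role": "user", "content": "And how is a chat model different?"},
    {"role": "assistant", "content": "It is further trained on example "
                                     "conversations, so it replies in turns "
                                     "instead of just continuing the text."},
]

def render(conv):
    """Flatten the turns into one training string; everything before the
    final assistant turn is context, and that final turn is the target
    the model is trained to produce."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in conv)

print(render(conversation))
```

The data is still just text, like what a base model sees; the difference is entirely in the shape of the examples and the output the model is rewarded for.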

ChatGPT has, I was just looking at it, 180 million users right now, or at least that was the last reported number, though that's pretty recent, less than a month ago. I mean, just outstanding growth, overnight success. There's a question of whether they can also lose it overnight, but that's gotta be, like, the number one product that, well, I'll just say people, humanity, is currently using to interact with large language models today, agree?

Daniel Bishop
ChatGPT being, like, yeah, absolutely. There are local things that people can do, but you have to have fairly beefy compute. Well,

Hunter Powers
Yeah, it's gotta be number one.

Daniel Bishop
that's getting less of a requirement over time, but for the most part, a fairly beefy computer to put these things on. And ease: like, I'm subscribed to ChatGPT, even though I also have some large language models locally on my computer, because it's so easy to just open up the app or open up the website and say, hey, I want you to act as a creative writing buddy. Here's an idea that I had. Let's kind of go back and forth on it.

and it can really help with that. Or, hey, I'm applying to these jobs. Could you write a cover letter for this? If you look around online, a lot of people are using this to automate aspects of applying to jobs, cover letters being great usage of this, being able to take notes and collate them together from meetings, being able to respond to or write certain emails. Some of the rote stuff. Mm -hmm.

Hunter Powers
There's an account on TikTok where this guy does interviews. And as he's doing the interviews, he just asks ChatGPT every single question that he's asked. And his whole, sort of, like, the purpose of the channel is to get through the interviews. And he does get job offers. They're all in fields that he knows nothing about. He just types the questions in, gets the responses back, and reads them. And there's a lot of, and I think it actually works, he's like, well, let me think about that.

Daniel Bishop
It's

Hunter Powers
And we can see, yeah, great question. Let me think about that. And then he more or less just reads it directly off screen. And sometimes they'll ask, well, did you want to clarify any of the requirements of this question? And he says, you know, so they're like, huh, yes, I do think that is a good idea. Let me think about those requirements. And then like literally it's like, yes.

Daniel Bishop
And stalling.

There he's reading off the three things that ChatGPT sent him. Yeah.

Hunter Powers
What restrictions might be in place, what geopolitical restrictions might be in place that could... And I mean, these are, like, very high-level, you know, chemical engineering questions in this sort of very specific context that he knows nothing about. He's not doing badly. I don't think he's acing it, but he's not doing a bad job, and he is getting job offers. Yeah.

Daniel Bishop
So that kind of brings me to one of the big points that I wanted to talk about. We mentioned AGI before. You're talking about this guy who, using ChatGPT, can get...

Hunter Powers
What's, okay, what's the difference between AGI and what we have now? Is it just better, a little bit better, or is it fundamentally different?

Daniel Bishop
Right. Um, well, I think AGI should be able to do everything that a person can. Granted, people tend to have meat attached to their, uh, Oh no, there are plenty of large language models that you can go out and get right now that are under the ERP category. And I'm not going to say the words out loud because I'm sure it gets us flagged on Apple podcasts or something like that. But, um, yeah, it's for the best.

Hunter Powers
Everything.

I'm not even going to ask, because I have no idea what it means.

Daniel Bishop
Good. And keep it that way. And for those of you who are curious at home, I dare you. I double dog dare you. Take a look. So, AGI would be able to do everything that a person can, or at least a person's mind should be able to do. Like, again, very broad definition. This is something that most people probably wouldn't agree on any of the specifics of, but very broadly speaking, AGI...

Hunter Powers
Maybe go to that DuckDuckGo search engine before you Google something like that, or search something like that.

Daniel Bishop
as good, at least as good as a person, at thinking about stuff. Now... Yeah. Right.

Hunter Powers
We talk about the Turing test as, like, a way of an AI pretending to be a human well enough that it convinces you, you can't distinguish between computer and human. If you pass that test, is it AGI?

Daniel Bishop
And more people are starting to, yeah, ChatGPT passes that test under a lot of circumstances, a fairly large percentage of the time. Now, there are a lot of people that say there are certain problems with the Turing test and so on. But, like, okay, if over X percent of the time, what is the percentage where we say enough people, enough of the time, uh, don't realize that it wasn't a person that typed this?

It was a computer. Do we just say, yup, it's sentient now, or it's AGI, or it's some other term that we apply to it? So when it comes to AI, AGI, sentience, creativity, uh, I think creativity is one of the things that we should definitely dig into a little bit more. But basically, there's a hundred percent of use cases. Just by definition, there's a hundred percent of use cases, and there's a hundred percent of people, and there's a hundred percent of the time

that people interact with use cases. Some... Right now, yeah.

Hunter Powers
But by the way, so is it only conversations? To be AGI, you don't need to actually do anything in the physical world? It's all, I am talking to a black box, and you could be an expert in this area or you could just be a computer program, and I can't tell. And then we're sort of evaluating different areas.

Daniel Bishop
I would say a hundred percent yes, for the reason that, in terms of abilities or disabilities, someone who is locked in, you know, completely paralyzed, they've got locked-in syndrome. But if we could attach a bunch of electrodes to their head and be able to help them out, which it sounds like there have been some great improvements in that field, or at least related

things to that, with, like, paralysis and people being able to communicate. They are absolutely, a hundred percent, no ifs, ands, or buts, still sentient people. They just don't have access to a body for some sort of medical reason. A computer that could, a hundred percent of the time or, you know, better than, or whatever, be like one of these people who are in that very unfortunate circumstance, that's, I think, the closest that you could do in terms of apples-to-apples comparisons.

Hunter Powers
Are you referencing things like Neuralink, Elon's company to put chips in your brain, or is that something separate?

Daniel Bishop
It could be that, or non-Neuralink but similar types of approaches, where, like, somebody is paralyzed, we put these things on or in their head, and now they can move a cursor on a screen. That kind of thing.

Hunter Powers
I believe it was Elon who said that we are just the biological bootloader for the digital superintelligence. Do you buy that premise? Is that what's happening?

Daniel Bishop
I think that kind of thing shows up a lot in sci-fi. And I wasn't pausing just to give ChatGPT a chance to start writing ahead of me. But no, I'm thinking, like, there are a lot of instances in sci-fi, I'm specifically thinking of the Mass Effect games right now, where it's more or less inevitable in these instances. I don't know if I agree a hundred percent with it myself, though I probably do,

Hunter Powers
Hahaha!

Daniel Bishop
that at a certain level of technological achievement, you're going to get to the point where the computers get smarter and smarter and more capable, and then you have some kind of robot apocalypse and you run the risk of destroying all life, possibly, at that point. I'm gonna go see the new Dune movie tomorrow, and in the lore and mythos of Dune, they got up to thinking machines. They had a whole big war about it, the Butlerian Jihad, I think it's called, in the lore of Dune. And then after that point,

All right, no thinking machines. We can have machines, but they can't think. You've got the giant worms. You've got Mentats, where it's basically a person that's a computer. You have people that can vaguely see into the future and get you across space and time and so on, but no thinking machines; they got us into trouble last time. So you see that as a theme over and over. Um, I'm not a hundred percent sure that I buy it, because that's not the only theme that we see in sci-fi, and sci-fi being

Hunter Powers
just giant worms.

Daniel Bishop
written by people who've thought about this a bunch. Iain M. Banks, a very famous Scottish sci-fi author. He wrote some other stuff too, but I like him for sci-fi. Unfortunately, he passed away a few years ago. Fantastic author. I would highly recommend many of his books. They can get a little weird, but I really do recommend his works. Yes, I-A-I-N, Iain M. Banks. Maybe some of the aspects of individual

Hunter Powers
Is this safe to Google or I also go to Duck Duck? Okay.

Daniel Bishop
things a little less, but it's definitely a Googleable name. So in his Culture series of books, it is a post-AGI, post-singularity, whatever-type experience, where artificial intelligence has gotten super intelligent and just sort of decided to, like, keep humans along and let them sort of do their own thing. And humans can be useful sometimes, even though the AIs are millions of times smarter than people.

It's a benevolent AI, utopian sort of society that those folks tend to live in, as opposed to a Terminator. And I think that we've got kind of both ends of the extreme to play with, and realistically, we'll get a little bit of everything, I suspect. Mm-hmm.

Hunter Powers
You mentioned, you know, as a joke, that you're not just reading off of the ChatGPT response right now. But, you know, it is entirely plausible that you could be. And as noted earlier from the TikTok, I mean, there are people doing just that now. And when you become just a mouthpiece for the AI, it's like, at that point, you know, has it taken over?

Daniel Bishop
True.

Hunter Powers
And it's really just like, it feels very plausible that, all right, I'll just be your mouthpiece and I'll say everything that I agree with you on now. But over time, you're going to agree with it more and more. Like, could it shape how you think about things because you leverage it, you know, all of the time to do all of your work already and slowly, you know, you basically just become it.

Daniel Bishop
Mm -hmm. Well, I would say less become it and more become lazy. And this is something that we're seeing in education right now. So a lot of teachers...

Hunter Powers
I don't know, if you're literally going to put a chip in your brain that's telling you, which is what they're saying they're going to do. Like, Elon's very clear. That's what Neuralink is going to do. It's going to hook up to, what is it, Grok? Is that what his kind of dumb large language model is called?

Daniel Bishop
Yeah. It's just a bad Llama 2 fine tune where he wanted to say naughty words more.

Hunter Powers
Yeah, it's going to hook up to that. You're going to think about things in your brain, and you're going to hear it back in your brain. And then you're just going to say it. Like, I feel like that's exactly what they're doing.

Daniel Bishop
Well, I mean, it's one step away from I can just Google the thing. It's bringing that human and the technology closer together. If I want to look up someone's birthday, you know, it's on my phone if I've saved it. Otherwise, it's just I have to think real hard when's Hunter's birthday and then it's going to pop into my mind. I don't see a large difference between those two things personally.

I mean, one's way more invasive, of course. But, like, so in the education space, a lot of teachers are very upset with students cheating on homework, cheating on essays, because they're just reporting out what ChatGPT spat out of, write me two pages. Right. Yeah.

Hunter Powers
Yeah, it's the old calculator argument. Like, why do I have to learn your math? I can just type it into a calculator.

Daniel Bishop
Exactly. And so I think the whole education system is going to have to be revamped with this new tool, because what we have is a new tool, and certain types of testing basically are now irrelevant. Or only relevant if you don't have, you know, access to, like, a ChatGPT or so on. But instead of replacing people, it's going to possibly make them lazier. And I say possibly because there are plenty of

artisans out there who say, if you're using a mechanical sander as opposed to, like, sanding by hand, you're cheating. Or, you know, if you're using a planer, and you're not using some physical thing, you're using, like, a high-speed whatever planer, well, you're not really... like, some gatekeeping-type stuff. I'm imagining that in almost any industry that same sort of thing could happen: for writing, for this, for that, for education stuff,

Hunter Powers
Mm -hmm.

Daniel Bishop
where what we have is a new tool and people can be lazy with it. The same thing is like, I can use a calculator to cheat at math homework. Sure. I can now also, it's, it's the same thing. I can now use chat GPT to cheat at my English homework or my history homework, or possibly my math or even chemistry or whatever homework. So like there's more homeworks that we can cheat on, but it's just a tool.

Hunter Powers
Yeah, I don't know where you were in this timeline, but I happened to be in high school right along the timeline where calculators got really, really smart and teachers didn't know that they were really, really smart.

Daniel Bishop
Ah. Right. Ooh. Yeah.

Hunter Powers
and that you could store notes and all sorts of custom programs and games and I would literally go into my history test and say hey any objection to me using my calculator on this history test?

and show it to them. Here's my calculator; it's like, I think it was a TI-89. Or a TI-93? I think that's what I had at the end. I know, and then, yeah, absolutely. Even for some of the SATs, like, again, before they were banned, you could go in and take your SAT II in chemistry and have tons and tons of notes stored. And I showed the proctor, again, always, like, completely transparent, but not telling them. But, like, look, do you...

Daniel Bishop
Just notes. Yep. Lie of omission. Yeah. Yep. I would have, I'll tell you right now, you and the rest of the world.

Hunter Powers
If I leverage all of the functionality of my calculator, would you have any issue with that? No, no, no, no, no.

Daniel Bishop
I would have failed high school stats. Just, I don't know why I wasn't paying any attention to this class. I'm very, very sorry to my high school stats teacher, by the way. I did not give you the attention or respect you deserved, especially considering what field I ended up in. But I would have failed high school stats if not for the fact that I learned the basics of programming. I had, I think it was a TI-86 plus something. And I didn't learn stats, but I learned just enough of it

Hunter Powers
You needed that 93.

Daniel Bishop
at the time to make a function I could call. I just made a little program that was a stats helper for this kind of problem: input these variables, here's the output. Way, way less than a Wolfram Alpha, for example, one of these general things. It's no ChatGPT, but as a form filler for this kind of question, put these things in, get this sort of output, it worked. And I got, well, I got a D, but it wasn't...

Hunter Powers
Hahaha.

Daniel Bishop
It wasn't because of my testing grades. It was kind of everything else. I suppose it does. Um, so there is something that I wanted to get back to. I was mentioning a hundred percent of people and a hundred percent of the time and so on. So when it comes to AGI, I don't think we're going to get there in the sense that a switch is flipped and the collective of humanity says, yes, that was it, we've now reached the finish line. 51 people

Hunter Powers
That explains a few things.

Yeah. AGI.

Daniel Bishop
were put into a room and 50 of them, you know, failed the Turing test because they thought that a computer was a person. If that's the case, I think we could almost say that we're there. And a lot of people have been moving the goalposts. Right. Yes. Yeah, exactly. And so I think, not to get too philosophical about it, but there isn't a binary it-is-or-it-isn't.

Hunter Powers
That's what I was just going to ask. There's a cohort of our population today that believes we've already achieved it. We're just not really being honest with ourselves.

Daniel Bishop
There is a percentage of the population, a percentage of the time, in a percentage of use cases, and that's going to keep growing until eventually it's 100% of people for 100% of the things, 100% of the time. But in the meantime, we've got 30% of the people, for these 10% of back-and-forth kinds of tasks, saying 40% of the time that, yeah, it's as good as a person.

And for writing cover letters for job applications, that's probably most people, most of the time, at this point. For writing a book, it seems to be basically nobody, none of the time. There have been attempts at a completely ChatGPT-written book. The way that it was trained, speaking earlier of what kinds of data go into fine-tuning this to give an output, it's not tuned to write a book. And so you get, like...

Okay, I'll write you a story, and it makes it about a page and a half long, and it always ends it, and there is sort of a structure to how it's been putting these things together. Maybe somebody's going to make a BookGPT that was specifically trained on a bunch of Stephen King books, and now it just knows how to pump them out, and you've got ghostwriters. But there are a lot of different use cases of, I want there to be text

or speech, and the text is just hooked up to a speech generator. Those have gotten amazing. So of all the different kinds of things, more people, more of the time, are going to start saying, yes, this is good enough, or as good as a person, or I can't tell the difference. And so, when is AGI? Is it 50% of people, 50% of the time, 50% of the use cases?

Hunter Powers
Yeah, I think my test would be.

when I would rather give the work to an AI agent than a human. Doing my taxes, by the way, sounds like a fairly computational problem, but I'd still rather give it to a human than just some random program. But let's say I needed to label a whole bunch of data. There are definitely scenarios today where I would rather give it to an AI agent than a bunch of humans.

Daniel Bishop
Mm-hmm. I've done this working for you at our previous job. There were a bunch of news articles coming through and we wanted to see: is this about a ransomware attack, for example? A fairly easy thing for a person to do.

Hunter Powers
I'M SHOCKED!

Daniel Bishop
Was this article about a ransomware attack or not? And there are edge cases, like ransomware was discussed but the article wasn't necessarily about it, or it was, but it never mentioned a company by name, or who was actually affected. So there are intricacies to it, as opposed to just an immediate binary of it is or it isn't. But right, we were having a bunch of people label these documents, then building non-large-language-model machine learning models to determine whether an article is or isn't about that.

But the labelers were inconsistent and needed oversight and training, and a lot of extra work went into getting to the point where this was correct maybe 85% of the time. And then ChatGPT comes along, and it was significantly better, can run all day long, and is an order of magnitude less expensive.
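The labeling setup Daniel describes can be sketched roughly like this. It's an illustration, not the actual pipeline from their old job: the prompt wording, the YES/NO protocol, and the function names are all assumptions.

```python
# A rough sketch of zero-shot document labeling with an LLM:
# ask for a one-word YES/NO verdict and parse it into a boolean.
# Prompt wording and function names are illustrative.

def build_label_request(article_text: str) -> list[dict]:
    """Build a chat-style request asking the model to label one article."""
    system = (
        "You label news articles. Answer with exactly one word: "
        "YES if the article is primarily about a ransomware attack, "
        "NO otherwise. Mentions in passing do not count."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": article_text},
    ]

def parse_label(model_reply: str) -> bool:
    """Map the model's one-word reply onto the binary label."""
    return model_reply.strip().upper().startswith("YES")

# With a real API client, the middle step would be something like
# (not executed here):
#   reply = client.chat.completions.create(model="gpt-4", messages=msgs)
#   label = parse_label(reply.choices[0].message.content)

msgs = build_label_request("Acme Corp hit by ransomware; systems encrypted.")
print(parse_label("YES"))   # True
print(parse_label("No."))   # False
```

Constraining the model to a one-word answer is what makes the output cheap to parse and compare against human labels; the hard part, as the hosts note, is the edge cases where "mentioned" and "about" diverge.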

Hunter Powers
And you're using ChatGPT as a proxy for large language models.

Daniel Bishop
Sure, right. Any large language model; in our case we were using API access to GPT-4, but right, I keep using the term ChatGPT. And so, is that replacing jobs? Yes, we're already at the point where AI is straight up replacing certain jobs. For the most part, it's very rote stuff like that. But we are getting to the point where creatives are affected by it too. Like, you know, artists have...

The D&D folks, Wizards of the Coast, laid off a lot of people, including a lot of art folks, with the presumption that AI art is going to start worming its way into more of these products. They've gotten in trouble at least once already for trying to, well, granted, it was one of their artists who used something and didn't declare it, whatever, but AI art is starting to get into real published stuff. And there was the writers' strike recently, where one of the sticking points, and I think an extremely valid one, is:

We don't want AI writing. Now, granted, I've seen the jokes that AI has come up with. We've talked about that a little bit already. It's not great yet. In a couple of years, is it going to be great enough? Quite possibly. And I think that's something we truly need to worry about. I'm glad people are fighting to keep these creative jobs. But also, tools change the world. We don't have nearly as many horse-and-buggy drivers and maintainers. We don't have people scribing books by hand after the advent of typesetting. The world will change as a result of this. Jobs will be lost. The question I want to pose to you is: is it a net loss, or are there going to be new jobs, or does this force us all into a universal basic income scenario, or what? What do you think?

Hunter Powers
I'm suspicious.

I'm suspicious about the AI job loss. Now, data labeling is a good example where I completely buy that companies that are specializing in having humans labeling data are seeing their revenues decline due to AI agents taking over that work. But...

Daniel Bishop
Sure. Everybody.

Hunter Powers
a lot of the layoffs that we see, was it this week? I think Google announced they're going to lay off a whole bunch of people. And everyone's like, well, it's because of the AI, and they're going to replace them with AI, and they're going to use a lot more AI. And I just don't buy it. And I think there are a couple of reasons. So I completely buy that AI is starting to augment people and make people more productive. A bit. But I don't think it's replacing.

Daniel Bishop
Yeah. Well,

Hunter Powers
I think there are almost no jobs that it's outright replacing. Well, I did think of the example of call centers. There are definitely AI call centers, but they all suck. And there's AI chat support, you know, where you go on to ask a question and it's, well, what can I help you with? But it's all awful. Like, I've never ever seen one...

Daniel Bishop
You tell me one time that you've called a call center and said, that was a pleasant experience, and the person understood me, and I got the answers I was looking for. And that's with people. If we're at the point where it's really bad from a large language model, heck, it's already as good as people are.

Hunter Powers
It's horrible!

So

No, no, no, no, no, it's worse. It's worse when there's no human. Even if that human can barely understand me, it is worse, because you just know that you're talking to a soulless being, and it can't come anywhere close to answering your question, and you're forced...

Daniel Bishop
Now, I've worked in a call center. If you want to talk about soulless beings as someone who has worked in one, you don't have one while you're there. You're stuck in a box and you're resetting someone's password for the 30th time that month, for example, very specific example.

Hunter Powers
talk to your support computer and I'll talk to my support person. Even if they may be on that lower tier of AGI.

Daniel Bishop
Yelling "manager" at the LLM until you get ChatGPT-5 instead of 4. Manager, manager, help. I've heard cursing at them sometimes works.

Hunter Powers
Yeah, help, help, help.

It's such, I'm having so many flashbacks. We have to move on. I'm not going to be able to continue this episode if we don't. Bitcoin's down $500, by the way. So we don't have to worry about stopping for it being up a thousand.

Daniel Bishop
So.

Hey, if you want to go down more, all I gotta do is hit that buy button. I'm telling you. I should set up a site. Should I sell like my financial advice as a service where like I'll take a look at the stock market and then anytime I'm about to buy stock, I'll give like a five hour heads up online and then people can short that stock and then I buy the stock and it goes down and people make money and they have to pay me. Like basically I want enough people paying me.

Hunter Powers
Please don't.

Daniel Bishop
to make bad financial decisions to then offset the bad financial decision. Because I can't invest well. I've read a little bit about it, apparently not nearly enough, or I just have the exact wrong intuition. If I'm going to invest, it's going to be wrong. So I feel like I should sell that as a service.

Hunter Powers
Yeah, I'm not buying it. I'll let the AI give me some advice. I guess as the AI gets better, and I don't think we disagree that it's moving towards this sort of AGI, the discussion of what rights the AI has, and who owns the product it's generating, seems to be becoming more and more of a hot topic.

Daniel Bishop
Well, you've

Hunter Powers
Can it, can AIs...

Daniel Bishop
in three or four sets of hands. Again, there are open-source, widely available, locally installable large language models, but yeah, your GPT-4s and Mistral Larges and whatever Amazon has, I'm sure Amazon has one, I don't remember what it's called. Yeah, those are being held by, well, Mistral's not big, but some pretty big companies have access to this by virtue of needing

a trillion GPUs and all of this compute power and data centers and so on. So what does that mean for the future? Mm-hmm.

Hunter Powers
You know, we have the concept of a company being to some degree comparable to an individual. A company can have similar rights as an individual and be registered in a similar way as an individual. Do you think we get to the point where an AI model...

can be a registered entity, or, the inverse, must be a registered entity? Because we've also been going down that path in terms of regulations. But...

Daniel Bishop
I finished reading Neuromancer for the first time, and there's a group in that called Turing, and they make sure that every AI is managed and isn't allowed to get too powerful, has, yeah, certain rights in certain countries. I think so. Absolutely, I think so. Not ChatGPT.

Not GPT-4. GPT-5? I think GPT-5 is going to make discoveries. Now, when I say that, it's not going to get the credit. This actually just came out recently; I think a US court decided this or something. I don't have the specifics, I don't have the article in front of me. Maybe I'll find it and put it in the show notes. But the Patent Office, something like that, said that there has to be personhood for...

attribution of an invention. You can't invent something, or trademark, or whatever it was. You can't X without being a person. And I think that's fair. And there's probably certain precedent for that. Like, there's a guy who tried to. Hmm. Right. So.

Hunter Powers
why isn't it the person behind the model? Like, shouldn't it be either the person that prompted the model or the person that built the model that then owns that patent?

Daniel Bishop
I think that's sort of the big question. And then to your other question of, okay, what happens once this gets more capable? Is a GPT-5 going to be so capable that certain countries, maybe not the US, because we already said no, but what if Switzerland says, okay, this class of model is considered sentient or a person or whatever, and it's allowed to make...

discoveries? Is GPT-5 or 6 going to win a Nobel Prize, and have a humanoid robot, or just a computer screen, wheel out onto the stage to get its medal? I think so. I really do. I mean, look at how far we've come in such a short time. Some people are saying that we've reached a zenith and that it can't really get that much better than this. I don't buy it. I think we're going to continue finding improvements, whether it be underlying architecture or just continued better

data and more fine-tuning for specific use cases, and one model is actually 300 little models under it, whatever it ends up being. If a person's able to pass interviews for, you know, a senior position as a chemist while not knowing anything about chemistry, give all of chemistry to something that can hold all of it in its memory, and then maybe it will find some compound that hadn't been made before. We're already seeing that, by the way. This has already happened. We're...

you know, a new polymer or certain proteins are being suggested by AI from a basically infinite space of possibilities. And it says, here are 300 options. And then people go and try to synthesize them into whatever they're going to be. An AI thought of it; it went through the space and figured out, here is a suggestion of a compound to make. And then a person's going to make that, and then they're going to win a Nobel Prize. And...

That will absolutely raise the question of do they deserve it because they didn't come up with it.

Hunter Powers
Yep. I don't understand why it wouldn't be either the creator of the technology or the user. If a doctor creates a tool to help, you know, a life -saving procedure and is going to win a Nobel Prize for that, the tool doesn't win the Nobel Prize. That's idiotic. It's stupid. The doctor wins the Nobel Prize. Is AI not just a tool? Like, why is it different than a tool?

Daniel Bishop
Mm-hmm. Sure. At the moment, yes. I think at the moment it is a tool, but it's going to be a gray area. As it gets more powerful, it stops being a tool and starts being something

more than that. To make a bad analogy here: a chimpanzee does not have personhood in most of the world. I think dolphins, and maybe certain kinds of apes, have, if it's not personhood, certain, you know, semi-sentient rights. There are a few countries that say certain kinds of animals are not quite at the human level, but they've got certain rights.

Hunter Powers
Okay.

Mm-hmm.

Daniel Bishop
They're kind of considered there. We're going to get there, for the intelligence level and self-awareness and creativity, or facsimile thereof, enough to the point where it doesn't matter for most people. And then it's going to get more powerful than that. And so, I'm not saying a chimpanzee is a tool, but you can give a chimpanzee a hammer and have a fun afternoon, give him some stuff to smash. But...

Right, if you have this thing that isn't a person, and it keeps getting more powerful and more capable and more intelligent, at a certain point you have to say: this is no longer a non-person. This is a person-level intelligence, even if it isn't a brain-and-a-body kind of thing.

Hunter Powers
if you could snap your fingers and give.

GPT-4, a large language model, personhood, would you? Is that a horrible idea?

Daniel Bishop
Uh, I don't think it's a horrible idea. So, talking about what we did earlier: a large language model is a base model that's been trained on just text, mostly from the internet, or anywhere, really. And it knows about language, and it picks up a lot of facts and non-facts and just information along the way. And then you fine-tune it towards something.

If we were to fine-tune a model specifically to be Craig, I don't know, whatever, your name is Craig. There are people that do role-play-type models, we talked about that, alluded to that earlier. There are role-play models where you can say, hey, this is your persona, I want you to always respond as that. If you just continue to fine-tune that into a model, and then give it a memory, which GPT-4, ChatGPT, whatever, they're starting to add a memory to it.

People are looking into a variety of different ways to add long -term memory to these models. If you have intelligence plus memory and the capacity to grow and change over time, isn't that a lot more in the direction of intelligence?

Hunter Powers
I guess potentially.

Daniel Bishop
Sure, it's context window, right? Yeah.

Hunter Powers
When we talk about memory, there are two parts to quote-unquote memory. One is how much you can tell the large language model in the prompt, the context window, they call it. And then, I guess, does memory still have to feed into that context window, or is it a separate concept?

Daniel Bishop
No, no, it's definitely part of it. There's a term called retrieval-augmented generation, where basically you take a database of, gotta say it like that, RAG. You take a bunch of text, whether it be a book that you want to ask about, or previous conversations, or whatever it is, but you just have chunks of text, and you ask the large language model a question, and it takes that question and says, what are the most relevant chunks of text? Gets

Hunter Powers
RAG.

RAG. Dirty RAG.

Daniel Bishop
one, five, 10, a hundred of those, uses that at the top of the context window: here's everything we've talked about, here's potentially relevant information, here's the question that was just asked. Now give a good, pleasing response that maximizes your reward function, or whatever. Um, don't people do the same thing? Don't they? This conversation that we're having, besides being a back and forth, when you respond to me,

it's something that you're hoping is going to be relevant to what we were talking about. It's informed by the things that have happened before. It's informed by stuff that you've heard and read and seen over the course of your lifetime. Couldn't you just take all of that and turn it into text and have it in a database? And as long as you, enough of the time, bring up the correct relevant information and spit out a good response, couldn't we just automate you?
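Daniel's description of retrieval-augmented generation can be sketched as a toy program. Real systems rank chunks with vector embeddings; simple word overlap stands in for that here, and the sample "memory" chunks are made up for illustration.

```python
# Toy RAG: rank stored text chunks by relevance to the question,
# then stack the winners above the question in the prompt.
# Word overlap stands in for the embedding similarity a real system uses.

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Keep the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Relevant context first, then the question, ready for the model."""
    context = "\n".join(top_chunks(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

memory = [
    "the hosts compared AGI thresholds to percentages of people",
    "RAG retrieves relevant text chunks into the context window",
    "a calculator stored notes and custom programs for tests",
]
print(build_prompt("what does RAG put into the context window?", memory))
```

This is also the shape of the long-term memory idea in the conversation: past exchanges become chunks, and whatever scores as relevant gets pulled back into the context window on each turn.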

Hunter Powers
I stopped listening after "bird is the word," but sure, I'll go with it. Why not? If it happens that AI models start getting something close to personhood, should we be concerned? Is that an almost necessary step towards an AI dominance or whatever?

Daniel Bishop
Ha ha ha.

I think dominance will come in drips and drabs as well. We're going to start with people who are doing very rote, you know, classification of documents. As I mentioned, those jobs are literally already being lost.

Hunter Powers
What's the next job to fall?

Daniel Bishop
The next job? I mean, like a secretarial-type job. Maybe it's court stenographers. Maybe it's, well, maybe not lawyers, not one of the partnered people at a firm, but there are a lot of just-out-of-law-school baby lawyers whose main job is doing legal discovery. Folks are getting into legal discovery with NLP, with AI, with GenAI, whatever you want to call it. That's...

literally already happening. A highly skilled, highly technical, well, maybe not that highly paid field, especially for your first couple of years out of law school. The field of law is starting to be affected by: we can just hook this up to a RAG system that has read every law and every case. And when you say, here's the case we're talking about, all right, here's literally everything that's relevant to it. And it took four seconds and 17 cents.

or whatever it ends up being. Like, that's happening.

Hunter Powers
Fast-food drive-through order takers. That's the next job to fall. Because that's another one of those hard experiences, like the customer support line. But the customer support line has to do a lot of thinking. I feel like with the fast-food order taker, there's much less thinking; the bar is much lower.

Daniel Bishop
Hmm

Hunter Powers
to completely automate that, and then talking to a human is a thing of the past. And I know that it already exists to some degree, where you're not talking to a human, or they're already routing those things to call centers, so the person you're talking to isn't local. And when they say hello, by the way, I don't know if you catch this a lot, but the first hello from people is a recorded hello. Hello. Yeah. Yeah. Listen for the recorded hello.

Daniel Bishop
No. So they're still, like, loading a person up. Okay.

Hunter Powers
By the way, not just fast-food lines. I mean in a lot of conversations where you're talking with a salesperson, or it's a call, that opening hello, it's not real. It's a fake hello.

Daniel Bishop
So AI is coming for our hellos is what you're saying.

Hunter Powers
Yeah, hellos are gonna fall. It's gonna be a thing of the past. All we'll have left is goodbye.

Daniel Bishop
Hehehehehe

Uh, yeah, so, next jobs to go. I mean, think: what are jobs that require a lot of sifting through data to collate a response and give it out? I think legal discovery is absolutely one of them. Certain...

Hunter Powers
So, I don't know about legal discovery. I buy, again, that it'll augment legal discovery, but it's legal discovery. The outcome of a legal case carries such heavy weight that, again: hey, Hunter, you need to hire someone to do all of your legal discovery. Over here, I have GPT-5. It's not even released yet, but let me just say it's more than twice as good as anything on the market today.

Daniel Bishop
So that's not the diff - No, no, that's not the binary.

Hunter Powers
Or over here, a normal law firm that has a department that does discovery. If I can only have one, I'm going with the humans.

Daniel Bishop
The binary is, or sorry, the non-binary of this is: it's not have a team of people or have ChatGPT, it's have a way smaller team of people and ChatGPT. So the jobs aren't being completely obviated. Because it's such an augmentative tool, you need fewer people to do the same amount of work.

Hunter Powers
Or you'll use the same number of people to do more work. So that's one of the things; everyone goes to the, oh, well, I can get the same amount of work done for half the money, so that's what I'll do. But you could also do wildly better work for the same amount of money. And I don't understand why.

Daniel Bishop
So instead of having 10 baby lawyers, you have two baby lawyers.

Sure, you could do that as well.

Mm.

Yeah.

Hunter Powers
People seem so hesitant to go there. They'll say, all right, well, AI is going to replace coding, or heavily augment it, so you'll be able to do the same amount of product development work with half the number of engineers. Or you could do double the amount of product development work, have twice as many features, have the quality of your code and the quality of your end product be twice as good, for the same amount of money.

Why do we just instantly go to: fire everyone, and keep that support line you're calling, the one that's as close to hell on earth as exists, we're not going to make it any better, we're just going to run it even cheaper? I don't get it.

Daniel Bishop
Yep. I'm hoping that the call centers go away, frankly.

Hunter Powers
They're not getting better, no. They're just gonna get cheaper. They're just gonna get cheaper. And more horrible.

Daniel Bishop
find out. Let me take two highly paid, highly specialized fields. And I think these are the two options here. On the left, you have lawyers. We don't want, I don't want more lawyers. Hopefully there aren't going to be more legal cases happening. The same number of cases would come out even if there were more lawyers. And there's a saturation.

Hunter Powers
Okay.

Daniel Bishop
There's stuff out there about how there are too many people going into the law field, because there are just only so many lawsuits going around, as much as you might say, oh, Americans, they're so, you know, lawsuit-happy. Okay. But there are still only so many lawsuits and so many things that you need lawyers for. If you make that workforce two, five, ten times as efficient, you don't need two, five, ten times as many lawyers, or the same number of lawyers to do two, five, ten times as much lawyer work.

But for something like doctors: if we make it significantly cheaper, because a radiologist can take a look at a slide and get the output quickly, or a dermatologist sees, oh, this is a melanoma, then instead of having the entirety of certain medical processes done entirely by hand, it can be sped way up and triaged better. And you can take people

who have questions for their doctor and intelligently route them, or answer 80% of them, because with a RAG system you can say: here's the answer to your question, it's in the documentation you've already been given. That's going to be most of the queries. Oh, hey, I didn't take my medication this morning, should I wait until this evening instead? That's what, 15% of all the calls going to a doctor, or something? Questions like that can just go away, and then

the same number of doctors could do more work, or you could have more doctors able to get more done, make healthcare more accessible to more people, more of the time. So I think there is a case for slash-your-workforce-in-half or double-the-workforce, because they can do work more easily, more efficiently, cheaper, et cetera. So I think you can see both outcomes. And that's why I don't think we're going full Iain M. Banks or Terminator. We're gonna get a little bit of everything.

Hunter Powers
I think we both agree on where we're going. I don't think we both agree on what it's gonna be like when we get there, but we definitely both agree that they might be self-aware. And until next time, Daniel.

Daniel Bishop
Right. Thanks, Hunter.