This week on "They Might Be Self-Aware," @HunterPowers and @DanielBishop dive into the intriguing world of Tesla's Full Self-Driving. Is it truly getting more human, or just more confusing? We also take a ride on the AI hype train. Are you one of the lucky 2% using AI daily, or are you still stuck in the analog world? Plus, the future of education: Could AI replace homework and teachers? And what happens when employees start automating their own jobs? Does it even matter? We're not skirting around the ethical concerns either. From licensing agreements to the murky waters of digital privacy, we cover it all. And for the romantically inclined, we explore the bizarre yet fascinating concept of living vicariously through our digital twins’ love lives. Is AGI on the horizon, or are we just daydreaming? We break down the realistic timelines and wild predictions. Stay tuned, and don't forget to engage with us. Your future might depend on it! For more info, visit our website at https://www.tmbsa.tech/
00:00:00 Intro
00:00:36 The Human Side of Tesla's Full Self-Driving
00:06:37 Ethical Concerns: Licensing Agreements and Beyond
00:22:01 All Aboard the AI Hype Train: The Lucky 2%
00:26:04 AI in Education: Replacing Homework and Teachers?
00:30:31 Automating Jobs: Employee Innovations and Implications
00:52:09 Loving Vicariously Through AI Digital Twins
00:59:59 The Countdown to Artificial General Intelligence (AGI)
01:05:00 Wrap-Up
Hunter [00:00:00]:
Welcome to episode 18 of They Might Be Self-Aware. I am Hunter Powers, joined by the.
Daniel [00:00:07]:
One, the only, the real, the self aware, Daniel Bishop.
Hunter [00:00:10]:
Oh, how's it going, Daniel?
Daniel [00:00:13]:
I'm alright. How about you?
Hunter [00:00:14]:
Oh, I'm excited about this episode. Today we're gonna be talking about Tesla's Full Self-Driving: it's getting more human, but is it good enough?
Daniel [00:00:23]:
And how about in terms of transportation: the AI hype train, all aboard! Or maybe just 2% of us.
Hunter [00:00:30]:
This one I'm excited for: AI replacing homework, and maybe teachers too.
Daniel [00:00:36]:
Maybe we can replace employees or let the employees automate their jobs. And does it matter if they do?
Hunter [00:00:42]:
And all the while spackling over the ethical concerns. Specifically with these licensing agreements, we can.
Daniel [00:00:49]:
Also live vicariously through our digital twins love lives.
Hunter [00:00:53]:
Oh boy. It's all up next. Hit that subscribe button now. You are listening to They Might Be Self-Aware. So, Daniel, how often are you actually using AI in a given day? Like, what's your average?
Daniel [00:01:16]:
I feel like people like you and me are the absolute best-use-case people for this, like it's designed for people like you and me. But I don't actually think I use it every day. Most days, but I don't think every day.
Hunter [00:01:34]:
I got to be pretty close to every day.
Daniel [00:01:36]:
Yeah, I mean, I guess it's integrated into Copilot, so if I'm doing any coding, some stuff's gonna autocomplete. But I'm not always signed into that, and it's not like I open up ChatGPT every day.
Hunter [00:01:50]:
I find that... I guess this is just a little bit of a sidebar. First off, what is Copilot?
Daniel [00:01:55]:
Yeah, good question. So Copilot is something built into Visual Studio Code. And to explain what Visual Studio Code is: it's a code editor. I do my programming in it, and Copilot helps. It's like a really nice version of autocomplete, where you can type in the first line of something and it'll kind of fill in the rest of a function, or write up some documentation for you based on the contents of a file. Things along those lines.
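(A toy sketch of the kind of completion being described: the programmer types the comment and signature, and an assistant like Copilot proposes the body inline. The function and the suggested body below are invented for illustration, not actual Copilot output.)

```ts
// Toy illustration: the programmer writes the doc comment and signature,
// and the assistant proposes the body, which they accept, edit, or reject.

/** Returns the median of a list of numbers. */
function median(values: number[]): number {
  // Everything below is the kind of completion the assistant suggests.
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

console.log(median([3, 1, 4, 1, 5])); // 3
```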
Hunter [00:02:23]:
Yeah, it's very, very good autocomplete for code, in that it can sometimes autocomplete the equivalent of the whole next paragraph of code.
Daniel [00:02:33]:
Which is wild and can save a tremendous amount of time when you're writing.
Hunter [00:02:38]:
I also use Copilot, though I find that I use it way less for tab-completing large segments of code. More often I'm taking an initial idea of how I want to change the code and giving that as a prompt to one of the large language models. Say I want to refactor something and add a new feature: I'm selecting the entire code segment, then writing a prompt to make the change, then evaluating whether that change accomplishes the thing that I want to do. So yeah, less just tab-completing for me. But I've got to be every day, or, you know, 99% of the days, because I also find it's.
Daniel [00:03:27]:
A part of my life now. We've.
Hunter [00:03:28]:
Talked about this: like, it's almost become my Google. I am way more likely to go to one of the different large language models, like ChatGPT or Claude or Perplexity, and ask the question that I have, or explore the topic. Just because it's so much easier to explore a topic and dive deep, and dive deep, and dive deep. And yes, you have to go and verify a few things. Is that really true?
Daniel [00:03:59]:
Right. But it is not always true.
Hunter [00:04:02]:
It's... it's better than Google.
Daniel [00:04:04]:
For me, I wouldn't say that I'm using ChatGPT before Google without wanting to stand by that statement, and I do. There is definitely a "ChatGPT is wrong" component to it; it is not right all the time, and you do have to double-check things. But Google, the search page of Google, the "please help me find this thing that I'm looking for," even though I feel like I have very good Google-fu, is getting increasingly hard to use in a meaningful manner.
Daniel [00:04:40]:
Maybe the web's just been too polluted, or SEO got too figured out. With ChatGPT, it's "I'm having thoughts about this," and I'll give it a paragraph of text sometimes, like: here's the conceptual space that I'm looking in, what do you got? And it comes back with "here's some really important information, this is the stuff you're looking for," as opposed to me having to Google five separate terms and then dig way down to find something actually useful. It depends on what I'm searching for, of course. And poor Stack Overflow and places like that have already been scraped to heck and back.
Daniel [00:05:16]:
I used to go to Stack Overflow, a site used especially by programmers, though there are non-programming versions of it. Basically: hey, here's this weird error that I found, what's causing it? And someone eight years ago had the exact same problem as you, and here's the person who figured it out. All that got scraped and put into ChatGPT, and that, with every other answer ever, has made it and its ilk a tremendous tool for "I am trying to troubleshoot this thing, please help me out."
Hunter [00:05:48]:
Yeah. Stack Overflow is like an Internet forum for programmers with, historically, lots of helpful information. In fact, I have back there somewhere a Stack Overflow keyboard that consists of three keys: one key opens Stack Overflow, one key is copy, and one key is paste.
Daniel [00:06:08]:
That's very funny.
Hunter [00:06:09]:
Oh, there it is. There is the Stack Overflow keyboard.
Daniel [00:06:13]:
We'll put a picture of that online for folks, I think. This is a.
Hunter [00:06:17]:
Relic of the past, from when all good engineers had their own Stack Overflow keyboard. This was all you needed to do your job. And this is core... attention.
Daniel [00:06:28]:
Is all you need. We had "Stack Overflow is all you need."
Hunter [00:06:30]:
Yeah. And Stack Overflow signed a very large licensing deal, so they're now selling their data to these large language model companies.
Daniel [00:06:37]:
Right. Granted, it was being taken before, but now it's being licensed.
Hunter [00:06:42]:
I've been thinking about these licensing deals, and, you know, a lot of companies are... nobody wants their data scraped, but it seems like everybody wants to sell their data. Like, we already have.
Daniel [00:06:57]:
This data and we could get money for just having it. Great.
Hunter [00:06:59]:
They don't actually have any problem with their data being used in the large language models. They present like they have a big problem, but they don't; they just want to be paid for it. Once you're being paid for it: done, done deal. Oh, you're going to pay me $18 million, $12 million, $10 million? But, I hear.
Daniel [00:07:15]:
Yeah, right. That's absolutely what it is. There's lots of outrage until the checkbook comes out, and then all of a sudden, you absolutely may use my data for training, absolutely, of course. And we've talked about this in a broader sense: that seems to be the solution, right? If we're going to be using everyone's data online, make the licensing deals with everybody. Whatever you put online is going to be used, but you're being paid for it. And to take that a meta step farther, isn't that kind of how most of the Internet works already? So many sites are free because you're the product: your data is being sold to advertisers, and you're being served ads everywhere, and the Internet is getting increasingly crushed under the weight of them.
Daniel [00:07:57]:
But that's how the Internet already works. It's just that big companies were having to swoon and feign outrage for their users' sakes, so that the users say, yeah, right, we're also against this. And then the checkbooks come out and all of a sudden they're happy with it. They, the companies; not they, the people working for those companies.
Hunter [00:08:20]:
Correct. And I have a theory that they, the companies, aren't going to be happy very long. So we're seeing these very large deals, again, many tens of millions of dollars being paid for licensing, and the companies are thinking about this as annual recurring revenue, a new revenue stream, and perhaps the future of their funding. But I think we're going to very quickly come to the point where OpenAI is like, meh, we don't need to renew this deal. And when we do renew it: it was $10 million last year? All right, we'll do maybe $200,000 this year. Because they've already learned everything they needed to learn.
Daniel [00:08:59]:
Well, and they already did learn it. They frankly don't need that. This is to stave off lawsuits, and then they're going to get diminishing returns on everything new. We've already talked about this.
Hunter [00:09:11]:
Those numbers are going to come way down.
Daniel [00:09:12]:
Yeah. As soon as these models became available and started being used to put stuff on the Internet, everything on the Internet could no longer be assumed to be actually written by a person. So there's a pollution of the quality of the data coming in from that point forward. OpenAI and so on, are they just covering their tracks?
Hunter [00:09:32]:
And then for some sites, like a Stack Overflow, it almost feels like a going-out-of-business bankruptcy sale. They sold their data. What is the future of Stack Overflow? Not really a future there; they need to pivot to something else. Being an Internet forum for Q&A about programming questions, there's still some life in it. But in five years, if they're not doing something completely different and the OpenAI gravy train has dried up, it's hard to imagine them having any significant presence. They will no longer have their own keyboard. That was peak Stack Overflow.
Daniel [00:10:12]:
So I think this is along the lines of what some people are saying when it comes to thought work and AI. Folks, you know, there's plenty of digital ink being spilled online about how this is going to completely destroy these jobs. And I do think maybe someday entire swaths of these will be gone. I don't think programmers as a whole are going to be gone; programmers will continue to have questions, and there will continue to be questions that your ChatGPTs can't answer. I have definitely used ChatGPT to try and figure out certain problems, and it is not 100% of the time going to get me there, especially with new technologies, something that hasn't been brought into the training fold yet. Over time, the new versions of whatever are going to get better; as soon as a new framework comes out, they'll ingest all the API documentation and BAM, now you've got it integrated in there. But still, I think that people figuring out weird edge cases a computer hasn't run into yet, solving problems and sharing their solutions, I don't see that going completely away. And Stack Overflow as a site will probably have to undergo some kind of tightening of the belt.
Daniel [00:11:32]:
But I don't think they're going away. I really don't. Like for ten years, you know, maybe.
Hunter [00:11:38]:
20 years from now? We can only speak in decades, about as long as we could possibly consider.
Daniel [00:11:43]:
Exactly.
Hunter [00:11:43]:
I mean, once you get to the.
Daniel [00:11:44]:
Point we're all, it's the Matrix and we're batteries and a big weird spider robot running around the blasted battlefield of earth.
Hunter [00:11:53]:
I agree with you that, you know, the developers are not going to be replaced. But I'd suggest that the majority of people who have an opinion about this believe very strongly that, man, developers are on the way out very soon. They're telling people: stop getting a degree in it. Not that you ever really needed a degree in it to begin with, but don't pursue this as a career now. And I guess it's because I'm using it all the time that I can see that the code is still nowhere near. It's really far from being able to just give it something to build and trust the output, like "I don't need an engineer." It doesn't feel like we're close. It feels like the self-driving car.
Hunter [00:12:41]:
If you keep your hands above the steering wheel, the majority of the time it can kind of go where you're going. It's no comparison to an F1 driver, but it can, the majority of the time, go where you're going. But it'll also run you straight into a train, or whatever the latest Tesla accident was.
Daniel [00:12:58]:
Yeah. So the Nvidia CEO, Jensen Huang, I think is how you pronounce his last name.
Hunter [00:13:06]:
We can go with that.
Daniel [00:13:07]:
Yeah. He said: hey, kids, don't bother learning to code; you're not gonna need to do that by the time you're entering the workforce. That is a wild thing to say. Absolutely wild. It also might be true on the 15-ish-year timeline, which is maybe, as I keep mentioning, what it'll take for something that counts as a sapient, sentient, whatever, AI. I think it's not going to be 100% all at once, every programmer no longer necessary.
Daniel [00:13:40]:
It's going to be, and we're seeing this already, harder for junior developers to get hired, because you don't really need little baby developers who aren't very good and whom you have to really manage, when I can get the same amount of work done with a couple of ChatGPT queries and plopping in some new code. We're getting to that point.
Hunter [00:14:00]:
I agree with that. We are making engineers more efficient. So if you need the same amount of work done, and you hire engineers who are well versed in leveraging these tool sets, they can get more work done today than they could have five years ago. By a lot.
Daniel [00:14:16]:
Yeah. And then on the Tesla side of things: if you look at where we were a few years ago, three, five, certainly ten, I remember the early days of automatically driven cars. There was a race out in the desert (the DARPA Grand Challenge) where people were making these futuristic, Mad Max-style robot vehicles that needed to navigate from point A to point B. It was mostly just desert, but they had to get over rocks and keep going in the right direction, stuff that we would consider very simple, and we were having a lot of trouble with it, historically speaking, very recently. And now I've got this month-long trial of FSD Supervised for the Tesla, and I'm having to turn it off at least once or twice per trip. Because, oh, it was trying to get into this lane and it just wasn't going to do it in time. Or, oh, it was still accelerating toward.
Daniel [00:15:21]:
I'm not gonna say I would have hit the truck. You know, I'm not here to slander any companies.
Hunter [00:15:28]:
What kind of truck?
Daniel [00:15:30]:
It was a semi truck, painted black, which in Texas is a wild choice. I was going faster than it in one lane, and then I asked the car to move over; I hit the turn signal. It moved over into that lane and wasn't decelerating. Or if it was decelerating, it wasn't decelerating fast enough, to the point where I hit the brake. And I mean, that's just one of a bunch of different things. Like, oh, it tried to make a turn right in one.
Daniel [00:16:01]:
Oh, it did this; oh, it did that. It's not perfect, but it is so much farther along than where we were a decade ago, or five years ago. It truly is an absolutely astonishing piece of hardware and software. But we have these aspirations toward that 100%, and people complain about it not being there. And I think that's because it's being sold as being there; that, frankly, is the problem.
Hunter [00:16:26]:
It's being marketed as.
Daniel [00:16:27]:
Yeah, right. And the folks at Tesla are being very cheeky: no, no, it's not Full Self-Driving; it's FSD, and it's Supervised. But, like, come on, we know what they're basically saying it's supposed to be, and it isn't that yet. And, I mean, I feel like it.
Hunter [00:16:42]:
Was. But it is incredible. Five years ago, Elon said, hey, next year your Tesla will be going out and acting as a taxi for you while you sleep, making money, and.
Daniel [00:16:54]:
Then next year happened, and the year after that and the year after that.
Hunter [00:16:58]:
And I think he's still saying it. He hasn't stopped saying it.
Daniel [00:17:01]:
Right. Well, you've got to keep the hype going in that case. So, speaking of the Tesla thing: I've been using it because I'm fascinated by the technology, and it did something yesterday that I would describe as so human that I checked to make sure it was even on. And I have my hands on that wheel whenever it's going, because it's tried to do some weird stuff.
Hunter [00:17:26]:
It cussed out the driver in front of you?
Daniel [00:17:28]:
Yeah, right. Hey, and then the guy hits the car: "I'm walking here!" Well, it wasn't that; it was "I'm passing here!" That actually happened to me in real life, and it made my week. And we can talk about that another time... like, literally, guys, okay, I'm talking about it now. I'm in New York. I'm in Brooklyn.
Daniel [00:17:44]:
A guy almost gets hit by a taxi crossing the street. He slams his hands on the hood and says, hey, I'm walking here.
Hunter [00:17:53]:
He literally said that? I need the New York accent.
Daniel [00:17:56]:
I'm not gonna do it. And then he walked off, and my jaw dropped. I'm like, I cannot believe this actually happened. It was incredible. It was like seeing a meme in real life.
Daniel [00:18:13]:
So, anyway, back to the Tesla thing.
Hunter [00:18:14]:
Okay.
Daniel [00:18:15]:
I'm passing a semi; the relative speed difference is 5 mph, something like that. And I noticed, without looking down at the screen to see the little visualization, I'm watching the road, and I'm really close to the left side of the lane. Usually the car does a very good job of keeping to the middle of the lane. So I looked, like, is the FSD on? And then I realized, after it did it a second time as I was passing a second truck: it does what I do, which is, if you're passing another big vehicle, especially at a low relative speed, you tend to give them a little bit more room. And it was purposefully doing that. It wasn't doing it for cars, and it wasn't doing it for other vehicles with a higher speed differential. And I thought that was really neat.
Daniel [00:19:09]:
Obviously, it's just a thing they programmed in, but it's something I do, something I've seen other people do, and, I don't know, you can program in stuff that feels very human. I thought that was really interesting. So there's progress being made here, but it isn't "full." Not yet.
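(As a rough sketch of the heuristic being described: Tesla's actual planner isn't public, so everything below, the types, the 10 mph threshold, the 30 cm offset, is an invented assumption. It only shows how a rule like "give slowly passed trucks more room" can be programmed in.)

```ts
// Purely illustrative sketch of the passing behavior described above.
// Tesla's planner is not public; the types, threshold, and offset here
// are invented assumptions, not the actual implementation.
type Vehicle = { kind: "car" | "truck"; speedMph: number };

/** Extra lateral offset, in meters, away from a vehicle being passed. */
function passingOffsetMeters(self: Vehicle, other: Vehicle): number {
  const relativeSpeed = Math.abs(self.speedMph - other.speedMph);
  // Bias away only from large vehicles during a slow pass, matching what
  // Daniel observed: trucks yes, cars no, fast passes no.
  if (other.kind === "truck" && relativeSpeed <= 10) {
    return 0.3; // drift roughly 30 cm toward the far edge of the lane
  }
  return 0; // otherwise stay centered in the lane
}

// Passing a semi at a 5 mph differential nudges the car over:
console.log(
  passingOffsetMeters({ kind: "car", speedMph: 70 }, { kind: "truck", speedMph: 65 }),
); // 0.3
```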
Hunter [00:19:31]:
I think on the last episode, you said that when the trial is over, you don't anticipate continuing to pay for the service. Is that still the case?
Daniel [00:19:39]:
100%. It is very cool, and I look forward to it really being full self-driving someday, or full self-driving all the time on highways, or under whatever circumstances where they say: this has been totally certified, you're allowed to take your eyes off the road, we'll let you know when you get there. I look forward to that day. But I'm not going to pay $200 a month for the current iteration of things. It's better than it used to be.
Daniel [00:20:06]:
It's very good. But I'm not going to spend $200 a month for it, or whatever it ends up costing.
Hunter [00:20:13]:
Where's your inflection point? What's the magic number?
Daniel [00:20:16]:
I'd say the $200 a month is a fine number; it's the usefulness of it. I have zero trust in it 95% of the time right now. It needs to be way more trust, way more of the time. Or, you know, where I don't have to turn it off multiple times per trip or whatever. I very much trust it on highways to keep the lane and to make lane changes for me. I don't like its lane changes, but the.
Daniel [00:20:50]:
Staying in a lane, that's most of your driving, if you're really thinking about it, unless you live in a denser city. If you're taking a lot of highways like I do, that's a lot of the actual driving, percentage-wise. That's great. Love that. Don't have to pay for that. All the other stuff would have to be more trustworthy for me to actually pay for it. Otherwise you're paying to be a beta tester.
Hunter [00:21:17]:
I think there's hardly anything these days that you're not paying to be a beta tester of, but I guess that's another conversation. So you and I are somewhere between 80 and 100% of days leveraging a large language model; I think that's fair. There was a recent, I'll just call it article, I won't elevate it to study, that suggested, and it was specific to British respondents, that overall about 2% of our current population is leveraging these tools on a daily basis. Does that track with you? Where would you have put the number, just guessing?
Daniel [00:22:01]:
I remember hearing something, probably from OpenAI or at least involving them, that said it was hundreds of millions of active users, or a hundred million. And 100 million out of seven-point-whatever billion is, I mean, not a large percentage, so I guess that kind of tracks. But that's a worldwide sort of number; for your industrialized countries, or first world, or whatever you want to call it, I would have thought it'd be higher than that.
Daniel [00:22:35]:
Also, that's daily and I don't think I would count as a daily user.
Hunter [00:22:40]:
Yeah, it depends on the definition of "daily." I've worked at enough startups to know there are a lot of creative definitions of a daily active user.
Daniel [00:22:48]:
Right. So this was just a survey posted online, and presumably the question was something like, "do you use an AI tool like ChatGPT every day?" It was almost certainly something very much like that. I'm looking at the Reuters Institute site, and they talk about the findings; I don't actually see what the questions were, but it had to be something very close to that.
Hunter [00:23:19]:
That's a good point. A monthly active user would be a much better.
Daniel [00:23:24]:
Yeah, a much better number. Monthly, or look at weekly even. I think that would bring them up.
Hunter [00:23:28]:
To way over that. What percentage of first-world countries' population is even technically capable of using these tools? The two-year-olds don't really care about this.
Daniel [00:23:44]:
I would have said the USA is the highest, and it turns out it is: their reported rate was 7%. They looked at Argentina, Denmark, France, Japan, the UK, and the USA. 1% daily in Japan, and I don't know what number I would have guessed there; 2% in France and the UK; 7% in the USA. And again, that's daily. Like we said, weekly or monthly would really bring it up. But also, that's not out of 100% of people: there are children who aren't old enough, there are people who are elderly and don't really use technology, or just people who don't use technology.
Daniel [00:24:30]:
I think it was in that same set of results we saw that between 20 and 30% haven't even heard of ChatGPT or any of its cousins, so only around 70% of this surveyed group of people have even heard of it. And that's people who took an online survey, so these are connected people; they weren't just going out into the middle of nowhere.
Hunter [00:24:58]:
So, 2% of British respondents, 7% of the US. I think those numbers would start to suggest that we're just really early. Is that your conclusion?
Daniel [00:25:12]:
I think so. I don't know where in the hype cycle we are, but it's got to be near the beginning: either still rising, or at that top. I don't think we're in the trough of disillusionment yet.
Hunter [00:25:27]:
I guess in order to answer, there are two parts, or two parts at least come to my mind. One is: are there more people that these tools would be super helpful to today? Let's assume they don't get any better. And then: how much better, and how quickly, are they going to get, to where they could help a larger percentage of the population? The first one I'm not sure on. I guess the real question is: is it just that people don't know, or is it that, in the current state of this tooling, unless you're sufficiently technical, it's not that helpful?
Daniel [00:26:04]:
Yeah.
Hunter [00:26:05]:
Or you'd use it daily.
Daniel [00:26:07]:
I think "helpful" is a tough one. It is definitely helpful to me. Things like Copilot, things like ChatGPT, I can use to make my job easier. I know other people use them for writing emails. I know of teachers who have used them to help grade papers; I think we've got an article lying about on that. And kids are obviously using these things to cheat on homework assignments or essays or whatever.
Hunter [00:26:33]:
Would you suggest that all teachers should be leveraging something like ChatGPT to help them grade their students?
Daniel [00:26:40]:
I think that's tough, because what we have is a big hammer, and not everything is a nail. For math problems and so on, ChatGPT is not the right tool; there's plenty of existing tooling that will actually check all of the math, all the steps, to see if someone got it right and showed their work properly. I don't think ChatGPT is the right kind of tool for that job, even if it could do it really well. And we've seen that they're good-ish sometimes at certain kinds of math, but you can't really trust it. But grading essays and so on, it's really tough. I don't know if teachers, certainly not every teacher, actually read the entirety of every essay that gets written.
Hunter [00:27:30]:
I always wondered that in school, right? You would sometimes hide a sentence in there just to see if they'd catch it. I don't know if you ever did that. I would sometimes make up words, and I had a few that were, like, my signature words, and I would just insert them, just to see: did I get the circle?
Daniel [00:27:47]:
I've done this in my professional life as well. I'll insert a sentence into something that I hope other people are going to read, usually something like, "message me the potato emoji as an indicator that you've actually read this document," and literally once ever has someone responded to that. Any job, whatever: if you're writing a bunch of documentation, no one's really reading it. I mean, certain things are being truly combed through, but, man, I think a lot of it is kind of smoke and mirrors and performative. Should ChatGPT be grading our essays on, you know, having read whatever book it is that you're supposed to read? I don't think it should be entirely responsible. But if it gets good enough to the point where it can check for the things the teacher is supposed to check for, cite its sources, and bring the teacher's attention to the things they should be looking at: don't use it as a replacement for grading.
Daniel [00:28:51]:
Use it to speed up grading. I'm a big proponent of human-in-the-loop in general, for any AI/ML-type application. So a teacher shouldn't be obviated by ChatGPT; rather, you run all 30 students' essays through, and in two minutes, instead of reading every entire essay and having an entire afternoon's worth of work, you have 30 minutes of: here is where this kid had this problem; here is somebody typing absolute nonsense; this one was really short and didn't really get to the point. Bring the teacher's attention to where they can teach most effectively. I really think this will eventually become better integrated. People will try to replace teachers first, eventually.
Daniel [00:29:35]:
I really think they will, and I don't think they should; and they will eventually realize it is an augmentative tool.
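(As a sketch of what that triage loop might look like in code, assuming the OpenAI Node SDK and a placeholder model name, neither of which is named in the episode:)

```ts
// Minimal sketch of the human-in-the-loop grading assist described above:
// triage every essay, surface the trouble spots, leave the grading to the
// teacher. Assumes the OpenAI Node SDK ("npm install openai") and an
// OPENAI_API_KEY in the environment; the model name and prompt are
// illustrative assumptions, not a recommendation.
import OpenAI from "openai";

const client = new OpenAI();

async function triageEssay(student: string, essay: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "You are a grading assistant. Do NOT assign a grade. " +
          "List the three places the teacher most needs to look: " +
          "unsupported claims, structural problems, or off-topic passages.",
      },
      { role: "user", content: essay },
    ],
  });
  return `${student}:\n${response.choices[0].message.content ?? ""}`;
}

async function main(): Promise<void> {
  // Hypothetical inputs; in practice these would be read from files.
  const essays: Record<string, string> = {
    "Student A": "…essay text…",
    "Student B": "…essay text…",
  };
  for (const [student, essay] of Object.entries(essays)) {
    console.log(await triageEssay(student, essay)); // teacher reviews this
  }
}

main();
```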
Hunter [00:29:40]:
I guess the point, though, is that teachers would seem like a kind of obvious profession to start integrating this technology today. But when you actually think about it, 2% of teachers using this on a daily basis probably sounds about right, given the current state of the technology. It's not that they don't know about it, although I'm sure there's a large swath that doesn't. It's that the technology really isn't there yet, to where, you know, a state education board should be buying the enterprise license for ChatGPT. And yeah, students, but that's primarily cheating, what they would be using it for.
Daniel [00:30:18]:
Right.
Hunter [00:30:18]:
We can come up with legitimate reasons, but what are they all going to be using it for? Cheating. So they really shouldn't be using it either. Then let's take blue-collar workers in general as another large swath.
Daniel [00:30:31]:
Yeah. Forklift drivers don't need ChatGPT.
Hunter [00:30:34]:
No, 2% of forklift drivers using it on a daily basis? Okay. Yeah, actually, maybe that's a lot. So it's not that; the technology isn't there yet. And then you have the white-collar workers, working in an office or a remote office, doing clerical, analytical kinds of work. That's where potentially you are giving up on some productivity.
Hunter [00:31:03]:
But I don't know.
Daniel [00:31:05]:
I think so. There's a lot of desk work being done. You know, I've heard horror stories online, like, "my older coworker yelled at me for using copy-paste." Wild stories; I don't know if any of them are really true. But imagine the head-spinning amount of work happening at white-collar jobs that are not super thinky, which could probably be sped up dramatically via application of AI. So I do think that 2% will grow over time, especially as there are iterations of this that are more fine-tuned to be integrated into your workflow. Right now, few people would be willing to open up the ChatGPT site, interact with it, and then bring the results out of it into their workflow.
Daniel [00:31:58]:
But that's why we're seeing Google Cloud having things in Google Docs. That's why we're seeing Microsoft integrate things into Office. These things will very soon be part of the tools people are already using, or already are, and will just keep getting better. So that 2% frankly might not go up that high, because people won't realize they are already using it.
Hunter [00:32:23]:
Yes, though I think on the Microsoft side you have to pay for Copilot; it doesn't just come with Office. Office isn't free either, yes, but companies have to choose to pay the extra per-seat licensing fee for Copilot. And I'm suggesting that maybe 2% of companies are going to say, oh yeah, let's add that for all of our employees, and after thinking about it a bit, they may be correct. It's more of an opt-in: you've got to be a really early adopter. You have to strongly desire to embrace and learn about the technology in order to get the value that is there out of it, because it's not easy enough to get that value out unless you're willing, at this point, to become a mini prompt engineer.
Daniel [00:33:12]:
I absolutely agree with you. I don't think this is something that is ready to be integrated into every facet of our society, despite the fact that there are a million AI startups very much saying: you need to wear this pin; you need to hold this cute little orangish device and keep it on you at all times. Where's your R1, by the way?
Hunter [00:33:37]:
It's back there somewhere. I did turn it on over the weekend because there was a new update. The new update is: now when you take a picture, you can save the picture, a revolutionary feature, and it appears in this web app; you log in to the web app and you can see the picture. And then there's a setting that you can check to automatically create a piece of art from the picture. These are the breakthrough new features of the Rabbit R1. Revolutionary. I actually didn't try the features; I turned it on, it installed the update, and then it went back on the shelf.
Daniel [00:34:20]:
We're not there yet.
Hunter [00:34:22]:
We are in a hype cycle for AI; we don't necessarily know yet where we are in it. I was looking back at the recent-ish hype cycles. We had blockchain, that was a big one. And what is a hype cycle? A new technology that there's a lot of hype around, that people think will revolutionize things and create lots of business opportunities, often with lots of investor capital attached, in the belief that the next big companies are going to leverage this technology. So we had blockchain, the metaverse, the Internet of Things, quantum computing, autonomous vehicles, 3D printing, and looking back on all of these, they're probably all around that 2% mark.
Daniel [00:35:11]:
Well, not every product's for everybody, right? I love 3D-printed stuff. I think it's cool as hell. I don't have a printer; I don't need one.
Hunter [00:35:19]:
The question I think I'm posing is: what if it's not that we're early, but that we're further along, later in this hype cycle than we think? The technology hits this sort of peak saturation. When we look at the recent technologies that the world at large thought had the ability to change everything: okay, everyone's going to have a 3D printer; you're going to print all of your toys; you need a new tool, you're going to print it; something in your house breaks, you'll just print a new one. That's still an incredible future vision. But then the technology sort of hit its limits. It's still evolving, there are new 3D printers, there's really cool technology there, but it's for the two-percenters. So what if, with AI, we're actually at the point where it's not going to move much past 2%?
Daniel [00:36:13]:
I don't think this is going to be a 2% thing. I really think that AI, as it evolves and matures as a set of technologies, is going to get integrated deeply into everything. Not literally everything, but, like, if I "Hey Google" at my phone and tell it to set a reminder or something (it won't anymore, which is really annoying, but it used to, and my home version still does), slapping a little bit of AI in there, so you can ask it broader questions and just have it respond, is right around the corner. It has to be. And everyone's got one of these things in their home. Sure, George Orwell may be spinning in his grave at what we've invited into our homes, but we've got them and we're going to use them; a lot of people do, all the time. Everybody's got a phone, and that's going to have AI.
Daniel [00:37:12]:
You know, I've been seeing releases from Google and Apple: hey, we have these specific AI chips that are going to be in the phone to help do on-phone AI stuff. And it's not necessarily LLMs.
Hunter [00:37:25]:
They already have the Google Assistant and Siri.
Daniel [00:37:27]:
Yeah, they're not large-language-model powered; they're huge-conversational-tree powered. And I think that's going to change. It will get more capable, and it's going to be part of everybody's everyday life. You'll just ask your phone questions about stuff and it'll tell you. I really think we're going to get to that point. That's way more than 2% of people using it. 2% of people going to ChatGPT and using it? Sure. But 2% of people leveraging large language models as part of their daily lives?
Daniel [00:37:59]:
I think it's going to be way higher than that. All the teachers are going to have the board-certified software that reduces the time they spend going through each essay, because it highlights the problem sections. All the secretaries are going to have something that's just a better email filter, bringing the most important stuff up to the top because an LLM read through it. Stuff like that is going to be everywhere. And the way that I've described it, these are.
Hunter [00:38:30]:
All features that we have today. And you're just, what you're saying is that it will get better and it'll.
Daniel [00:38:35]:
Get more powerful, and it will be something we don't even realize is large language models or AI; it's just the feature that keeps my email inbox sorted.
Hunter [00:38:46]:
And I guess what I'm talking about is not tiny little gains. I understand that tiny little gains add up, but we're talking about fractions of a percentage of productivity increase from having a better email filter. That's great, and I want it; Lord knows I need it. But those aren't the kinds of productivity gains that are part of the hype cycle, which says: your entire job is going to be automated.
Daniel [00:39:17]:
And so you're describing the Jetsons. When George Jetson goes to Spacely Sprockets and there's the big screen with the face on it, and it talks to him, and then he pushes the one button and goes home for the day. That's what you're talking about.
Hunter [00:39:30]:
That's what I'm talking about, the AI he talks to. Though I'm not the one saying it's coming; that's what everyone's saying. They're saying, as you point out: don't get a degree in computer science, you will not need it. There'll be no jobs. There'll be no jobs.
Daniel [00:39:43]:
Some jobs will go away. Some jobs are going to evolve. You still need George Jetson to push the button.
Hunter [00:39:50]:
Yeah, eventually all jobs go away, but.
Daniel [00:39:53]:
Right.
Hunter [00:39:54]:
Define "eventually." Eventually there's no planet left, right? The sun burns out one day. But I'm not really concerned about that.
Daniel [00:40:04]:
So I don't know when there are no jobs left. I do think there are going to be attempts to get rid of a lot of jobs; it's going to semi-work and cause a lot of ruckus, and then we're going to settle back to a lot of the same jobs, or kinds of jobs, that we have now, enabled and empowered by AI. Let me give you an example of one of the problems starting to crop up that we're going to have to deal with more. With kids cheating on their tests and teachers using ChatGPT to grade tests, you have this weird ouroboros of AI eating its own tail. But if I want to hire a programmer, especially a junior programmer, and I put out a take-home test, I have very little guarantee that human hands had a lot to do with what I see at the end. So it's not just kids in school; this is absolutely a problem in the professional workspace, of being able to determine whether a person is the right fit for a company based on the output you've seen from them.
Hunter [00:41:14]:
I'd also argue: why should you care?
Daniel [00:41:17]:
Right. So that's the other side of the coin: if they can give me something that is commensurate with my expectations for the job, who cares where they got it from?
Hunter [00:41:26]:
Correct.
Daniel [00:41:26]:
So it means they're learning to use their tools. The future we need to get to is not saying "you shouldn't have cheated on this test"; it's "you used the tools available to you to provide the answer I was looking for." And I think that's something we'll have to come to terms with, because right now people, in general, think of that as cheating, as opposed to leveraging all available tools.
Hunter [00:41:53]:
Yeah, I manage lots of software engineers and computer scientists, and my position is: use the tools, don't use the tools, I don't actually care. You have full autonomy; work how you want to work. However, keep in mind you are going to be judged against people using the tools, and I would suggest that we are on a path to those people being far more productive. But you decide your own fate, your own path in life. If you want to use it, use it; if not, don't. I care about the end product.
Hunter [00:42:26]:
Do I have a good end product? Are you outputting good working code that doesn't have a lot of bugs and accomplishes the tasks in front of us? How you did it, as long as it isn't illegal, I don't understand why I'm supposed to care. Now, I think what happens is that the definition of a junior or associate programmer has to evolve; the bar should be getting raised above what you can copy and paste into ChatGPT. If I can take a work ticket, paste it into ChatGPT, give it access to my code base with, we've talked about this, RAG (retrieval-augmented generation), and it outputs the work, and you can do that for your job, then either I'm paying you too much, or I've done a poor job of defining the job, or you need to be completing ten times the amount of work. Because you're just copying and pasting stuff around. Granted, in a way that requires intelligence and understanding of code, and I want to respect that; there's value there. I need someone who can actually read the code, figure out what's going on, test it, validate that it actually is working code, and, when there are small issues in there, check them.
Hunter [00:43:36]:
Cool. So maybe it's more of a volume bar that should be evolving for an entry-level position in programming. The quality bar is still the same, because your experience isn't going to be there yet to have the skill set to raise that quality bar. But leveraging these tools, you do have the ability to raise the quantity bar. So I expect junior-level code, but five, ten times as much of it. Which, by the way, doesn't sound great.
Daniel [00:44:04]:
Right, right, right. Because now you have to manage five times as much bad code.
Hunter [00:44:09]:
I'm rethinking this.
Daniel [00:44:11]:
You just talked yourself out of it. But yeah, I think these are the kinds of things we're going to have to adapt to. Is it more, is it the same amount? It's almost like you're managing your own junior, by virtue of this hopefully-less-capable-than-you system. And even if it ends up more capable, there comes a point where all the stuff you'd give to a junior MLE (machine learning engineer), or a junior dev, junior whatever, is completely gone. You know, as long as there have been UX designers, there have been tools trying to replace UX designers. And maybe there comes a point where you take ChatGPT, hook it up to whatever the hottest tooling is, and it can start popping out the buttons and designs that you want better than any person ever could. If it fully takes over that field, maybe that subfield goes away. But I still think that designing products people are going to use, a whole website with a lot of infrastructure, et cetera, is certainly too complex at this time to completely automate.
Daniel [00:45:21]:
But even if you could automate them, it still needs to be something people would use. And so I think people need to be kind of involved in the process.
Hunter [00:45:30]:
And for the school angle, I'd let the kids use it. Let the kids write their essays with ChatGPT. Fine, go ahead. However, I'd suggest there's probably going to be more in-class writing: your test today is, we're going to spend the next two hours writing an essay that's going to be graded.
Daniel [00:45:48]:
Yeah, I think homework is going to be not as large a component of the learning process in the future, or at least not homework like we see it right now. Because if you just write an essay with ChatGPT, you haven't learned how to write; you haven't learned how to take a collection of thoughts and put them together in a structured way, and that's what you're trying to learn by writing an essay. So I think there are a lot of people who are concerned, and rightfully so, that this could be one of the least-educated generations, by virtue of people not learning, because they can just go "take this and give me an essay at the end." But properly doing education in schools, letting teachers focus on the teaching aspect of things because all of the grunt work got automated away, I think there's the potential for better educational outcomes there, where you can really focus: oh, the AI gave me a lesson plan. There's a site, MagicSchool AI, that I think is really interesting.
Daniel [00:46:53]:
I don't know if they're going to be the ones that win in this particular arena, but it's one that I found. MagicSchool AI is for, like, lesson plans, and there is a chatbot aspect to it. And presumably, over time, they're going to try to make it easier for educators to do the educating without having to spend so much of their time, especially often unpaid time, getting ready to teach. I like that potential future, where teachers can do the teaching, where students can directly benefit from personalized attention, or more personalized attention, because an AI helped enable it: here's where the students are; these are the things these students are ahead in and these students are behind in; help little Johnny over here with this thing. That'll be useful.
Daniel [00:47:44]:
Here's a set of instructions that could be broadly applicable to everybody. I feel like that's a better future than just saying "oh, you cheated" or "didn't cheat" or whatever, using ChatGPT to make a test. The system has to change to adapt to this. So: write essays at home? No more. Write essays in class.
Daniel [00:48:05]:
Having been given very specific instruction on how to assemble your thoughts in a structured manner, how to make a compelling argument in ways that have been customized for each student. Yes, let's go to that sort of future.
Hunter [00:48:21]:
And I say still write essays at home; just leverage all the technology that exists. Writing long-form essays is still a skill you should have. However, we shouldn't pretend this technology doesn't exist. So yeah, keep it, and make that part easier. I do think your idea that homework changes wildly and rapidly has some weight to me. That feels correct.
Daniel [00:48:48]:
Because the point is to test competency at the thing that you've been taught. Right? Same thing with a take-home test for trying to hire an engineer or something.
Hunter [00:48:59]:
You can also imagine your homework being to have sessions with these AI tutors. We've seen, with GPT-4o, these kinds of incredible conversational experiences. Google demoed something very similar, and theirs, I believe, was in a tutoring capacity: your job is to log 90 minutes of conversation with the tutor, who's going to poke at where you're missing information and explain concepts to you. And it's a voice conversation back and forth, which you're having in order to minimize your ability to automate it. Although I feel like I would still try to figure out a way of automating it and just have one AI.
Daniel [00:49:41]:
Talk to the other AI. Yeah, absolutely.
Hunter [00:49:44]:
100%. But that would be the 2% of the crowd, right? And the 98% would actually be having the conversation. But yeah: homework, cybersecurity training.
Daniel [00:49:53]:
All the required training that every kind of company has these days.
Hunter [00:49:57]:
Oh, you're gonna out me?
Daniel [00:49:59]:
Yeah. Well, no. So with this cybersecurity training, anybody who's taking this stuff is going to just put the thing on autoplay with the sound off, off to the side, wait for the video to be done, and then take the four-question multiple-choice quiz at the end. Right? What if it was an actual thing that you got to talk to? Like attending a lecture, and then it's asking you questions and verifying that you do understand the concepts. It'll take more time, but it would literally be more useful than the nonsense people do these days, which is all performative.
Hunter [00:50:32]:
Yeah. And I'm not saying that I did this, but I am saying it's possible, when you do have to watch those videos that you're just muting and letting play in the background, to go in and issue a specific command to get the videos to play faster. I was never able to go past 16x, but you could play at 16x. Wow. And again, I'm not saying I ever did this, but you could complete these two-hour sessions in just a few minutes, and it would show that you completed them in just a few minutes. And I would suggest no one catches on to this. There's probably 2% of people doing that, and there will always be 2% of people doing that.
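(The episode doesn't name the "specific command," but the common version of this trick is a one-liner in the browser's developer console. Whether it works depends on the player; many cap or reset the rate, and, as noted, the completion timestamps can give it away.)

```ts
// The usual form of the trick: set the HTMLMediaElement playback rate
// from the devtools console on the training page. Players often clamp
// high rates, and this is exactly the kind of thing real conversational
// training would make impossible.
const video = document.querySelector<HTMLVideoElement>("video");
if (video) {
  video.muted = true;
  video.playbackRate = 16; // browsers/players often cap this much lower
}
```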
Hunter [00:51:21]:
And for the 98% of people, they're watching it. Maybe muted; maybe that's a higher percentage. But still, most people are just going to watch it. But yeah, a conversation, that's the next evolution. And again, eventually that will be replaced too. What, no one needs to have conversations anymore, because I just have my AI that is trained on me have the conversation? We will get to that point, but we're not there yet. And it's not tomorrow.
Daniel [00:51:46]:
No. I think there was an article we were looking at a couple of weeks ago, we didn't end up talking too much about it, where there were going to be AIs trained on writings of yours, roughly trying to affect your personality, like some sort of digital clone that would then go out and date other people's AIs and then help figure out which people you'd probably.
Hunter [00:52:09]:
I liked this idea.
Daniel [00:52:11]:
It's such a weird idea, and it feels so wrong, but it also kind of... I mean, it's a lot better than, for the guys, a lot of swiping right and desperately hoping someone also swiped right on you, and for the women, going through a sea of matches trying to figure out, okay, who should I actually talk to? Have your digital twin go out and come back with, "I kind of gelled with these five people."
Hunter [00:52:35]:
Yes. But I also wonder... all right, so your AI is going to go on a virtual date with their AI; that's basically what it's doing. And presumably it's going to give you some report back on how this virtual date went. But I assume this will go to the inevitable conclusion that the relationship continues virtually between your AI agents, even when the.
Daniel [00:52:57]:
Real person decides not to date that person. And then the two AI is like, no, we're going to keep, and I.
Hunter [00:53:03]:
Wonder if it would, it ultimately makes people even more sort of anti social. And because they're, they're getting some of that relationship experience that's happening, but, and they can live vicariously through their own AI agent. They can read. Oh, had a great date last night. Went out.
Daniel [00:53:20]:
This is horrifying. Oh, my God.
Hunter [00:53:23]:
Yes, it is. But how does it not happen? It will happen. Because, by the way, it sounds like a great feature: read your entire future with this person.
Daniel [00:53:33]:
Okay. All right, let's take that another step farther. So there's a term I heard recently: parasocial relationships. It's people who watch, you know, YouTubers, and they basically get to know this person, but it's a one-way thing. It's not a social connection; it's parasocial. Now imagine it's not just one YouTuber. It's a collection of people.
Daniel [00:53:58]:
Maybe it's a live-play D&D podcast or something like that, and you have your digital twin able to be inserted into a virtual group of people, and you get to watch yourself hanging out with these other people, having a good time, while you sit in your place alone. I think we're going to see that kind of thing. That's definitely scary, and sad, but I also don't see it not happening.
Hunter [00:54:23]:
Slack's CEO last week said that he would like to see support for AI agents going to meetings. And I mean, already you can send something to take notes to a... sorry, not Slack, Zoom: to a Zoom meeting. But he wants the full thing: the image of you, the voice of you, being able to talk like you, attending the meetings.
Daniel [00:54:46]:
Gimme. I want it. I want it. There are so many meetings that... look, there's this very common phrase in industry: this meeting could have been an email. What if, even if it still was a meeting, you just get the email of the CliffsNotes of what was important at the end? So much time could be saved. And that's adapting to humans; it again becomes this weird layer that shouldn't have even needed to exist.
Daniel [00:55:12]:
But people keep having these meetings that should have been an email. And if you're gonna have it anyway, at least the AI is there instead of you, in the meeting that shouldn't have existed, to give you the email you should have just gotten in the first place.
Hunter [00:55:26]:
Yeah, there's something missing there. I mean, that's been true forever, right? I was thinking back to Ferris Bueller's Day Off. There's a scene in that movie, from quite a long time ago, where they're just recording the class lecture, and every day more and more people are absent, and they just have their tape recorders on their desks recording the teacher. And I think it concludes with, one day they come in and the teacher's been replaced by just a speaker. So again: why even have class? Why even have the meeting? That's the real question, but we don't spend time actually answering that question.
Hunter [00:56:07]:
There must be something more to it than just conveying some information, and that's probably the area we should be focusing on: how do we increase that part?
Daniel [00:56:19]:
Half the reason why you send kids to school isn't just for the learning; it's for the social aspect of things. Regarding the speaker-and-a-bunch-of-tape-recorders-in-a-classroom idea: that's really just the precursor to online learning, which has become fairly available and ubiquitous and so on. I want to go back to the cybersecurity training 16x sort of thing.
Hunter [00:56:46]:
That I gave last time? It wasn't me.
Daniel [00:56:50]:
Right, you just heard it somewhere. Yeah.
Hunter [00:56:53]:
I can't even remember who told me, right?
Daniel [00:56:57]:
So we talked a little bit about how I have an, I think, extremely well-founded fear that people are going to be catfished much more often. Like: "your kid has been kidnapped, send us a bunch of money," and it sounds like your kid on the other end of the line, and other similar kinds of things, because people are already being scammed in less scary ways. I've also seen a lot of folks falling for things like someone's Facebook account getting hacked and then posting, "I just made all this money, isn't it great?" And then people's relatives say, "Wow, great for you. I'll click that link, sure." Oh God, you really need to stop falling for these kinds of things. I think we could help via something that's almost like white-hat hacking: we could have AIs pretend to be a compromised version of us to test our relatives, to see if they would fall for scams, to see if they would fall for these weird security things. Like: okay, AI, I need you to call up my parents pretending to be me, say that I'm stranded and need them to wire me $10,000, and see if they fall for it.
Daniel [00:58:08]:
And, like, well, they did, versions of it, not...
Hunter [00:58:12]:
Where you and I worked, or wherever, they used them on us. Do you remember that? You would get these phishing emails; that's how it usually came across. The company that was employing us was paying some other company to try to phish us, and then say, ha ha, we got you. And every time I see one, I'm like, all right, let me see what's going on here. This is interesting. And then they'll be like, "you clicked the link."
Hunter [00:58:34]:
And I'll be like, yeah, I clicked the link. I wanted to understand what was going on. Of course I know it's stupid. And then, yeah, so I've gotten...
Daniel [00:58:41]:
Texts from the CEO before, you know: "hey, I need you to go buy a bunch of gift cards." Really? You need me to buy a bunch of gift cards? Me, the guy you never talk to? Gift cards? And people fall for these things, otherwise those scams wouldn't exist. I think AI is a fantastic tool for that kind of use case: helping test people against scams to see if they'll fall for them, and then educating them if they did, because people are going to be using the same technology to try and scam people. And there's apparently already insufficient awareness among a surprisingly large number of people who keep falling for this kind of thing. So ramp it up on the good guys' side.
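For the curious, here's a minimal sketch of the scam-drill idea Daniel describes, under heavy assumptions: the scenarios are invented, and generate_drill_message is a hypothetical stand-in for a real LLM client, not any actual API.

```python
import random

# Hypothetical "white hat" scam drill, in the spirit of the idea above.
# A real tool would also need the target's explicit, prior consent.

SCENARIOS = [
    "stranded abroad, needs money wired immediately",
    "the boss urgently needs gift cards",
    "a hacked account sharing a get-rich-quick link",
]

def generate_drill_message(persona: str, scenario: str) -> str:
    """Stand-in for an LLM call that drafts a test message
    impersonating `persona` under the given scenario."""
    return f"[DRILL] Impersonating {persona}: {scenario}."

def run_drill(target: str, persona: str) -> None:
    scenario = random.choice(SCENARIOS)
    message = generate_drill_message(persona, scenario)
    # A real tool would deliver this over a consented test channel,
    # record whether the target clicks or replies, then follow up
    # with a short "here's how you'd spot this" lesson.
    print(f"Sending to {target}: {message}")

run_drill("mom@example.com", persona="your kid")
```

The point is the loop: draft a fake scam, deliver it safely, then teach whoever fell for it.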
Hunter [00:59:33]:
I want to go back to the...
Daniel [00:59:34]:
I'm going to make that into a company. Nobody steal that idea. That's mine.
Hunter [00:59:38]:
Daniel, I want to go back to the 2% number. So let's say that today 2% of the population is heavily using AI as part of their job, and consciously using it. Give me two years out: how does that number change? I mean, it goes up, but what's your prediction? 2% today; in two years, what percentage of the population will be consciously using AI as a significant portion of their job?
Daniel [01:00:13]:
So you said consciously using. That is not the "it's integrated into Word and you didn't really realize it" case, correct? This is like: I'm going to ChatGPT, or I'm specifically loading up the thing that has been marketed as AI, and I will use that. A couple years from now? I mean, let's call it 5%. And whether it's the Brits or the US, you know, if we're at seven, it'll go to 15, something like that; I think it'll double-ish. I think it's gonna peter out not much higher than that, because as more people get exposed to it, more people use it, and it's more young people than older people as well, so as demographics shift, more folks are going to use this. But I think it's going to become more seamlessly integrated into everything and less of an overt
Daniel [01:01:05]:
"I'm going to the AI box and I'm going to go use that" thing. AI just is part of the stuff that we do. The way that I've always talked about natural language processing to people is that it's at its best when you don't even realize it happened. NLP should be a background thing that just makes everyone's lives more convenient. And we are truly starting to see the realization of that particular vision through tools like ChatGPT.
Hunter [01:01:36]:
Like those technologies where you say specific words and make people do whatever you wanted them to...
Daniel [01:01:41]:
Do. That's the other NLP, neuro-linguistic programming; I mean natural language processing. So this is using machine learning, AI, whatever, to look at natural language and then process it: getting insights out of it or doing something with it. Large language models: being able to take text in and give other text, or audio, or whatever, out. Yeah, I think that's the future, and I think the future stops being "this is the AI response" so much as... you know, Google flubbed it pretty hard with their AI Overviews at the top of search results, telling you to put glue on pizza, for example.
Hunter [01:02:19]:
Darn good pizza, by the way, you.
Daniel [01:02:22]:
Ate it immediately after we were on the call.
Hunter [01:02:25]:
One bit of cheese fell off, too.
Daniel [01:02:27]:
A little bit of gasoline for that spicy flavor, too.
Hunter [01:02:29]:
Oh, that spicy, spicy meat sauce.
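Picking up the text-in, text-out shape Daniel described a moment ago, here's a minimal sketch of "NLP you never notice." The complete function is a hypothetical stand-in for whatever model client you'd actually use.

```python
def complete(prompt: str) -> str:
    """Stand-in for an LLM call: natural language in, natural language
    out. (Hypothetical; swap in a real model client.)"""
    return "billing"

# NLP "at its best" is invisible: no chat box, just a support ticket
# silently routed to the right queue.
ticket = "My card was charged twice for the same order."
queue = complete("Route this ticket to one of: billing, shipping, tech.\n" + ticket)
print(f"Routed to: {queue}")
```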
Daniel [01:02:31]:
I do think that's going to get a lot better, and then it's eventually going to meld into the experience to the point where you don't realize where the AI starts and where the product you're using and the content you're consuming end.
Hunter [01:02:48]:
Yeah, I don't know that I'm convinced it's gonna get a lot better. I definitely think it's gonna get better, but I think it's gonna be more incremental than we believe, just based off the history of this kind of tech: the technology promises it's gonna get wildly better.
Daniel [01:03:06]:
The peak of inflated expectations.
Hunter [01:03:08]:
Yeah, and you get an incremental gain. So your concept of "we're at 2% today and at 5%, double, maybe a tiny bit more, in two years" feels very right to me. However, there is one wild card in there, and that's AGI. If we don't discover AGI, then in two years we're at 5%. If we discover AGI, it could be substantially higher. It's just an unknown.
Hunter [01:03:38]:
That's the black swan.
Daniel [01:03:39]:
If it's AGI, it's 100% of people, because it's going to take over or whatever. So.
Hunter [01:03:45]:
And it might be.
Daniel [01:03:46]:
If we're at 2%, it's a question of where on that hype cycle we are. If we're still rising, the 2% goes higher. If we're at the top, 2% is as high as it goes; it'll go down and then come back up as it gets fully integrated into things: once people start hitting everything with the new hammer and realize, oh, this is for nails, then it will have its use. And I do think large language models in their current form, like, we're still seeing new ones, slightly new architectures, bigger context windows, this or that, but what an LLM is at its core seems to be solidifying a little, from this red-hot sphere of new possibility into a nice cool orb of a neat thing, and that neat thing can do stuff in a productive way. Being used as a copilot for programmers is one of the obvious use cases; there are other use cases people are working on right now. We're seeing a dot-com-style boom, with a bunch of funding having been thrown at anyone who even says "AI."
Daniel [01:04:49]:
And you can start up a company with half of a dream and a GPT prompt. That's going to go away, probably soon. I don't know exactly when; maybe it's already started. I don't know if the money train really is still going, but it's going to come down at some point. And those that crawl out of that trough are going to be the ones that figured out actual use cases that actually move the needle for businesses, for revenue, or revenue retention, or productivity, or this or that. I think we're starting to see some of those use cases come out: oh, there's where AI is being used in a useful and meaningful manner. And then, right, and then here's...
Daniel [01:05:30]:
Whoops. It turns out it wasn't any good at that thing. And then that company is going to fail, and we'll see a bunch of both of those things happen.
Hunter [01:05:37]:
And potentially even those incremental gains are a kind of zero-sum game. So yes, these companies will increase efficiency by 10%, 20%, but by doing that, other companies will decline. Stack Overflow, say: is it a net gain? Yeah, Stack Overflow goes away, and they capture that business. But it's not as if genuinely new things get developed until we reach AGI. And I'd suggest that potentially this adoption rate is also a representation of how far along we are.
Daniel [01:06:08]:
What.
Hunter [01:06:09]:
What percentage of AGI do we have today? We've got about 2% of AGI, I wonder. I could buy that premise. But again, the black swan: tomorrow it could be at 100%. It's unlikely.
Hunter [01:06:21]:
We can't predict exactly when it's going to happen. That number is going to tick up. If we do nothing, in two years it'll probably be at 5%, or...
Daniel [01:06:31]:
And, like, we don't know when it's going to make that big jump.
Hunter [01:06:34]:
Yeah, the jump.
Daniel [01:06:35]:
It'll be like what LLMs were. I mean, the transformer architecture was huge; it was a gigantic change in the whole landscape of NLP. And then transformers got used for what the modern LLM is. And some of the...
Hunter [01:06:47]:
They got us from 0% ability to 2%, which, as a percentage increase, is infinite.
Daniel [01:06:52]:
Right, right. Yeah, we went from "these are neat toys" to "there are some use cases for some of these things." Everyone's seen this thing that kind of talks at you like a person.
Hunter [01:07:03]:
We just have to wait till the day they are self-aware, and then we can, I guess, retire this.
Daniel [01:07:09]:
This.
Hunter [01:07:09]:
This podcast. This has been episode 18 of They Might Be Self-Aware. Again, I am Hunter Powers, and today I've been joined by Daniel Bishop. Daniel, it's been a great conversation today.
Daniel [01:07:25]:
Fun one as always.
Hunter [01:07:27]:
If you, listening out there, are not subscribed, why wait? Tomorrow it could all go away, so enjoy it while it lasts. Even though we're only at 2% on the way to AGI, hit the subscribe button. We're on every major platform, but why not just use the one you're listening to right now? That's gonna be my suggestion. But yeah: YouTube, Apple Podcasts, Spotify, wherever you want to listen, you can find They Might Be Self-Aware. And we will return next week for yet another episode.
Hunter [01:08:00]:
Yes, until next time: they might be self-aware.