
AI With Friends
Welcome to AI With Friends, your weekly launchpad into the world of Artificial Intelligence. Hosted by Marlon Avery, a pioneer in GenAI innovation, alongside Adrian Green, VP of Engineering at LiveNation, and Sekou Doumbouya, Senior Staff Cloud Systems Engineer, this show is your go-to source for all things AI.
Our hosts bring diverse expertise—from AI strategy and tech innovation to industry leadership. Every week, they break down the latest AI trends, interview top experts, and simplify complex concepts for AI enthusiasts, entrepreneurs, and tech professionals alike.
Marlon, Adrian, and Sekou combine their unique perspectives, whether it’s Marlon’s collaborations with tech giants, Adrian’s leadership in global entertainment engineering, or Sekou’s cloud systems expertise. Together, they make AI insights accessible, actionable, and exciting.
Tune in live on LinkedIn every Wednesday at 10:00 AM ET, or catch us on all major podcast platforms.
Here’s what you’ll get:
- Cutting-edge insights from AI leaders
- Real-world applications of AI technology
- A vibrant community of forward-thinkers
If you're ready to stay ahead of AI trends or spark your next big idea, join us each week for an hour of engaging, thought-provoking content.
Subscribe now and become part of the future of AI with AI With Friends!
AI With Friends
EP13: Generative AI's Trillion-Dollar Impact, Meta's Election Misinformation, Anthropic's Protocol
In this episode of "AI with Friends," hosts Marlon Avery and Adrian Green explore the dynamic world of artificial intelligence, offering insights into its latest developments and challenges. From the economic impact of generative AI to Meta's controversial claims about election misinformation, the duo navigates through a spectrum of thought-provoking topics.
Key Highlights:
- Economic Impact of Generative AI: McKinsey & Company forecasts a potential $2.6 to $4.4 trillion annual boost across industries. Marlon and Adrian discuss how AI literacy is becoming essential in the job market, with software engineers leading the charge in AI tool utilization.
- Meta's Election Misinformation Claims: Meta asserts that AI-generated content accounts for less than 1% of election-related misinformation. The hosts critically assess this claim, reflecting on Meta's history with misinformation and the broader implications for content moderation.
- Tenstorrent's AI Chip Development: With a $693 million investment, Tenstorrent aims to challenge Nvidia in the AI chip market. Marlon questions their ability to compete, while Adrian highlights the expertise of Jim Keller, Tenstorrent's CTO.
- Anthropic's Model Context Protocol: Anthropic's new open-source protocol promises seamless integration with platforms like Google Drive and Slack. The hosts discuss its potential to revolutionize AI assistant interactions and the company's ongoing innovation.
- Sustainability Concerns in AI: As AI models demand more energy, the conversation shifts to sustainable solutions like neuromorphic computing and nuclear power. The hosts emphasize the need for efficient algorithms and hardware to address these challenges.
Notable Quotes:
- "I think it's gonna be a requirement in the job field, in the job market to understand how to use these tools. I believe in five years you're going to have a new term called AI illiterate." - Marlon
- "Every week we have something to say about Anthropic. They are really doing a lot." - Adrian
Join Marlon and Adrian as they dissect these pivotal topics, offering expert analysis and engaging discussions on the future of AI. Tune in for an episode that blends technical insights with a conversational tone, making complex concepts accessible to all.
Follow the Hosts:
- Marlon Avery: @IamMarlonAvery
- Adrian Green: @InfamousAdrian
- Sekou Doumbouya: @SekouTheWise1
Marlon Avery: Yo, yo, yo.
Adrian Green: What's good, Marlon?
Marlon Avery: What's up, man? How you doing?
Adrian Green: Yeah, I'm doing good over here. Doing good over here. Been a busy week, busy week at work. Definitely a busy week in the tech world. Really interested in the things we got lined up today to talk about.
Marlon Avery: Yeah, man, same tech world. I'm over here in the government capital as well. Nation's capital district, chocolate city, you know, if you will.
Adrian Green: Right.
Marlon Avery: All the ghetto, good old, good old nicknames, you know. It's been interesting. We didn't have a chance to talk about this, so I think we'll kick it off with this. I was invited to an event yesterday here in D.C., the form it 100, where they were recognizing some of the individuals who have made an impact in the tech community within, like, the last year or so. And so, super dope, but also they had some keynote speakers as well. And so one of those speakers was, and please forgive me, I guess, like, the leader for the census. And she was talking about the infrastructure of what they built from the last census up until now. And so they had integration of AI, they supported multiple languages and things like that. And then they sort of roadmapped what they're looking forward to and how they're going to integrate some of these things. And then also somebody from the IRS spoke as well. And so it was.
Adrian Green: Very government, very, very official. It sounds like you got that Census Bureau and the IRS. Yeah, you're definitely in D.C. yeah, but.
Marlon Avery: It was good, because, you know, you kind of got to see how some of our government entities are thinking and integrating and looking forward to building solutions and everything using technologies like AI. And it was good to see. It was rewarding to see, you know, because, I mean, I know in some areas of government they're still using, like, punch cards, you know, and stuff. And so, yeah, man, that was rewarding to see and fun to see. And so, you know, shout out to them.
Adrian Green: Shout out to the government, man. They really try their hardest. It's, it's tough, it's tough to be in tech and in the government because the first thing you're going to think about is security. You know, you're trying to roll out, you're trying to roll with the latest and greatest hop on Whatever's new and hip out there. And oftentimes, unless the government created itself, it's going to be like, nah, not so fast. You know, we got a, a stack of paperwork to go through as far as approvals on getting this.
Marlon Avery: You know, is it tough because it's tough, or is it tough because of the talent market, the industry and stuff that's happening right now, or do you think is this tough, period?
Adrian Green: Well, I think that it is tough for a couple of reasons. The talent market in the industry, I think it probably is as strong as it's been as far as the talent market out there. I would suspect the industry, though, like, when you think about government, it's, you know, everyone knows, you land that government job, it's cushy. You know, you can park, you know, and parking is not a bad thing. But when you get that government job, you're going to see a lot of your folks from yesteryear. You know, you may see some people that watched, like, Honeymooners when it was on the air, you know, and saw the transitions from black and white TV to color TV. So you'll get people like that in there, typically, and, this is just my own experience, it's tough to get those people to adopt new technologies. These people have been there years, decades before you got there, and their stuff is still running.
Marlon Avery: So it still works.
Adrian Green: There is a. Still works. So there is the "if it ain't broke, don't fix it," you know, first and foremost rule of thumb there. So it's tough for anything in the government, in my opinion, to stay as up to date as it could. Like, a lot of people will say stuff like, well, why can't we just vote on our phones? And why can't a lot of this stuff just be digitized and automated? And the wall that you're up against is security, a lot of the time. Security. And we want everything operating within the government to be as accurate and as secure as we can. So that means everything has to be tested and vetted and tested and vetted.
Marlon Avery: Yeah, I'm going to lean towards: it's less about security, more so, if it ain't broke, don't fix it.
Adrian Green: Okay.
Marlon Avery: You know, because also when it's, when it's time to upgrade, when it's time to develop a new technology that also requires new skill sets and it requires the workforce to either grow with it or get it left behind.
Adrian Green: Yeah.
Marlon Avery: And I think it's more so that. I mean, and I don't think that's just in government. I mean, I'm pretty confident you've seen and you've been a part of companies, teams, municipalities for sure, where they have. They're intentionally. Dang, I hate to put it this way. They're intentionally keeping things a certain way because it provides job security for them.
Adrian Green: Well, that's what the guy's thinking, I'm sure. I mean, the individual who is in charge of the system, yeah, for sure. But at the top, the only thing that they're going to care about is the bottom line. So, for what you just laid out right there, it really is about the incumbent kind of staff that they got, because that could stand in the way. It's like, you know, I learned Perl for this back in '88. You know, I'm not trying to learn anything else. I'm going to stand in the way of any new technology coming. But I would say that along with that, you know, there's only so much that you can get away with. They can make it difficult, but they can't stand completely in the way of something that's going to be overwhelmingly helpful.
Marlon Avery: I agree. But it can definitely be delayed. I mean, I think you see that across healthcare, government, banking. You know, there's definitely systems, old processes, everything that are still alive there that probably shouldn't be, especially if you want to be in the forefront of things like security.
Adrian Green: And you know, a lot of it has to do with contracts too because a lot of contracts, if, you know, you know those things will be multi year sometimes for certain software suites or something like that that you're using. So you know, we already got that. We gotta wait for the contract to run out and you know, if that's a couple years, who knows where the, who knows what the wave's gonna be then, you know.
Marlon Avery: Remember the good old days when like your, your community, your zip code, your apartment complex signed like a 10 year deal with Comcast.
Adrian Green: Yeah.
Marlon Avery: Now you're locked in to a service Internet Provider, phone service, whatever. Maybe like, oh, my God, I wish I could switch.
Adrian Green: Yeah. What you guys.
Marlon Avery: Yeah, man. So here, I'm gonna kick us off here, man. Hey, guys. Number one, man, welcome. Thank you and everything for tuning in, joining in. If you guys don't know, you know, man, we're a couple of individuals, a couple of builders, a couple of engineers, software engineers, a couple of, you know, AI architects, cloud engineers as well. Most importantly, man, we're just fans of the space of artificial intelligence. And we've been doing this podcast privately for, like, I don't know, a year and a half, two years. And then finally we got to the point where it's like, you know what? This would probably be beneficial to the rest of the world. And so with that, man, we decided to come on here and talk about the things that we're seeing, building, some of the things that also, too, we predicted. We've been very, very good at that. We've been very good at predicting what's next. We're very bad at predicting when, because things are just moving at, like, a weird speed. It's like, oh, man, they got here really quick, and then, oh, we still haven't gotten used to that yet. And so with it, man, we created this podcast, man. This is, you know, AI with Friends, man, your weekly podcast to get to know the different things that are happening in artificial intelligence, gen AI, some of the building components. We get to cover some of your favorite companies here, like OpenAI, Microsoft, Google, and so on and so on. With that, man, we're going to enter into one of the companies we haven't talked about as of yet, which is McKinsey & Company. I read an article from McKinsey & Company not too long ago, and they said generative AI has potential to add anywhere from $2.6 to $4.4 trillion annually across industries, reshaping functions like customer operations, R&D, marketing, and software engineering.
Key industries like banking and pharmaceuticals are seeing transformation potential, from personalized customer interactions to accelerating drug discovery. They say the technology promises significant productivity gains but requires businesses to address adoption and workforce readiness. Adrian, man, that's a large number, $2.6 to $4.4 trillion. This is the potential of what we can see with the AI economic impact and everything, particularly here on U.S. Americans primarily. Man, what's your thoughts and stuff here?
Adrian Green: Yeah, it's just going to get. I think that that's a. It really kind of reinforces my viewpoint of where I see things right now. I see, you know, Facebook has said recently that the internal use of AI tools has increased their productivity. I think ADP was another company cited to say, like, the exact same thing. And, you know, from a software engineer's perspective, I think as far as the early adopters, the software engineers are getting the most utilization out of it, because we've seen, you know, those reports that X percent of code that's in GitHub currently is AI generated, which I believe was a significant portion, like over half, like 30.
Marlon Avery: It was like 30 to 40% already or something like that.
Adrian Green: 30, 40, 40% already. So approaching half. So it's significant. That means that, you know, really, in lieu of typing out a lot of syntax, let's just take it from that perspective. Just in lieu of typing out the syntax that you need, even with all the keyboard shortcuts in the world and whatever code editor that you have, whatever templates that you have, the AI is getting to a point where it can do that simple stuff a lot more effectively than you can, and faster. Maybe not more effectively. Sorry, let me backtrack there. Not more effectively, but faster. You know, the code that it's going to generate isn't going to be the most optimized, but if it's just regular, standard boilerplate kind of code mixed in with, you know, some other jazz, it's like you would be really shooting yourself in the foot by not using AI to help you out as a software engineer. And it's just going to go up. I mean, the broader adoption in the workforce, I think, is going to come through tools and integrations, like, you know, Google integrations, that are attached to the suites that people already utilize. So we'll see what effect that has there. But, you know, there's one personal thing I got to bring up about it, which is that when I was first working with computers, one of my first jobs was just help desk. So that's just fixing computers. I also worked for a mobile computer repair company too, for a little bit. What I noticed then is just, like, the real frustration. People get so frustrated when their computers aren't doing what they want to do.
Marlon Avery: Yeah.
Adrian Green: And, I mean, when I was there, it was like banging on, like, why won't it do this? Or, see, it's doing that thing again, and I want it to do this.
Marlon Avery: Yeah.
Adrian Green: When it comes to the, you know. Over time, hopefully those people have learned more and become, you know, less frustrated now. But I would have to say the average user experience with generative AI, or, like, what would you call it, so let's say, like, ChatGPT, for instance.
Marlon Avery: Yeah, yeah, yeah.
Adrian Green: So when I introduce people to it and they use it, I feel that same vibe again, of just the frustration of: I'm asking it to say it this way and do this thing, and it's not doing it, and I don't know what to do. That is, you know, in my own experience of putting things together and measuring the impact and how effective things are, that is a hump that people have got to get over. Like, the population in general is going to have to get over it in order to be more comfortable using AI products and having it appropriately, you know, speed up their workflow and take a lot off of their plate. A lot of people don't know how to use it. So they'll put in the first prompt or, you know, the first request, and they don't like what they get. They get frustrated, they get up, they walk away, or they'll say, I'll just do this myself. This is more frustrating. This doesn't work.
Marlon Avery: Yeah, yeah.
Adrian Green: So I think we're still there, but it's doesn't that story is not the same as the software engineers really perspective on it, but that's just how it is right now. Do I think it's temporary? Yeah, I do this. There's going to be like AI is going to be up against like, I think everything that we do and it's going to keep, you know, kind of bouncing up against everything that we do until it finds a place to lock in. And when it does, you'll it really, the masses will see what we're experiencing, which is this, you know, great productivity tool.
Marlon Avery: Yeah. I think the simplest way to explain the benefit, particularly around the speed of, you know, these tools and what it does for your productivity, what it will do for the workforce, is: you have two people leave their house at the same time. Let's say they're neighbors, and they're both going to the grocery store. One person pulls up Waze, the other person pulls up MapQuest.
Adrian Green: All right. Two dinosaurs. Gotcha. We got a T. rex and a stegosaurus in the back seat.
Marlon Avery: Well, no, wait. I mean, Waze is modern. Modern? Oh yeah, yeah. But you pull up MapQuest versus Waze, and I would say Waze is equivalent to using a generative AI tool to kind of help you get to a destination. And you can do it the old way and everything, or the way you're comfortable with, which is using MapQuest. It'll get you there. You get to your goals eventually and everything. But you're gonna be missing out on time, speed, you know, as well. And also, too, Waze helps you integrate things like, you know, looking out for accidents, police, whatever, maybe potholes. And so that's kind of like how I look at this, particularly around the workforce. I mean, Stanford and MIT did research, and they said that they've seen that generative AI tools have already added, for the people who are using them every single day, like 30 to 40% productivity, you know, to the day-to-day work aspect. And so, you know, man, it's one of those things. I don't even try to convince people to integrate it, to use it and everything. A good majority have easily, at this point, probably used it once, you know, if not consistently. I think the consistent numbers are a little bit lower. But I think also, too, once you've seen that, you can start to get an insight of how the economic impact will be, you know, on our society. And so I also see this as, I think it's gonna be a requirement in the job field, in the job market, to understand how to use these tools. I believe in five years you're going to have a new term called AI illiterate. Yeah, you're gonna have a new term called AI illiterate, where people are going to be looking at people who don't know how to use these tools as AI illiterate, and it won't be a good thing, you know, and stuff.
And so, just like right now, you know, you still got people who are computer illiterate, and you're gonna have people who are AI illiterate. And those people are going to kind of, like, fall behind and kind of, like, feel some of that. And so, I mean, the four categories that they spoke on in the article from McKinsey were customer operations, R&D, marketing, and software engineering. I mean, as software engineers, we're definitely seeing that already, like you mentioned. Customer operations, I mean, I think that's one of the big things, because of the ability to have. Okay, I'll give you one. With the AI phone agent we've been building, we figured out how to give it memory based off the previous calls that it already has with the AI phone agent.
Adrian Green: Yeah.
Marlon Avery: You know, so. Meaning, when you call your dentist office and the AI phone agent picks up, and the phone agent is like, hey, Susan. Hey, we were super excited to have you last Thursday. We see, like, right now, the appointment looked good, everything went well and everything. Susan, we'd love to have you again. Hey, what can we help you with? And so it's already referenced back to, like, your previous experience. Or it can also understand that your previous issue wasn't solved, you know, or nobody got back to you, so it can automatically, you know, scale that up as a priority for its particular team. And so I think in customer operations, the people who are going to win in that area are gonna be the people who can be the most creative.
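The per-caller memory described here (store a short summary after each call, then fold the caller's recent summaries into the agent's prompt when they phone again) could be sketched roughly like this. This is a minimal illustration, not the actual system; all names (CallMemory, remember, build_prompt) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CallMemory:
    # caller_id -> short summaries of that caller's previous calls
    history: dict[str, list[str]] = field(default_factory=dict)

    def remember(self, caller_id: str, summary: str) -> None:
        # Store a one-line summary after each call ends
        self.history.setdefault(caller_id, []).append(summary)

    def build_prompt(self, caller_id: str) -> str:
        # Inject the most recent call summaries into the agent's system prompt
        past = self.history.get(caller_id)
        if not past:
            return "You are a dental office phone agent. This is a new caller."
        recent = "; ".join(past[-3:])  # keep only the last few calls
        return (
            "You are a dental office phone agent. "
            f"Previous calls with this caller: {recent}. "
            "Greet them by name, reference their last visit, and flag any "
            "unresolved issue as a priority for the team."
        )

memory = CallMemory()
memory.remember("susan", "Appointment last Thursday; went well; no follow-up needed")
print(memory.build_prompt("susan"))
```

In a real deployment the summaries would come from an LLM summarizing each call transcript, and the store would be a database keyed by phone number rather than an in-memory dict.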
Adrian Green: Oh, for sure.
Marlon Avery: And I think it's gonna be the people most creative. And also you'll see, like, the most value and stuff with it, too, as well.
Adrian Green: It's so tough. It's so tough because you can be as creative as you want, man, but like in the AI space, as soon as you are, it's easily replicable. You know, it's, it's. It's tough.
Marlon Avery: Yeah, I think, I think it's on the, on the. I think there's always concerns. I think the concerns are valid, particularly around the workforce and the impact of that. You know, I think a lot of, A lot of times people are worried about it's going to take their job. And I would say yes. And like I do, I still believe it will replace some roles. I don't think it's going to replace a significant amount of roles, but I think the roles that are repetitive without having to think will be replaced, you know, pretty quickly, you know, with set tools and, and set solutions. But I also think that the ones who embrace it also will be empowered by it to elevate and to execute and to build, you know, new solutions and everything that the world, the world may need.
Adrian Green: Yeah, if you have, you know, 30 to, you know, 30 to 100 emails to answer a day, you know, it's, you know, that's you, you need AI, you need some help.
Marlon Avery: I mean, you got people, that's their full-time job. I mean, you got people who are, you know, responding to emails and stuff all day, you know.
Adrian Green: Yeah, but take the person that is, say, an engineering lead or whatnot, and he also has to answer that amount, or someone who is, like, head of procurement or something, and they have to do all of this correspondence as well as all of this, you know, coordination. Not having to write those emails, the generative side of that will, you know, restore their hairline. It for sure will. That'll restore your hairline better than a trip to Turkey. Sure.
Marlon Avery: Yeah.
Adrian Green: For sure.
Marlon Avery: Yeah. I mean, I think that, yeah, I'm a little surprised that I don't hear more about AI integration in the procurement lane. I thought that would probably be one of the first things that would get gobbled up. And I see some companies, agencies have, like, kind of banned it, if you will, which I also think is the wrong decision. I think you don't ban it, I think you put instruction guidelines around it. You know, they need to make sure you're getting the proper output and people are using it properly. But yeah, I think it's surprising to see that it's not being used as fast as I thought it was going to be in that area.
Adrian Green: It's coming, because we've seen some, you know, supply chain optimization AI tools. C3 was one that we covered, I think a couple weeks ago, that has one of the pieces of their suite of applications focused on the supply chain side of things. So you're right, it's right there, and it's a matter of plugging in the inputs, and it'll be able to, for sure, help you out and surface a lot of things that you missed.
Marlon Avery: Yeah.
Adrian Green: And get out ahead of things too.
Marlon Avery: Yeah. Yeah. Okay. Well, speaking of getting out ahead of things: Meta. Meta announced, they say, that their AI content contributed to less than 1% of election-related misinformation across its platforms during the global elections this year. They said their efforts included rejecting over half a million image requests for election deepfakes and dismantling covert influence operations. Meta emphasizes focusing on behavior patterns rather than content origins to effectively manage misinformation, and plans to enhance its policies based off their findings. Adrian, man, Meta is saying, hey, we're no longer to blame here. You can't blame social media, you can't blame us. You know, they said they've contributed less than 1% of election-related misinformation. Yeah. Thoughts?
Adrian Green: I mean, from the way that they measured it, their 1%, I'm sure, is accurate. Now, they seem to have approached this in kind of a way that this is what they caught. You know, these are the things that they caught that are out there. So it was pretty cool. I mean, they had a good effort here. It's not like this was just a single, really small-scope effort. They had, you know, real protocols to identify coordinated inauthentic behavior, CIB networks, which are basically influence campaigns with a bunch of, you know, different accounts that are just spreading misinformation. They said that they were able to identify a portion of them and ban some of them. And as part of their efforts, too, they said that they rejected 590,000 requests to generate images of President Trump, Vice President Harris, Governor Walz, Vice President-elect Vance, and President Biden. So I think it's a commendable effort to go ahead and do that. You know, not allowing Meta's AI image generator to create that kind of deepfake nonsense. But what I think is, you know, this is a moment in time right now. Say you're able to catch 1%. I'd be interested to see what that number is in four years or in eight years, because this is something that's only going to get better. And these coordinated inauthentic behavior networks that you're finding could be just, you know, the start of bigger things, more efficient influence campaigns and more accurate deepfakes. The one thing is, AI-generated content is very identifiable for the most part right now, just by the human eye. When we see it, we can generally tell it's AI. You know, there's some that I'm sure will fool you, I'm sure.
But when we come across them, you know, meaning the tools that are available to the public and that people are using, what that's generated is something that's, you know, visually, you can kind of tell that something is off, that kind of, like, uncanny valley. It's going to shorten. It's going to shorten, and we're gonna get through that to, you know, realism at some point, I would suspect. Real realism, where you're not going to be able to really tell if this is fake or if it's real or not. So, in short, I say that this 1%, I'm glad that they're ahead of it. They have, like, you know, a strike team or whatever they put together to identify this kind of behavior around the election. I would say more staff for that team is going to be needed, for sure. And y'all gotta pardon me, man. I decided to play sports outside on a 45-degree day, and now I have a runny nose. Congratulations. Congratulations.
Marlon Avery: There you go. Here's my question. Here's my question. Do you get an applause after you were originally part of the issue?
Adrian Green: Ah. I mean, the fact that. I mean, it was egregious what Facebook has let transpire. So. That's a good point. I mean, I'm with you. And that's a good point. That's a good point.
Marlon Avery: I mean, it's less than. It's less than 1% now. What was it on the last election?
Adrian Green: That's a good question. I don't think they had that strike team then. I don't think that strike team informed.
Marlon Avery: I think they know that number, and they would do nothing. Nothing can get that number out of them. You know, I think they know what their number is, but they refuse. You know, we only want the positive light and stuff here.
Adrian Green: For sure.
Marlon Avery: Yeah. I think this is. I mean, this is HR at its best for Meta. You know, this is PR. Yeah, PR. Yeah. So thank you. This is PR at its best. This is a part of the rebrand. This is a part of: we've turned. We have new signs up here. You know, Facebook is no longer. Facebook is not Meta. You know, these are two different companies, you know, and stuff here. You know, even though some of the same people. No, no, no, no, no. We're a completely different company.
Adrian Green: Don't look at that. Don't look at that.
Marlon Avery: Yeah. That's in the past. Like, forget that and everything. And so, yeah, I think it's the right thing to do. My question becomes: even if they did the right thing, was it too late? Meaning, there were people, individuals and families who got impacted, you know, by seeing misinformation, seeing things on social media and reading these things on these platforms. And also, too, with that, they were a part of silencing. They were part of pushing, you know, set agendas, you know, as well. And so, you know, that's why I proposed the question: do you get an applause when you were originally part of the issue? And so, I don't know. Do you celebrate the bully because he's turned the other cheek now and no longer wants to beat up another hundred people, and now he wants to help people and everything? Like, how do you balance that, especially for the people who were impacted by the bully? So, like, what's the balance and stuff there?
Adrian Green: Yep. Vis-à-vis Stanley Tookie Williams, one of the founding members of the Crips. Went to prison, turned his life around, wrote a bunch of children's books. Snoop Dogg was on the campaign to get him pardoned. I think he had a death sentence, but that didn't go through. You know, because sometimes when you do so much damage, there's no path of redemption for you when it comes to your own credibility, unless you change your name or, you know. I'm not gonna say that, but yeah, anyway. So I don't know, man. I mean, I think that around the election, Facebook's issue, or what happened last time, is that people were able to buy ads, and through the Facebook ad network they were able to set up misinformation campaigns. So I want to know, really.
Marlon Avery: And it wasn't just ads either. I mean, sorry, it wasn't just the political side of things. We're talking about misinformation around COVID as well, where that directly impacted a lot of people too. And so, again, you were part of the problem, and now, yeah, you're doing what's right now, but what is the balance of doing what's right when there was so much damage done in your previous life, your previous tenure?
Adrian Green: Yeah. And I'm looking at this too, and what I'm seeing in the statement from Meta isn't enough regarding any changes that were made to their ad network around this. What they're laying out, the scope in which they're measuring this, is the scope of what any social network is going through right now. So this applies to all of them, because we're just talking about fake information, AI-generated images, and misinformation. Like, I mean, we talked about.
Marlon Avery: The new kid on the block last week, Bluesky.
Adrian Green: Yeah. If we are to say that they received only like 1% of that, then I'm to assume that it's extremely low across other platforms as well. Because they were talking about, let me look at the release here, let me talk about numbers here. We rejected, you know, 590,000 requests to generate images of the presidents, the people involved in the election. But that's not the only thing, because with AI it's tough. It's all the issues around whoever the president is going to be, that's the AI misinformation that we need to be aware of. So if they're spreading misinformation about some bill that's going to be passed that's going to have some catastrophic effect or something like that, or misrepresenting some information that's out there with AI-generated or augmented imagery to basically distort a truth that's out there. Yeah, I find it sketchy. I think it's a commendable statement. I think that it's fine, you know, but maybe not. Maybe commendable is a stretch. You're right. Because given its history, I think we probably needed something a little bit more extensive than what this statement goes into. For sure.
Marlon Avery: I think, you know, we've heard from certain individuals: you don't get to celebrate for doing what you're supposed to do anyway.
Adrian Green: I mean, I don't think that. That's painting with a broad brush. That's a little harsh, you know, because.
Marlon Avery: The opposite was what, though? What they were not doing, which had a significant impact, with misinformation being, you know, sown in a lot of ways. I mean, the opposite was even harsher.
Adrian Green: Yeah, the opposite is harsher. But if a company does something wrong, and then they say, okay, we're not going to do that anymore, we're going to do things this way, it's like, okay, it's up to the consumer then to choose whether or not they're going to trust that company, or put their money there, basically.
Marlon Avery: I think it matters what the impact of the wrong decision was. I think it matters what the impact was of their own decision, or what you ignored or didn't do or added to, whatever it may be. And I think the impact of that. Yeah, yeah, go ahead.
Adrian Green: Yeah. What should have happened then? Are you more on the side that Meta should have just not said anything about it?
Marlon Avery: No, it's not that they shouldn't say anything about it. It's the understanding that you are a social platform that allows thoughts, communication, shared opinions to live and go in either direction. It is very challenging as a company to then decide what you deem appropriate and what's not. Because when you start to allocate that power, it becomes less about the Constitution, more about feelings. It's less about freedom of speech, more about, oh no, no, I don't think this should be on there. And when you're an employee of said company, a lot of times you have access or the ability to remove, persuade, or whatever it may be. And that's just not your role. What a platform like Facebook was built for was to allow the world to connect, and the world has connected beautifully within this platform. But then they started to decide what they wanted the connectivity of communication to be, and that's the problem. And there's two sides to it. I mean.
Adrian Green: Well, I mean, I gotta say, to that point, it's got to be moderated to some degree. All the platforms have got to be moderated, and there has to be control over what is posted there, just in general.
Marlon Avery: I mean, I get it. I think it's hard to do it with speech, though, you know, because there's nothing illegal about, you know, cussing out a policeman. It's frowned upon and everything, but there's nothing illegal about it. But there's something definitely illegal about child pornography, even if you're just talking about it. There are issues and stuff there. And so I think when you align with the law in some of these areas, then you can have a more delicate balance on how you start to monitor these things. And in the last tenure, particularly the election, there was no balance. The balance was all based off emotion. And now you're making a human connectivity platform, not a software connectivity platform, and now you're going to run into issues, because now you're going to have to decide what human emotion you're going to agree with, on what side of the fence.
Adrian Green: No, I mean, it's a little bit more specific than that, because it's about what lies I allow to be spread on my platform. You know, so it's not even.
Marlon Avery: Even with that you're running into issues. Okay, watch this. Let's take COVID, for example, and Dr. Fauci and stuff. When it was coming in, one of the things he was stating was basically how to protect yourself. He went from wear masks, to don't wear masks, to wear masks, to don't wear a mask. Now, okay, I get on the platform and I'm saying, guys, you gotta wear a mask. And Dr. Fauci just said, no, you don't need to wear a mask. Who's right? Where's the fact in it?
Adrian Green: Yeah, there's no fact. That's still an opinionated situation. Well, in that situation, I can't say that it was opinionated. I think that when you have a doctor like Dr. Fauci, or someone who is saying that they are the authority on.
Marlon Avery: By the way. By the way, he's on a run.
Adrian Green: Yeah. So I don't even know. You gotta update me on that. I don't even know about that. But okay, say we had Neil deGrasse Tyson, who was also Mr. Pluto, Mr. Pluto himself, a very vocal proponent of all the COVID protocols. Take him, for instance. As more information comes out about this, things change. There was a big scare, from my understanding, having lived through it, about just how transmissible this is and how deadly it is. There were a lot of unknowns about that. So as that information is coming out, it's coming out from the top. The protocols are being set as far as what the standards are and what you should be doing. Those protocols are then given as mandates down to, you know, establishments, organizations, all this other stuff. So it's a funky time. I mean, that's a tough comparison, because we're.
Marlon Avery: That's the reality of it. It is an equal amount of difficult conversation as it is around the election.
Adrian Green: Not at all. Because around the election, if you have a fake news campaign that's going to say, you know, this party wants to put babies on spikes in Minnesota, you could do that, you can have that, but that is something that's easily fact-checkable. You can just look and see whether or not that's actually going on. When it comes to something like a virus that we don't even know what it is, where we're still trying to figure out whether it escaped from a research lab or came from a wet market or something, there are going to be changes in the information that comes out, and it's going to be. It's gonna.
Marlon Avery: You're proving my point, because even in the midst of changes, in the midst of research, in the midst of information being shared, you had platforms that were flagging and removing certain things as everybody's trying to figure this out. My problem in these areas is, okay, so like.
Adrian Green: The COVID misinformation stuff. So they were taking down posts that mentioned COVID and things, right?
Marlon Avery: Yeah. You could have put a post up there like, I just cured my 5-year-old by putting raw onions on her feet, and they removed it. Watch this: that's a fact for that mom. The platform saw it as misinformation because the FDA or whoever said, oh no, no, that actually doesn't work. And we're never getting to this, because this turns into the whole thing. My larger issue is when you're not actually fact-checking, you're simply checking based off who is paying y'all.
Adrian Green: Well, has there been information about, like, a payola kind of thing?
Marlon Avery: Yeah. I've had the privilege, Adrian, as you know, of building some of these AI solutions for doctors, neurologists, everything. And I've also had the privilege to freely talk about things like COVID with them, because they lived through it, they worked through things like that. Bro, a lot of these docs are pissed. They're still pissed, because they feel like even they were silenced in this area, and they were just told to shut up and go this direction and do what we said we were going to do. And they're not following the science; they simply kind of follow the money in this area. And so, yeah, that's my bigger issue, which, we should probably change the subject. Yeah, but it's things like that, and it's like, bro, you can't say that this is misinformation and that is not, simply because the person it's coming from has written you a check. That's my larger issue, particularly when you're a platform and you're impacting millions, billions of people. While that onion on a five-year-old's foot actually did help somebody, the post said, no, you simply need to get vaccinated and that's going to help you more. And the person who got vaccinated had a larger reaction within their system, when they could have had a separate way, a natural way, to do this. But you want to step in as a fact-check source because somebody wrote you a check. Kiss my ass. Next subject.
Adrian Green: Well, this is what I say too. And here's the one thing I gotta say, just to change the subject off of this. For anyone listening to this: know your roots, know where you come from. You know, humans have been around for.
Marlon Avery: I'll take a step further.
Adrian Green: A long time, and there's a lot of old wisdom that you can really abide by and live a healthy life. So, you know.
Marlon Avery: You're right. You're right. Know your roots, know where you come from. Also know your country. We live in a country that is full of greed.
Adrian Green: Okay.
Marlon Avery: We live in a country where people are money hungry, institutions are money hungry, organizations are money hungry. And a lot of these individuals, organizations, and institutions would rather see their bank accounts getting filled than actually make sure they do right by somebody, or right by the larger majority, right by the population, right by segments. That in itself I really have a major issue with.
Adrian Green: For sure. For sure.
Marlon Avery: Now I want to fight. Nah, I'm just playing. All right, so here we go. A Canadian startup, Tenstorrent, raised $693 million to develop AI training servers and processors to compete with Nvidia, with plans to release new AI chips. The company aims to disrupt the AI chip market while leveraging this funding to expand its AI engineering team. The move underscores its anticipated competition in the AI hardware sector. They're looking at this as a legitimate contender, or challenger if you will, to Nvidia. Adrian, I'll start with this one. I'm gonna keep it very, very short.
Adrian Green: Cool, cool.
Marlon Avery: I don't see it. I don't see it. So I got to learn. One of my good friends has a couple thousand GPUs; he was doing cryptocurrency mining. And so I got to learn more about GPUs, being involved around that with him, and got to understand the difference between GPUs and CPUs and the use cases. But also I got to understand the difficulty of building a GPU, the difficulty of building GPUs for a particular use case. And then when you want to do this for AI compute training servers, that's another level, other levels of difficulty. And, let me say this very carefully: $693 million is a lot, a lot, a lot of money.
Adrian Green: It's, it's, it's.
Marlon Avery: What's this? It's a lot of money, but I don't think it's a lot of money for hardware, particularly for building GPUs. And so maybe they've figured some things out. Obviously you've got to know some things already to raise almost $700 million. But as a challenger to Nvidia, who is not only off to the races, they are leading the market. Not only leading the market, dominating the market. Not only dominating the market, they're simply declaring a new market every 90 days. They're just out here playing with house money and stuff. And so as a challenger, I don't see it, as somebody who can get into the space. Sure, I think a lot of times we've been taught that it's either number one or nothing, but I mean, Lyft is still a multi-billion-dollar company as the number two. And so maybe they're going to try to position themselves as the Lyft in this area, and I think that type of opportunity is wide open. You've still got Intel and all those other companies trying to figure this out too. And so, sure, you know, for sure. But yeah, several hundred million dollars is a lot of money. And then didn't OpenAI, wasn't that their intent when they raised their money, and then they switched their plans? They deviated from it.
Adrian Green: Yeah, into the chip area. Right. Into some R&D around chips.
Marlon Avery: Yeah. And then they deviated from it. They say yeah, we're gonna, we're gonna leave it alone.
Adrian Green: Yeah. So they built that first chip with Broadcom. That was back in October, they announced that. But they scaled back the foundry side, their actual ambition to build the chips themselves. Yeah. I found this one interesting. I had never heard of this guy who is the CTO of this company. He didn't found it, but Jim Keller. So this is where, on AI with Friends, these stories just start to bleed into one another, as far as the narratives that you can really follow by just following the podcast. We talked before about the state of Intel. Intel, if you didn't know, they're not doing too great. Their CEO just stepped down. What's his name? Sorry, Pat Gelsinger, stepped down. A lot of people are thinking heads had to roll because the company's not doing great. And the company's not doing great, a lot of people say, because they are a chip designer and a chip manufacturer at the same time, and they're spread too thin, potentially not being able to do well enough in both categories to be sustainable. Now, Jim Keller comes from Intel. He joined Intel in April 2018 as a senior vice president, and he resigned in 2020 because of his feelings around their decision to outsource more of its production. So when I read that, I thought to myself, we have a Gilfoyle here. We have an engineer. Jim Keller, and I'm gonna keep getting his name wrong, is a microprocessor engineer, and he's an OG at this. He's worked at AMD, Apple, Tesla, and he was the lead architect of the AMD K8 microarchitecture, which includes the original Athlon 64. So this guy knows his stuff.
So this is a guy who, according to what I'm reading here, seems to be passionate about building and doing it himself. So he's at Intel, they want to outsource it; he's like, we can put this together. And I gotta say, man, I've worked with people like that before, and I love them. I love them. So I'm biased in this opinion here, because I've been with the person in the build room who can.
Marlon Avery: Who can also build. Yeah.
Adrian Green: Oh, my God. The guy putting together the Cisco switch, cigarette hanging out of his mouth: look, we just do this. Like, seriously. You know, I've worked with some really talented people, and there is. It's also.
Marlon Avery: It's the same reason why Elon is able to attract so much talent. You know, that's what I was. Yeah. He said, in an interview with Marques Brownlee, he said he still spends 50% of his time engineering. And that blew my mind. Yeah.
Adrian Green: I don't believe that. But it's possible. He's got too much going on. If you're spending 50% of your time engineering and you've got, what, six companies? Dude, you need to stop engineering. You're spread too thin.
Marlon Avery: But again, going back to your point, I can see that, because that's also what still attracts individuals like yourself.
Adrian Green: Exactly, exactly. And I'll say this, I trust Jim Keller in this because he actually has that proven track record on the engineering side of it. So you're thinking about a company with someone at the top who can get just as down in the mud as anyone else. Which, you know, I haven't checked out Elon Musk's GitHub repo recently, so maybe it's popping, I don't know. But this dude has a pedigree behind him. And on top of that, going back to the story of the CTO here, he also founded Atomic Semi in 2023, a company that makes foundry tools for small-scale fabrication equipment. So that sounds familiar. It's chip making. So this is a guy who is straddling two companies now, one company that could be vital to the success of the other. So it is kind of sounding familiar, as far as the Elon Musk story. So he's saying that at Tenstorrent they're going to build these chips for AI specifically, and they've got a client list of people that, I guess, have pre-orders and all that. I think that if you were to have confidence in someone, I would say that as far as the know-how of how to do it, this seems like a good place to put your chips, with this company.
Marlon Avery: But we're looking at $700 million worth of chips.
Adrian Green: Yeah, I can see that. I can see putting your money in, because it's spread across different investors. It was led by Samsung Securities and AFW Partners; other investors included Hyundai and Jeff Bezos's Bezos Expeditions, among others. So he wasn't alone in the $693 million raise. It was parties each putting in their own portion of that. So on paper, it's behind a guy with a lot of technical acumen and a solid history. However, looking at that history, you know how you can let your passion shoot you in the foot? You can be so passionate that you take on too much, you can be too ambitious, or you get mad and leave. How volatile are you personally? Because if you're leaving these companies over disputes about the direction things are going to go, what trust do I have as an investor that you're not going to have a fight up in the C-suite, you know what I mean? Because you want to go one way and the board wants to go another way. It's tough. That's one thing that I can see about that. But in and of itself, I do believe that the more money we put into the manufacturing of chips, especially domestically, the more it's going to pay off. I do. And since Nvidia has so much of the market share, yes, let's start innovating, let's start thinking of better and more efficient ways to cut back on that monopoly. Not monopoly, but that crazy market share on the chips. Do you increase competition? Yeah.
Marlon Avery: And speaking of competition, do you think there is that much talent available that Apple, Nvidia, and OpenAI haven't already gobbled up on the hardware side of things?
Adrian Green: So you mean like new talent on the, on the hardware side?
Marlon Avery: No, I don't think new talent. I think if you're going to be building and challenging Nvidia, you definitely have to have seasoned people. Now, yes, sure, you can convince people to come over and jump ship, things like that. But is there really that much talent out there, specifically in building processors, chips, GPUs, to be a legitimate challenger to Nvidia?
Adrian Green: Absolutely. Yeah, I think so. I think there are a lot of people in hardware who do that, and you probably don't hear about them. They're not the people getting the headlines a lot of the time; it's the developers, the programmers, the people who work more in software than the machine-level programmers and engineers, the ones who are writing compilers and all that. Yeah, they're still out there, man. I wouldn't say that there's a lack of talent. I would say that if a company came along and gave those people a chance to shine, a lot of people would jump on it.
Marlon Avery: Okay, cool. All right, so here we go. Anthropic has now released MCP. Anthropic is open-sourcing the Model Context Protocol, which promises to bridge the gap between AI assistants and data sources and enable seamless integration. Developers can now create MCP connectors for tools like Google Drive, Slack, and GitHub. Early adopters like Block and Apollo report significant improvements in functionality. This unified protocol standardizes AI data interactions, paving the way for a scalable, connected system. And so Anthropic has made some additional moves here around MCP, man. Hey Adrian, what's your thoughts?
Adrian Green: Every week we have something to say about Anthropic. They are really doing a lot. Claude 3.5 Sonnet in itself is one of the best-reviewed LLMs out there when it comes to coding. So I am a fan of what Anthropic does. We talked about all the moves they've been making, like their upcoming partnership with Palantir; I think that was them as well. So what this Model Context Protocol is, it's basically a secure way to communicate between your data and an LLM. And it's cool because you can install it on the machine where the data lives and tie it into other existing platforms out there like Slack, Google Drive, Postgres, and even Puppeteer, which is basically website scraping. So add that in there with communicating with your databases, communicating on Slack, looking at your commits in GitHub, and analyzing your shared files there. We've talked before about tooling being the difference, and when we talk about tooling, we talk about Anthropic. They've been in that same conversation this entire time, and they continue to roll out these new products at the pace they've been doing it. Oh, by the way, going back to Tenstorrent, they say they're going to release a new chip every two years, which, they need to chill with that. But good luck. Yeah.
Marlon Avery: So you can announce a new chip every two years; releasing one is something different.
Adrian Green: That's something completely different, dog. So you have a company here that's just really bulking up the tooling. They're really making an effort to do that. And they're putting this through their desktop app. You can install some pre-built servers through Claude Desktop, and they have plenty of guides to get started. Locally, I use this product called Anything LLM. It's a desktop app; you can install it and then use any model that you want to, local models, cloud models. Anyway, it's a product similar to that, but I'm excited to try this because I think it may be more focused and better optimized than what Anything LLM is doing right now. That's an open source product. So, yeah. Tooling. That's all I have to say. Tooling, tooling, tooling.
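The connector pattern Adrian describes, an assistant discovering tools that sit next to the data and then calling them, can be sketched roughly like this. This is a toy illustration of the JSON-RPC-style request/response shape, not the official MCP SDK; the tool names and handler are made up for the example:

```python
import json

# Toy registry of "connectors" an MCP-style server might expose.
# Tool names and behavior here are invented for illustration.
TOOLS = {
    "search_drive": lambda query: [f"doc matching {query!r}"],
    "read_slack_channel": lambda channel: [f"latest messages in #{channel}"],
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC-style request to a registered tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = sorted(TOOLS)                       # advertise available tools
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(**req["params"]["arguments"])  # run the tool next to the data
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

# An assistant first discovers the tools, then invokes one with arguments.
listing = handle_request(json.dumps({"id": 1, "method": "tools/list"}))
call = handle_request(json.dumps({
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_drive", "arguments": {"query": "Q3 roadmap"}},
}))
```

The point of the design, as the hosts note, is that the server runs where the data lives (Drive, Slack, GitHub), so the LLM only ever sees a standardized request/response channel rather than raw credentials to each system.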
Marlon Avery: Yeah, yeah. So Anthropic is doing some significant work here. I would say this: companies were already basically building protocols on top of API calls to things like OpenAI and Anthropic. One company I know for sure was doing this was Intuit; I believe Pinterest was doing this as well. They were building protocols on top of API calls to talk to certain LLMs. And so Anthropic realized that a lot of companies, enterprises, will require this if they want to allow their employees to access a lot of these tools. And so yeah, I think it's necessary. I think it's a good move. Like you said, as we're getting to this tooling-enabled world, you're just going to need a lot of these things, the ability to connect your Google Drive, from your CSVs to your PDFs to your documents and things like that. I can also see things like Notion and Jira coming right behind this as well. So yeah, man, I think it's the right move. This also gives companies and organizations the necessary superpower to find information quicker, produce information quicker, connect it, and essentially build solutions on top of this. So yeah, man, I think it's exciting, everything Anthropic is doing, and as always, man, they're definitely heading in the right direction. We're definitely going to see who's going to become the Uber and the Lyft in the LLM world.
I think, yeah.
Adrian Green: What's going to be the standard protocol, too? At some point, which communication protocol between the LLM and your data is going to be the standard? That's going to be interesting, you know.
Marlon Avery: Yeah, okay, cool. Lastly here, man, there have been some concerns around sustainability. It says the exponential energy needs for AI model training, such as your ChatGPTs, are sparking sustainability concerns. While GPUs dominate AI training, alternatives like neuromorphic and optical computing show promise in reducing energy use. These innovations mimic the brain's efficiency or use light, but the challenges they face are scalability and precision. Adrian, man, do you think, I mean, the last couple weeks we talked about Microsoft, we talked about Google going to either partner with or acquire old nuclear plants, aiming to help with this energy consumption. Do we think that is enough? Do we think we're going to run out of ideas? Or do we think we're just going to figure out a way to reduce the energy use? What's your thoughts, stuff here?
Adrian Green: Yeah, well, I'm glad that we're looking at it. What's cool about it is that it's a machine. What they're trying to do with the neuromorphic design is build physics-based, self-learning machines that could replace current artificial neural networks and save energy. Basically, they're saying they've developed a machine with a self-learning technique, a physical machine. It's going to be able to carry out training, but then train itself, and it all happens within a physical hardware device, which is kind of wild. I think this research paper I'm looking at is from the Max Planck Institute, something German. With regular neural networks, training relies on the amount of data you have, and more data means more processing, which means more power. What this is saying is the machine is going to train itself within its own box, within its own brain. As for how this mimics human brain function, I think that's clever marketing, but maybe the approach will work. Having a machine that trains itself could use less power, so at the end of it you can have a trained model that's just as smart but used less energy. And they use a lot of comparisons to the human body, which is cool. We do all these calculations in our heads at any given moment, and we're so much more energy efficient than a computer; for a computer to do that, you'd need gigawatts of power.
They're saying it's not going to be human-level optimization, but it's something that could potentially take away that requirement for a massive amount of energy and time. I mean, these models are trained for months on end on the data they have. If you have a machine that can train itself, I don't know if that will mean a shorter training time, but the way they're laying it out here, it looks a hell of a lot more energy efficient. And I think Facebook is looking at nuclear reactors now too, I just saw an article about that recently, because everyone is saying, well, there are carbon emissions going on here, we need a cleaner way to get this power. So I'm glad there are companies looking at this area right now. It looks like it's beyond just concepts of a plan, you know what I mean? So we'll see how it shakes out.
Marlon Avery: Yeah, yeah. I think "just beyond concepts of a plan" is accurate. It's one of those things we see as an opportunity, and we can hope we get there. And once we get there, will that be enough? Will we have to build something else, or scale what we have, and how long would that take? As more individuals, institutions, and organizations are educated about the capabilities of gen AI integrations, I think we're definitely going to need more. I don't think we fully understand how much we're going to need, because this is still the bottom of the first inning. I think we're going to need so much. And I think there are steps that help us get there, and there are inventions that would help us get there, and some of those inventions haven't been brought to light yet, if not invented. So yeah, man, it's like you said.
Adrian Green: Yeah, let's just keep pushing for the education, let's keep pushing for the math programs and all that, man, because we're going to need people who can do this heavy math. Either you find a new device, or you look at the hardware and find better ways to optimize it, but it's going to come down to the algorithms too, and how best those work. I don't know that whole algorithm area well; it's just one area. I'm glad that I have a job and I'm able to pay my bills without a computer science degree, you know? So, yeah, that's tough. But you've got to have that high math intelligence, that high math ability.
Marlon Avery: Okay. Also, man, if you guys are wondering, we didn't give our friend a shout-out except at the beginning of the podcast here. Our good friend and co-host Sekou is at re:Invent, living his best life. We're happy and jealous at the same time.
Adrian Green: And jealous.
Marlon Avery: Yeah. Envious, by the way.
Adrian Green: Yeah.
Marlon Avery: Because I know he's going to come back with this big smile on his face that, oh man, he saw this, you know, this is coming and just, just want to brag all day. We want to hear that you know.
Adrian Green: And the number of articles we're having to hold back on because we want the real scoop from the horse's mouth, you know what I mean? I really look forward to him catching us up on AWS.
Marlon Avery: You know, that could be the whole episode next week, covering the things that were released and talked about there. Yeah, that could be the whole episode next week, and let's see if AWS will sponsor it as well. Yeah, man. Another great podcast here, man. We talked about the economic revolution around gen AI. We talked about Meta saying, hey, we're no longer to blame for misinformation around the elections, saying AI contributed to less than 1% of it. Also, we talked about AI's growing energy demand, along with historians raising nearly 700 million. And then also Anthropic announcing the new Model Context Protocol. And so, man, Johnson, see you there, man. Hey, John. And so, yeah, man. Adrian, man, closing thoughts?
Adrian Green: It's been great. It's been another great podcast. Looking forward to the next one. You can find me on my varied, not-uniform social media account names. The most reliable one is infamous Adrian on my X account. Marlon?
Marlon Avery: I am Marlon Avery on all platforms. See how that rolls off the tongue, Adrian? You know, because I am like you.
Adrian Green: I gotta get like you, man.
Marlon Avery: And then also, man, we are AI With Friends podcast on all platforms as well.
Adrian Green: Yes indeed.
Marlon Avery: Until next time.
Adrian Green: Until next time. See you.