
AI With Friends
Welcome to AI With Friends, your weekly launchpad into the world of Artificial Intelligence. Hosted by Marlon Avery, a pioneer in GenAI innovation, alongside Adrian Green, VP of Engineering at LiveNation, and Sekou Doumbouya, Senior Staff Cloud Systems Engineer, this show is your go-to source for all things AI.
Our hosts bring diverse expertise—from AI strategy and tech innovation to industry leadership. Every week, they break down the latest AI trends, interview top experts, and simplify complex concepts for AI enthusiasts, entrepreneurs, and tech professionals alike.
Marlon, Adrian, and Sekou combine their unique perspectives, whether it’s Marlon’s collaborations with tech giants, Adrian’s leadership in global entertainment engineering, or Sekou’s cloud systems expertise. Together, they make AI insights accessible, actionable, and exciting.
Tune in live on LinkedIn every Wednesday at 10:00 AM ET, or catch us on all major podcast platforms.
Here’s what you’ll get:
- Cutting-edge insights from AI leaders
- Real-world applications of AI technology
- A vibrant community of forward-thinkers
If you're ready to stay ahead of AI trends or spark your next big idea, join us each week for an hour of engaging, thought-provoking content.
Subscribe now and become part of the future of AI with AI With Friends!
AI With Friends
EP10: Generative AI Arms Race: Hackers vs. Defenders + Adobe's Big Bet on "Commercially Safe" AI
In this engaging episode, hosts Adrian and Marlon explore the dynamic world of AI, focusing on its impact on various industries. They discuss Adobe's latest AI video tools, highlighting their potential to revolutionize video editing and creation. The conversation shifts to OpenAI's new model, Orion, and the challenges it faces in maintaining its competitive edge. The hosts also delve into the innovative strategies of companies like Writer, which recently secured significant funding to enhance its AI capabilities for enterprise solutions. A critical discussion on the cybersecurity landscape reveals the growing threats posed by generative AI and the urgent need for robust defense strategies. Throughout the episode, Adrian and Marlon emphasize the importance of adapting to these technological advancements, offering insights into how businesses and individuals can harness AI's potential while navigating its challenges.
----------------------------------------------------
Welcome to AI With Friends, your weekly launchpad into the world of Artificial Intelligence. Hosted by Marlon Avery, a pioneer in GenAI innovation, alongside Adrian Green, VP of Engineering at LiveNation, and Sekou Doumbouya, Senior Staff Cloud Systems Engineer at Pinterest, this show is your go-to source for all things AI.
Our hosts bring diverse expertise—from AI strategy and tech innovation to industry leadership. Every week, they break down the latest AI trends, interview top experts, and simplify complex concepts for AI enthusiasts, entrepreneurs, and tech professionals alike.
Marlon, Adrian, and Sekou combine their unique perspectives, whether it’s Marlon’s collaborations with tech giants, Adrian’s leadership in global entertainment engineering, or Sekou’s cloud systems expertise. Together, they make AI insights accessible, actionable, and exciting.
Tune in live on Twitch & YouTube every Wednesday at 9:00 PM ET, or catch us on all major podcast platforms.
Here’s what you’ll get:
- Cutting-edge insights from AI leaders
- Real-world applications of AI technology
- A vibrant community of forward-thinkers
If you're ready to stay ahead of AI trends or spark your next big idea, join us each week for an hour of engaging, thought-provoking content.
Subscribe now and become part of the future of AI with AI With Friends!
Follow the Hosts:
- Marlon Avery: @IamMarlonAvery
- Adrian Green: @InfamousAdrian
- Sekou Doumbouya: @SekouTheWise1
Marlon Avery: Hey, hey, hey, hey. What's going on, Adrian?
Adrian Green: What's going on, Marlon? It's just another day in the hood. How you doing?
Marlon Avery: I'm doing good, man. Another day in these AI streets.
Adrian Green: Yeah.
Marlon Avery: Hey, man, doing great, man. How's your, how's your week and stuff been?
Adrian Green: Not bad, not bad. Got a lot of good work done, you know, trying to stay outside the news cycle, you know, keep my head down and focused on work and family and. It's been good. It's been good. You?
Marlon Avery: Yeah, man, I'm good as well, man. I've been building, doing a lot of traveling, workshops. Yeah. Teaching and stuff, bro. I actually got a workshop tomorrow. I'm doing an internal Microsoft AI tools workshop, so teaching some employees there how to, you know, use these tools and everything at scale for their day-to-day operations. So yeah, man, I've been doing that and then, you know, building and stuff as well. And there's just so much, bro. It's like overwhelmingly good sometimes, where you're just like, oh man, I gotta go do that thing, I gotta go check out that thing, you know, like, wait, they're doing what now? It's just.
Adrian Green: Can't keep up. It's impossible.
Marlon Avery: Yeah, man. You know, I think somebody, somebody asked me, it's like, hey man, how do you keep up all yourself? And I was like, you don't. You just, you know, you just put your best foot forward, you know.
Adrian Green: Yeah.
Marlon Avery: And you got to keep going. So.
Adrian Green: Yeah, someone asked me the same question. I said, start a podcast.
Marlon Avery: That's funny. That's not even the reason why we did this. You know, we were kind of, we were kind of already having these conversations, you know. But yeah, that's a, that's a, that's a good one there for sure.
Adrian Green: Yeah.
Marlon Avery: Yeah, man. Yeah, also too, man, our co-host there, Sekou. Sekou is not with us this week, man. He's traveling and stuff himself, and he has some work stuff to attend to. So yeah, man, he'll be back with us next week and, you know, we'll keep it going, you know, as well. We will.
Adrian Green: In the absence of Sekou the Wise, we will try to be as wise as we can, you know.
Marlon Avery: You know, it's hard, man. Sekou got what, 10, 20 years on us of wisdom, like technical wisdom, you know. So it's just like, you know, even with ChatGPT, bro, it's hard to keep up, or hard to catch up if you will, you know, with those things. And so Sekou, you know, man, we love you, appreciate you. Let's get into it, man. Guys, welcome back, another episode, episode 10. You know it's crazy, bro, because like you said, we've been doing this, we've been having these weekly discussions for like a year now, you know, between us three, and then we finally got to the place where it was like, yo, this should probably be a podcast. But it doesn't feel like it's been 10 because we've been doing this for quite some time. But it's also like, oh man, we did 10, you know.
Adrian Green: Yeah.
Marlon Avery: Type of thing. And so yeah, it's a weird nuance stuff there.
Adrian Green: Yep, yep. I, it feels like 10 for me because I was just at the coffee shop. I started working from home because I, I was working out of the coffee shop. I'm a, you know, remote worker but I, I was working out of the coffee shop for a long time and you know when you're doing that you get a group of friends in there and I hadn't been there for a while because you know, your boy needs two screens and I gotta be, you know, I gotta be. I gotta have my two screens. So I went in there the other day and yeah. Catching up on recent events and we got into AI and then I got into nuclear reactors and all kinds of stuff, you know. So that's how I think it's been 10 episodes. Because definitely my knowledge of the scope of everything that's happening is really skyrocketed.
Marlon Avery: You know, I'm a two screens guy too as well. So I got, I got. And I'm also two wide screens by the way.
Adrian Green: Oh yeah.
Marlon Avery: So I got, I got one over here and I got one over here and stuff. And so I got my camera here in the middle and so yeah, but two, two hours going by myself and then trying to like you say like going to work like at a Starbucks and trying. And I think focus not necessarily is the thing. It's the productivity between two where you can kind of go from screen to screen or have a code base here. Some slides up here like you know, screen over here and you know, I think is that. And versus like being on a 12 inch MacBook, you know. It is. And, and I even like even, even with this thing, man, it's just like it, it's like a side chick now. Like.
Adrian Green: Yeah.
Marlon Avery: It gets pushed aside, you know, just.
Adrian Green: Ain't text her in a while. You ain't texting in a minute now, huh? Yeah, you gotta get text her after the holidays, you know.
Marlon Avery: Right, right, right. February 15th is your day.
Adrian Green: Yes.
Marlon Avery: That's funny. All right, man. Well, let's dive into it, man. We'll dive into this one up here. Adobe is doing some interesting things, it says. At its annual MAX conference, Adobe introduced a suite of generative AI video tools powered by its Firefly AI model. These new features are designed to simplify video editing and creation while addressing concerns around responsible AI development. One of the notable additions is Generative Extend, which allows users to seamlessly extend clips, smooth transitions, and adjust shot durations for optimal timing within Adobe Premiere Pro. Text to Video and Image to Video provide capabilities to generate video from text prompts or to animate static images or illustrations. Significantly, Adobe emphasizes Firefly's commercially safe nature, which is very interesting as well, meaning it is trained exclusively on licensed and public domain content. This approach distinguishes Firefly from competitors caught in controversies over using copyrighted material without permission. Adrian, kick us off, man.
Adrian Green: Would you look at this positioning? Would you look at this positioning Adobe finds itself in. Okay, so Adobe is, you know, I think they came under fire early on in the whole AI conversation for possibly training its models using content that they didn't own. I feel like they've since rectified that. So now everything in its training is publicly available. So at the end of this, I'm sure the AI, or I'm sorry, not AI, Adobe, would like to have the best model that you can use without fear of someone coming after you for copyright infringement or something else. So my question to Marlon after hearing this was, you know, I need a bit of an update on Adobe, because I seem to recall there was some time in the recent past where Adobe was under fire for really putting everything behind a paywall, so making access to its applications, which are like industry standard, you know, more expensive, more exclusive. Will something like this pull or, you know, increase Adobe's share? You know, as there's other competitors out there doing AI stuff like Canva, you know, there's new text-to-image generation apps popping up every day. Yeah. So what's going on with their pricing structure? Because you said you had something for me about that.
Marlon Avery: Yeah, I've been paying attention to just some of the things Adobe is doing. I think one of the complaints for quite some time with Adobe is that even some of the ease-of-use things, some of the simple things to do within Adobe's world, were consistently behind a paywall. And I think they've listened to that feedback, and if you notice Adobe's new strategy now, and I'm not sure if they talk about this publicly, but I've been noticing it, is that Adobe has a team or several teams that are looking across the Internet for tools, tasks and things that they can offer for free, that you don't have to be behind a paywall to do. So, you know, for example, converting a JPEG image or a PNG, or turning an MP4 video into a MOV file, you know, some of these day-to-day things that individuals may be doing where they just need a quick tool. Adobe's scouring the Internet and they're looking for these little tasks and offering them for free under their umbrella. And I've been noticing this, they've been popping up with a lot more free tools, you know, as well. So I think they're trying to do like a rebrand around their pricing complaints, if you will. And so they're offering, you know, some of these free things, which also kind of increases the brand and the likeness of Adobe, you know, if you will, as well. Going back to this video model, it's those strategies around getting people access and getting people to use these tools. The big thing is that it's a very similar marketing strategy, what's it called, like the Little Big Horn or something like that, where basically when you want access to someone or you want access to something, you don't go to them directly. You go to all the things around it, and then eventually what happens is that it brings you back in, you know, to that main source. And I think they're doing this right now with some of their big tools, their big models. It's the same thing that Apple does, you know. The gateway drug to Apple is typically, you know, the iPhone, and once you get the iPhone it's like, oh crap, okay, now this smartwatch that I have doesn't work seamlessly with, you know, my Android, so okay, cool, now I gotta get the Apple Watch. All right, cool. Then I want to do some work, ah crap, now I get a MacBook, now I get an iPad, now I get Apple everything. And so, you know, it's the same strategy, they're just kind of doing this on the software side. So it's not surprising, you know, it's not surprising at all. And I think they're doing it in a really interesting, very slick and quiet way as well, because once you start to use the video model and you start getting results or anything, guess what, you're gonna go download Adobe Premiere Pro. It's the same thing: you're going to get the iPhone, eventually you're gonna get the Apple Watch.
It was going to lead to the Mac, you know, as well. And so I don't think it's surprising. I think they're doing, you know, a great job. And one of the things Adobe kind of stands itself on is making it commercially safe, you know, for businesses to use, because they've trained their image and video models basically off their own content. Adobe's had their own suite of, you know, pictures, videos, stuff, you know, for quite some time. And so they train their own models on their own content, everything, which means they have a high likelihood of, you know, not having somebody come back like, hey, you stole my image, or whatever it may be. Yeah. So yeah, I think it's an interesting move. I also think it's interesting to see, it was somebody else who announced their video model too. Was it Llama? Yeah, I think it was Llama, they announced their multimodal video model as well. And so I think it's interesting to see Adobe and Meta, you know, launch these models before OpenAI, you know, as well. Not saying it's gonna be the best one or it's like the right thing to do, but I just kind of find it interesting. Especially since OpenAI announced their video model. Was it last year or top of this year? Something like that. So yes. Yeah.
Adrian Green: And remember, back in, when was this, back in June of this year, Adobe changed its approach to the training, which, remember, for a minute it said that the work you created on the platform would be used to train the models.
Marlon Avery: Right.
Adrian Green: But, but they've since stepped back with that, you know.
Marlon Avery: Yeah. So I'm gonna be very, I'm gonna be very honest. I don't believe any of these companies, no, of course, when they say that, you know. Like, Facebook's been saying forever, we don't use your data for, you know, this type of stuff. Like, that's just, you know, once it's in there, bro, there's so many ways you can manipulate it, there's so many ways you can safeguard the way, like, oh, we didn't necessarily use it for this, but we did like some back-end channeling for this. Like, it's.
Adrian Green: Yeah, yeah, sure, short of a severe audit, you're not gonna be able to know.
Marlon Avery: Yeah, and when you say severe, it means severe, because you not only have to come in with, you know, court approvals, you have to come in with the right developers, engineers, you got to come in with, you know, cloud systems architecture engineers. Like, you can't just come in at one level, and there's just so much documentation for this. And it's just like, yeah, you gotta come in deep.
Adrian Green: You really do, you really do.
Marlon Avery: So yeah, you got to come in deep and stuff with that. But I mean, here we go. I think the video model stuff is going to be fun. I think these types of things can be very beneficial for small businesses and startups, because now they can create more professional, you know, more professional commercials and ads and, you know, things like that. One of my favorite AI-generated videos is the one OpenAI had where they had the two pirate ships battling in a coffee cup, and man, bro, it would be such an awesome ad for like a small business coffee shop, you know. It was just so fun. That was really, really nice to see, and it looked real and stuff as well. And so, yeah, I think these types of things, man, can definitely be, you know, quite the tool for the small business and startup.
Adrian Green: Communities and empowering the creators on those teams, really.
Marlon Avery: Yeah, yeah, everything. Yeah, yeah, I like that. Yeah. Because empowering the creators to. Even for like agencies or individuals that you hire and also someone's working stuff too as well, they can come back with some awesome stuff. So. Yeah, I definitely see that.
Adrian Green: Yeah, for sure. Because if it's the, you know, you can't have the executives. Well, you know, typically your typical executive isn't going to have that much of a creative, you know, bent as like who you hire as the creative person. And the creative person empowered with this is going to be way more effective potentially.
Marlon Avery: Yeah.
Adrian Green: Because it's to the limits of their imagination really. And that's what, that's what the whole conversation around AI is. It's like, it's not a magic box. It's really like you got to know the magic words as well.
Marlon Avery: Yeah. You know, and it's. I've said consistently, you know, a good part of my mission now in these workshops I do is to decrease fear, increase creativity, you know, around AI. And so, you know, when you increase creativity, people's minds start to go, like, can you do this? Can this be built? Can we integrate these two things to do a certain output? And typically the answer is yes, you just kind of got to figure out how to put it together. And so, yeah, it's been amazing for me to see where some people's minds go when they really understand the capability of these tools, like the one we talked about last week with the American Folklore Society, and hearing that the majority of African languages are not embedded in any type of models. And, you know, some of those languages don't even have characters to them, because there wasn't necessarily a reason to write them down, people were just speaking them. And it's just amazing to me, you know. So how do you start to build those types of solutions? How do you start to think about how to incorporate, you know, African languages, dialects, you know, for that continent? And so being able to build a voice model, you know, that gives individuals, communities the ability to communicate from one continent to another in a live stream, in a podcast, whatever it may be, like, man, you're starting to now open up a new part of the world and access to other human beings. That's going to have an increase of everything, you know, and stuff. And so it was such a reward to hear, to see and to experience that there's still parts of the world where, you know, even large companies like Google and OpenAI haven't tapped into.
Adrian Green: Yeah, yeah, for sure. And I'm sure that those people too, because, you know, I can see it going into the really the ownership and the access at some point. And I wonder if the preservation, because that's what we're talking about, I think a little bit in the conversation, the preservation, you know, who's, who's the holder of that, you know, right now that official like preservation of this verbal language could get, you know, kind of tricky like later on when it comes to that. Who knows though.
Marlon Avery: Let me think if I want to say this. Jensen Huang, CEO of Nvidia, he did a conference, I think it was like earlier this year or late last year, I don't remember, I've watched it like four times and it feels like forever ago. But he said something very interesting during that talk. He said that basically, if you are a country, if you are a community, if you are a population, if you are a group of people, and you have certain elements of languages, dialects and, you know, information artifacts that need to be preserved, need to be protected, and he didn't use this word I'm about to say, but need to be owned, you better figure out how to build that now. And I thought that was so compelling. Because I've said this before, I believe places like China will figure out a way to own their language. I believe the same thing about places like France. And I think what's going to happen is that once they have models of their languages and they figure out a way to own those things, communities, institutions, countries will have to either make embedding or API calls to do translations. And I think that's going to happen at scale. Now, the unfortunate and capitalistic side of that too, as well: you will have businesses, institutions that will claim ownership of other people's history simply because those people haven't taken the time to build it themselves. Oh yeah, or they're just not thinking about it, or they're not there, or, you know, like maybe they don't.
Adrian Green: Have the media the means to really even tell the story, you know.
Marlon Avery: Yeah. You know, and so I think they will start to kind of take that history and I would, I would. Hey, I gotta be careful. Can we get sued as a podcast? Oh, you're muted. You're muted. You're muted.
Adrian Green: We're gonna have that disclaimer up at the beginning of the podcast, so we shouldn't have to worry.
Marlon Avery: Yeah, okay, let me plug this. We're all excited to see what's happening in the world of voice, particularly around AI voice, and we're starting to see the capabilities and the outputs of things that we once desired X amount of months, years, decades ago, whatever it may be. And also, we're starting to see how good it's becoming at mimicking and analyzing and characterizing languages, including dialects, which can be difficult. We also participated in having Jamaican patois as an outro to one of our podcasts as well. But again, that's being outputted by a large institution, and no matter how you want to put it, at the current moment, once they train that region of their AI voice model, they own that. And it's documented that the institution also got individuals from the community to come in and talk and speak, you know, to train those models as well. And so I don't want to say there's anything wrong with that, but again, we live in a capitalistic society, and I think there can be a back-end, unexpected, what do you mean you own this now, type of, you know, backlash. Yeah, yeah. We saw the same thing happen with Facebook.
Adrian Green: Yep.
Marlon Avery: You know, you know, people have been trying to get their data back from Facebook for how many years? Yeah, good luck. Never happening. Yeah. And so they own that data and then they sell it back to you, you know, as well. And so.
Adrian Green: I mean, yeah, I remember when MySpace went away. Actually, it was an article I read recently, or maybe it was a Reddit comment, where someone was like, they still miss all the images they had on their MySpace. Like, they lost them, you know, they're just gone now and there's no way to get them back. And that's sad.
Marlon Avery: I thought MySpace is still up. I think it was like a music platform or something now.
Adrian Green: Yeah, it's still up. But whatever redo that they did, a lot of people lost stuff.
Marlon Avery: Yeah. Oh, yeah. Okay. First of all, MySpace is basically nonexistent. When you start to see, like, the blank, you know, like when an image is trying to load and it has that grayscale image icon there, that's like the majority of the site. Yeah, that's hilarious. Okay. Yeah, but yeah, so as exciting as all these things are, man, it's just like, I don't know, man, we still live in a capitalistic society, you know. No matter what, people are looking to get their share, their ownership, their piece of the pie. And unfortunately, you know, watch this, I'll give this example. China joined the World Trade Organization in 2001, and they had been trying to join for quite some time. And when they finally came to the table, they said, hey, we want to take over one thing, we want to master one thing. And they said, what's that? Manufacturing. We'll make it cheaper, we'll make it faster, you know. And so that agreement was made. A lot of people don't remember that China bankrupted some countries in that process, because they took all those jobs, they took all those opportunities, they took all those deals, you know, away, and they bankrupted a lot of countries in that process. And so it's those types of decisions that can be made where it can impact, you know, communities and families and things like that. But again, there's no avoiding it. It's just like, you know, it's dog eat dog, and, you know, who's the biggest shark in the place.
Adrian Green: And so yeah, yeah, it's business stuff.
Marlon Avery: Yeah, yeah, yeah, for sure. Speaking of business, OpenAI. It says reports suggest OpenAI's upcoming AI model, code-named Orion, is exhibiting less significant performance improvements compared to previous generations, despite reaching the performance level of GPT-4 after 20% of its training. Internal testing reveals that Orion's advancements are less pronounced than the leap from GPT-3 to GPT-4. Some OpenAI researchers express concerns about Orion's ability to consistently surpass its predecessors in specific tasks like coding. This situation arises as OpenAI faces heightened investor expectations following its substantial $6.6 billion funding round. The potential for diminishing returns in AI development raises crucial questions about future advancement and the increasing scarcity of high-quality training data. Adrian, what's your take?
Adrian Green: Well, I think to take this article in, you've got to think about the broader conversation, and that is, you know, how do you train an AI model. You train an AI model with, the short of it is, a lot of information. The better quality that information is, the better formatted it is, the better model you're going to have, the smarter your model is going to be. Now, you have all the data in the world; at some point you're going to consume all the data that's there. You're going to have the facilities to really process all the text data that's out there on the Internet. After that, does the model get smarter? And I think people are estimating that by 2032, all the learning is going to be learned. You know, all the data, all the text that we have now will be fed into, will be processed by AI. So then what? Because there's a lot of implications here, a lot of things to think about. We're talking about how expensive it is to train these models. So that means that if you were to, you know, bulk up the data, say using synthetic data or something like that, you're basically increasing the size of the training data set. But will that synthetic data corrupt or create hallucinations alongside the data that's already there, making the models less accurate, less reliable? So, you know, I think it's going to be the conversation that we've been having around tooling that's going to really take AI to the next level. And we're talking specifically about the tools that are going to incorporate AI to create products and to make integrations easier. So I think that they need to think about an acquisition, they need to be looking around, thinking about, you know, better ways to train data. OpenAI has already mentioned that they are looking at new, better ways to train their models, because without that, I don't see them being competitive, especially with Llama on their heels. Every time they come out with something new, you know, Llama's releasing the, you know what I mean, RC Cola version of the AI model. You know, so it's not that far off. At some point consumers are going to be like, well, I can get this for free. That's if you don't, you know, really take this time to be really strategic in the way that you're going to take these GPT models to the next level.
Marlon Avery: I have an interesting question on this. As these models are being built and trained, and as we're ingesting more data or retraining the data like Meta has done with Llama, here's my question. What are we trying to reach? What's the goal?
Adrian Green: AGI?
Marlon Avery: Even that, even that. I mean, you have people who can't agree on what AGI is, you know. And so, like, what's the goal? Who's to say that what we've done is enough, and then from there you build, you know, RAG applications and stuff on top of it? I think a comparison in my head for this is like with the Wright brothers. So when the Wright brothers built the airplane, how did you determine the, you know, optimal speed of the plane? You've already built something that we've never seen before, and it benefits you in a way nothing has benefited you before. But then you start getting either investors, consumers, people complaining, like, yes, I know I can get to my destination from point A to B, you know, 80% faster, but why can't it go faster? So with the plane example, it's like, okay, I'm getting from Chicago to New York in four hours compared to a 15-hour drive, you know, but why can't it be two hours? And I'm questioning where that's coming from. Is that just the willingness of wanting more of it, even with the lack of appreciation for where we already are? It's just like, you know, we consistently want more of it. And we're not even, bro, even now, the majority of people still don't maximize the use of these models to their full capability. And that's only the clear aspect of just prompting, you know. You still have people now, and I do these workshops, you still have people that complain, it's not giving me what I want, it's not giving me my information, but you're writing six-word prompts and expecting it to, in the middle of that, get a wire connected to an IV that connects to a Neuralink inside your brain and read what you think it needs to know. You still have to spend time and engagement on your prompt, being direct and precise, giving it examples, and, you know, keep building on top of the prompt, because it's a guessing tool, it's a predicting tool. It's not a mind reader, you know. Yeah, so it's not a mind reader and stuff like that, it's a predicting tool. And so it's just like, I don't know. I think diminishing returns only matter if they're not aligning to something that they said it was going to meet, or with a disconnected, misaligned goal of the consumer. Because right now, if there's no more development from OpenAI and we stop at GPT-4o, you know, just right now, bro, this is still amazing.
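A minimal sketch of the gap Marlon is describing here, a vague one-liner versus a direct, precise prompt with examples, using the OpenAI Python client. The model name, task, and sample prompts are assumptions for illustration, not anything from the show.

```python
# A rough sketch of "be direct, precise, and give it examples" (few-shot prompting).
# Assumes the openai Python package and an OPENAI_API_KEY in the environment;
# the model name and example task are placeholders, not anything from the episode.
from openai import OpenAI

client = OpenAI()

# Vague, six-word-style prompt: the model has to guess what you want.
vague = "write something about our coffee shop"

# Direct and precise, with examples the model can imitate.
detailed = """You write 30-second ad scripts for small businesses.

Follow the style of these examples:
Example 1: "Two pirate ships battle inside a coffee cup. Voiceover: 'Storms happen. Coffee helps.'"
Example 2: "A barista draws latte art of a city skyline. Voiceover: 'Your neighborhood, in a cup.'"

Now write one 30-second ad script for a small neighborhood coffee shop known for
Ethiopian pour-overs. Keep it under 60 words and end with a tagline."""

for prompt in (vague, detailed):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Running both prompts side by side usually makes the point on its own: the second one returns something usable because the format, length, and style were spelled out.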
Adrian Green: Yeah, yeah, you can leave it right now and it's still amazing. There's still a thousand more products you can build off there.
Marlon Avery: And there's so many things you can build, you know, on top of this. There's so many things you can do, you know, with these things. And it's just like, we want more. We're not satisfied with getting from Chicago to New York in four hours. Like, no, we want it, you know, to be 30 minutes max with no damage to human bodies, you know, and that should be the goal, and until then, I'm not satisfied. And then if we do reach 30 minutes, okay, why can't it be 10? You know, okay, now why can't I transport, you know, my body and my things? It's just like, I don't know, man. We didn't have access to these tools, these things, what, two, two and a half years ago, and now we're already talking about how it's not good, it's not doing enough, when most of us couldn't even imagine that we'd be here, you know, at this place now.
Adrian Green: Yeah, that's true. But when you think about it, you've got to think about what they're saying: that it's according to the performance benchmarks, like they said. So it's not really consumer opinion about it. It's about them saying, okay, well, from 3 to 4 it improved by this much doing this set of things, and then from 4 to 4o it improved by this much. So they're looking at the margin there.
Marlon Avery: You know, so watch this, two things. So, as you know, I do these workshops and everything, so I hear these complaints, you know, in person a lot of times. The second thing is, it says this situation arises as OpenAI faces heightened investor expectations. Heightened investor expectations. So investors are putting, and I would not say unnecessary expectations, but they're putting expectations on this that it should be, you know, bigger, faster, you know, things like that. But again, that's the deal.
Adrian Green: It's the deal for the investment. I'm gonna make it this, you know, I'm gonna make it do that again.
Marlon Avery: Though, like, man, and I do understand that you just put $6.6 billion, you know, into this round. I get that. But again, though, if you do no more iterations and you simply allow enterprises, consumers and everyone to build on top of this, you know, yes, it may take some time, but it's still a significant, you know, advancement in there. And again, like I said, I've heard the consumer complain about this, and now you're having, you know, investors say, like, hey, we want it to be, you know, better, bigger, faster, stronger, however you want to put it. And also too, here's a, how can I say this: the nerve of individuals, companies to say we want this to be bigger, faster, when the big boys invested in this. You've always had the resources to build some type of thing, you just didn't do it. And now the fact that we've done it, or the fact that they've done it, whatever, now you want to tell us, no, it should be more, it should be doing this and things like that, bro. Like, no, no.
Adrian Green: When you look at, maybe it's my technical. Well, I'm just saying, look at the landscape. The investors may not think their money's safe, because if you say, okay, hypothetically, it stops getting smarter at 4o, Llama is going to keep getting smarter. The other models are going to keep getting smarter, you know, because they're catching up, they're closing that gap. And this is where as an investor you may, you know, get the cold sweats, because you're thinking, okay, in ten years, or in five years, if this model is performing equally to the free and open source models that are out there, then the only thing, or the major thing to me, that we have to rely on as far as our investment in this company paying off is the tooling, is the product suite around this, and their user base. How much they're paying. Yeah. How much they're able to hold on to their users as these free and open source alternatives pop up. You know what I mean? So the consumer complaints are one thing, but there's also money at the end of it, because they're not going to look so sweet if what they do at their core, their core intelligence, isn't as smart as something that's free and open source that you can build tooling around as well.
Marlon Avery: So look, I, I, I understand that in his business and again we, we live in this capitalistic society. I think it's like the, it's, it's the disconnect of the desire just to need more, particularly in an area that didn't exist for the consumer 18 months ago, you know?
Adrian Green: Yeah.
Marlon Avery: And it's just like, you want, you want more, you know. And I get it, there's benefits to getting more too, as well. But, you know, I don't know, maybe it's my technical corporate trauma.
Adrian Green: I mean, I think I'm gonna say that Sam Altman's saying the same thing in those meetings. He's saying the same thing as you're saying. You know, he's like, you can't always get what you want. You know what I mean? But you're gonna get what you need, you know, and we got a good product here. We're, you know, we got a good lead on everyone else. So we're going to look at really strategically how we can approach this the best way so that we remain competitive so that you know, your money is safe with us and it's going to grow and blow up. It's like the last, I mean that's like the major startup story right now I feel like is open AI. Yeah, you know, that's, that's the big one.
Marlon Avery: I just looked this up real quick because I just wanted to see. It says right now OpenAI has a market share of like 79% when it comes to GenAI tools or general platforms.
Adrian Green: Yep.
Marlon Avery: And so yeah, I mean, yeah, I don't doubt that. And I don't disagree that, you know, these conversations are probably being had behind closed doors as well. So yeah, it's interesting.
Adrian Green: Yeah, it is, it is, it's, it's, it's really, it's really interesting. And also it may lead into our next point here because I think we have some more conversations about training today.
Marlon Avery: Yeah, man, hold on. Which one you want to do? The.
Adrian Green: One on the right?
Marlon Avery: Yeah.
Adrian Green: Because I mean that's the real story here. Yeah, yeah.
Marlon Avery: Good, I got you. So Writer, a company specializing in enterprise-focused generative AI solutions, announced a successful $200 million Series C funding round valuing the company at $1.9 billion. The funding will be used to enhance Writer's AI agent capabilities, which aim to automate intricate workflows across teams and systems. This investment underscores Writer's commitment to building secure, reliable and adaptive AI systems specifically designed for the complexities of enterprise use cases. Writer has also established a strong presence in the market, with a client roster that includes prominent companies such as Mars, Ally Bank, Salesforce and Intuit. These success stories highlight the increasing demand for tailored AI solutions that address specific business needs. You know. So Writer has closed their $200 million Series C round. Adrian.
Adrian Green: So this one is interesting to me, and it relates to our last discussion. So Writer is focusing on a suite of enterprise-like applications. I haven't gotten many details about it. Let me see. No.
Marlon Avery: I can tell you it's going to be tools around automated workflows, or RAG applications that can respond with accurate internal consumer, I mean, company data; the ability to put unstructured and structured information into this, so anything from Excel files to PDFs to Notion documents or templates, whatever you want to put in, to PowerPoints, you know, as well. So the ability to put all that stuff into a knowledge base, and then you build workflows, agents in between that, I'm sorry, on top of that, to be able to communicate and help automate and help partner with organizations and also employees. So this is going to fast-forward the process, and individuals like Writer, if they are working on this, or if they're not, Writer, if you are, I want a piece of this. What this is going to fast-forward, and I mentioned this before, is that the new workforce will be powered by and partnered with AI agents, meaning workers from customer relations, customer support, all the way up to the C-suite, individuals will have an AI agent partnered with them, and you will have tailored information and documents to help you make quicker, better, faster decisions. We've all experienced a V1 of it now: when you're on the phone or you're talking to a chat, we've all heard the, let me review your account and give me a moment. We've all heard that. And so what's going to happen is that once you call in, query in, chat in, or whatever it may be, all the information will be populated, you know, for that person's use. Then all the information will be structured into their learning style, or into their recipe, the way they need to receive it, for that individual person. So if a person is more visual, they will have graphs and charts around that person's profile, account or whatever it may be. If a person is more, you know, detail-oriented, they will have, you know, diagrams, workflows or whatever it may be. And basically what it does is allow that individual within that organization or within that team to see and receive information much quicker, so they can kind of get to a solution much faster. And this is what the workforce will become, you know, once we kind of hit the fast-forward button, or once somebody builds out the full solution. And so this is the baseline of what Writer is doing, whether they know it or not.
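A toy sketch of the knowledge-base-plus-agent pattern Marlon is describing. The record names, the customer question, and the keyword-overlap retriever are made up for illustration; a real system would use embeddings and a vector store rather than word overlap.

```python
# Toy sketch of the pattern above: structured and unstructured company data goes
# into a knowledge base, and an agent pulls the relevant pieces before answering.
# Records, customer, and scoring are hypothetical, not any vendor's actual product.
from dataclasses import dataclass

@dataclass
class Record:
    source: str   # e.g. "billing.xlsx", "refund-policy.pdf", "notion/onboarding"
    text: str

KNOWLEDGE_BASE = [
    Record("billing.xlsx", "Customer 4821: card ending 0042 reported lost on 2024-11-02."),
    Record("refund-policy.pdf", "Disputed transactions must be reported within 60 days."),
    Record("notion/onboarding", "New agents should verify identity before discussing accounts."),
]

def retrieve(query: str, k: int = 2) -> list[Record]:
    """Rank records by naive keyword overlap with the query (embedding stand-in)."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda r: len(words & set(r.text.lower().split())),
                    reverse=True)
    return scored[:k]

def build_agent_context(query: str) -> str:
    """Format the retrieved records the way an agent prompt would receive them."""
    hits = retrieve(query)
    context = "\n".join(f"[{r.source}] {r.text}" for r in hits)
    return f"Customer question: {query}\nRelevant company data:\n{context}"

print(build_agent_context("I lost my card and this transaction is not right"))
```

The point of the sketch is the shape of the flow: the agent never answers from thin air, it answers from whatever slice of the knowledge base the retrieval step hands it.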
Adrian Green: Yeah, yeah, they're laying that groundwork, because right now it's all on the generative side. But you're absolutely right, they can go those other directions with it. What I was going to say about how it relates to the last topic we were talking about, around training, is that somehow they figured out how to train their platform, which performs right up there with GPT-4 and Claude 3.5 Sonnet. In October, I'm going to read it from the article here, they released a model, Palmyra X 004, trained almost entirely on synthetic data. So synthetic data was the key thing here that I want to circle back to. But developing this cost 700 grand, which is really cheap compared to what it costs to train an OpenAI model of a comparable size, around $46 million. So using synthetic data was probably cheaper. But we have to get on the topic of training here, because it's their approach to training that Writer says made it more effective. So if you think, just hypothetically here, if the OpenAI model was trained using the Writer team's approach, would that be a smarter model? Because now you're not just using synthetic data, you're using actual data. So these are the conversations, I think, you know, OpenAI is for sure looking at Writer, companies like Writer that are being more efficient with their training approaches. They're going to have to really catch up to that. They're going to have to really be aware and know how to do whatever technique Writer is doing, or companies like them, to train their models better. Because once the data runs out, it's going to be about the training, training your models, you know, better. So I thought that was interesting, that for under a million dollars, a fraction of the cost, they were able to get something that performs just as good with synthetic data. So there's a lot riding on Writer about that. So we'll see. A $200 million Series C, I mean, that's a modest raise there. It's nothing to sneeze at, you know, but it's a modest raise. So these are the things in the AI industry that I think are going to be important to keep an eye on.
Marlon Avery: Yeah.
Adrian Green: Because, you know, we got nuclear power plants standing up, we got Three Mile Island, we got all kinds of things happening in order to power these training attempts. So, you know.
Marlon Avery: Yeah, I think one of the things to circle back on there: it said they've developed a prominent client roster with companies like, you know, Ally Bank, Salesforce and Intuit. So let's take some of those examples. Let's take Intuit and let's take Ally Bank. So if you are Ally Bank, you know, one of the main things you want to do is get to customer connections, customer problems, quicker. You want to be able to, you know, automate a lot of these things. And they already have a nice amount of data from their previous phone calls, emails, processes, of what customers and clients are trying to do, or what they're trying to solve, whatever it may be. One thing I've learned from building solutions in this area is that a large amount of them are just saying the same thing over and over again. There's a large amount of that, you know. Hey, I lost my card. What's my PIN? This transaction is not right. You know, you're solving the same thing over and over and over again. So when you're doing the same thing over and over again, it is very easy to incorporate or build synthetic data around that, you know, because it's just the same thing over again. The only thing you've got to do is train the model how to see these patterns, you know, via email, via phone call, whatever support channel it may be, and then how to pathway to a result. And so, yeah, I think it's clever on their end, because they've skipped a step in a way. They didn't have to necessarily, you know, go gather a bunch of data to train the model. They didn't necessarily have to write contracts around, hey, share your data with us, we're protected, or whatever it may be. And sometimes, you know, companies may even try to charge you, you know, for those things. And so, yeah, they skipped that step, you know. But watch this. Imagine when somebody hears that you're a software engineer. I imagine you've had at least 100 conversations with people around, how do you get started? How did you teach yourself? You know, what's the job market like? And of those hundred people, would you say that you start answering the same questions over and over again?
Adrian Green: Yeah, for sure.
Marlon Avery: Yeah, you know, it's the same thing. And so with these large companies, you just now have to figure out how to build it at scale, you know, as well. And so, yeah, I mean, so speaking.
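A rough illustration of the repetitive-ticket point above, a toy generator rather than a description of Writer's actual pipeline: when most support requests fall into a handful of intents, templated slots can be expanded into many synthetic training pairs.

```python
# Rough illustration only: when most support requests are the same handful of
# intents, you can template them into lots of synthetic (utterance, intent) pairs.
# The intents, templates, and slot values here are made up for the example.
import itertools
import json
import random

INTENT_TEMPLATES = {
    "lost_card": [
        "I lost my {product} card, can you block it?",
        "My {product} card has been missing since {day}.",
    ],
    "dispute_transaction": [
        "There's a {amount} charge on {day} I don't recognize.",
        "This {amount} transaction is not right, please reverse it.",
    ],
    "reset_pin": [
        "I forgot the PIN for my {product} account.",
        "How do I reset my PIN?",
    ],
}

FILLERS = {
    "product": ["debit", "credit", "business"],
    "day": ["Monday", "yesterday", "March 3rd"],
    "amount": ["$40", "$125.99", "$1,000"],
}

def generate(n_per_intent: int = 5, seed: int = 0) -> list[dict]:
    """Produce labeled examples by filling templates with randomly chosen slots."""
    random.seed(seed)
    examples = []
    for intent, templates in INTENT_TEMPLATES.items():
        for template in itertools.islice(itertools.cycle(templates), n_per_intent):
            filled = template.format(**{k: random.choice(v) for k, v in FILLERS.items()})
            examples.append({"text": filled, "label": intent})
    return examples

print(json.dumps(generate(), indent=2))
```

Real synthetic-data pipelines are far more involved (paraphrasing with an LLM, filtering, deduplication), but the cost logic is the same one Adrian points at: generating examples is cheap compared to collecting and licensing real ones.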
Adrian Green: Yeah, speaking of that, you've got to really give it to these guys too, because the two founders, I think this is a Dubai-based company, but they started another company before this, Qordoba, that was about localization. It had nothing to do with AI. Localization, like, sorry, localizing your product to new markets. Okay, so like translators, subject matter experts, things like that. I don't think they had, like, an algorithmic approach, not necessarily an AI approach, to do that. But they went from that to this company. So in between then, hither and yon, they definitely got themselves up to speed on AI and launched a company, you know, and were able to train a model at less cost. I mean, really scrappy, these guys.
Marlon Avery: Yeah, yeah, for sure, for sure. This next subject, there's some scrappiness in this as well. And this is a subject that, in my opinion, is not happening enough. I think it should be happening more. I have friends in this area, and they're telling me that they're starting to see an increase in the lack of urgency on the, let's call it the good guy side. And so, yeah. The emergence of generative AI presents new challenges and opportunities in the realm of cybersecurity. It says threat actors are increasingly using generative AI to enhance their attacks, creating more convincing phishing emails, developing malware, and exploiting vulnerabilities in systems. It says cybersecurity professionals face the imperative to adapt and master generative AI technologies to counter these evolving threats while adhering to ethical regulations and guidelines. It says to effectively leverage generative AI for defense, robust strategies must be developed that prioritize collaboration between security and development teams, mitigate internal risks, and emphasize cyber hygiene practices. Furthermore, emerging regulations, including the UK Cyber Security and Resilience Bill and the EU AI Act, underscore the need for businesses to adapt their operations and compliance measures to address the unique challenges presented by generative AI in cybersecurity. So I'm gonna start off with this one. So I built my first AI product solution in, what year is this? So 2019, 2019, I think, which was an AI grant writer on top of GPT-3. And, you know, once I hit compile, I think I was more surprised than anybody that it worked, and then, you know, after that, how well it worked. And so from there, my mind has been just racing about what can be done, how can you speed up, advance, build on top of, you know, with different solutions, different use cases, and I've been privileged to find so many different things that can be built, you know, around these things. And in the midst of that, once I started to understand the power of these things, my mind definitely started to lean into the cyber world, and not necessarily the solutions, but how well attacks and phishing emails and all these things can be developed using these tools. And so I told somebody the other day, somebody asked me, you know, if I want to become a software engineer today, what would I do? Where would I start? And I still think there's value in becoming an engineer, not necessarily just writing code and understanding syntax, but the understanding of problem solving and system building, which will continue to be valued moving forward. My answer to that was, if it's a boot camp, if it's an institution, if it's a workshop, whatever it may be, if they are not teaching you how to write code, understand systems, and build infrastructure in partnership with AI, it's the wrong place to be, because that's where everything is headed.
And with that, I've been watching the cybersecurity world, particularly around education, and I know there's probably one out there, but personally, I haven't seen one institution or organization that has fully adopted and integrated building with and teaching with these tools, which is a problem, because now you're preparing students and a workforce to get into an area where they're about to bring knives to a gunfight. And you're just not going to be properly prepared, you're not going to be properly trained, and you're going to take some beatings, I expect, for quite some time until we can get to that place. I mean, we're already limited on cybersecurity engineers now, and now you're talking about beefing up the attacker side with superpowers, you know, around these areas. And we're not doing anywhere near as good of a job preparing, you know, the folks we have coming into the workforce, or even the current ones now. And so the backlash is that, unfortunately, we're going to feel that in some areas, and if it's not at an individual level, it's definitely going to be, you know, organizations that impact us, companies that impact us, simply because, and to be honest, I'm not even sure what the answer is. I'm not sure if it's a, you know, mandated thing by the government. I'm not sure what the answer might be right now. I know what we can do: we can start having these conversations like this to help educate the masses on why this is a must-need thing. And also, I don't want to skip over the fact that there are countries and organizations who have this at the forefront of their minds. I'm not saying that we're the only ones who know what to do and nobody else is doing it, I'm not putting that out there. You know, like it mentioned, the UK cybersecurity bill and the EU AI Act as well. So there are organizations who are doing this, but at the larger scope, we're still lacking there. And then also, on top of that, when you insert a tool like voice AI, and we've already started to see, you know, phone calls being made using voice cloning software, acting as somebody else to, you know, ask for money, or, you know, saying, hey, we have your child, and they put the child on the phone, but it's a voice, it's a script being read by an, you know, AI voice agent. And now you're about to allocate, you know, thousands of dollars when maybe your child is at school, safe, and you're just panicking because, you know, it's like, yeah, that's what he or she sounds like, you know.
Adrian Green: So, yeah, it's a scary thing.
Marlon Avery: Yeah. And so I think we gotta have this conversation more to figure out how we can battle this landscape of cybersecurity.
Adrian Green: To me, I feel like you first have to define the scope of what you're trying to protect, where you're trying to look. Reading about it and what people have done, I feel like they're still defining that: what is the AI security framework that's going to need to be followed here? You know, one of my biggest disappointments from learning about computers, before I even had a computer, just from what I'd seen in media growing up, was that hacking does not take place with sunglasses on behind a keyboard. The hacking takes place with emails, with phishing emails, with social engineering, with this whole other side of it that I had never thought of, that has little to do with the ones and zeros. I mean, a lot of it does, don't get me wrong, but when we're talking about massive data leaks and things like that, a lot of the time it's from social engineering. It's from receiving a malicious PDF. And at this point they can throw in image generation with that, some kind of deepfake email, to make something look really authentic and trustworthy.
Marlon Avery: Yeah. And, or pattern recognition, human pattern recognition. So understanding that human beings are creatures of habit. One of the hacking moves I realized attackers were using is, once they figure out you have accounts at certain businesses, certain places, and they know those places have limited or low security, they'll go hack those low-security places to get your passwords, because of the probability that you're using the same password at the place they actually want to get into. And I'm not calling anybody out, but let's say, for example, they go hack your PetSmart, they go hack your Walgreens, places that may not have high-level security. And then they recognize, man, you use the same password for these seven accounts, so the probability that you're using that password on the big one they actually want to get into, everything could be the same thing. Or something like...
Adrian Green: Or something adjacent to it, you know, using the same numbers or something in a different way.
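[Editor's note: the reuse pattern Marlon describes is what the security world usually calls credential stuffing, and one practical defense is screening passwords against known breach corpora before they're accepted. Below is a minimal sketch, assuming Python and the public Have I Been Pwned "range" API; the function name and example passwords are illustrative, not anything from the episode.]

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach dumps.

    Uses the Have I Been Pwned k-anonymity range API: only the first five
    hex characters of the SHA-1 hash ever leave this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Response is lines of "HASH_SUFFIX:COUNT" for every hash sharing the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    for pw in ["password123", "correct horse battery staple"]:  # illustrative only
        hits = pwned_count(pw)
        print(f"{pw!r}: found in {hits} breaches" if hits else f"{pw!r}: not found")
```

The k-anonymity design is the point of the sketch: a signup or password-reset flow can reject heavily breached passwords without ever sending the password, or even its full hash, to a third party.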
Marlon Avery: Yeah, and now you can just use chat... I'm not gonna say that, but yeah, go ahead.
Adrian Green: Yeah, I mean, what we can also say is that this whole security conversation changes if a quantum computer comes out and it's readily available. Security is out the window. Security is out the window with that. So with the AI security conversation, we also have the quantum conversation looming in the background, which I think breaks Bitcoin encryption, it breaks everything.
Marlon Avery: So you think it can get through it? It can be fast enough? How does it work, it can get through it fast enough to mimic a blockchain and trick the computer into thinking there's an actual transaction or an actual block that isn't there, or that a replica is the real thing?
Adrian Green: That's what I heard. That's What I heard, because I think it's the. It's the shop 50, 256 encryption that I think is standard on the blockchains and that, you know, the quantum computer breaks that. The military doesn't even use SHA256 anymore. So, you know.
Marlon Avery: Maybe we should become cybersecurity engineers, because it sounds like we'd be in business.
Adrian Green: For real, man. Cybersecurity is like the war on drugs, man. It's hard, and it's ongoing. It's ongoing.
Marlon Avery: You know, just be happy we're the good guys, because, you know, we could very easily become the Frank Lucas of cyber threats.
Adrian Green: Oh, for sure. For sure. We keep the white hat on, you know. We're only advocates, we would never have you doing anything illegal. For real.
Marlon Avery: The Frank. The Frank Lucas of cyber threats is hilarious.
Adrian Green: It's hilarious, man. Frank Lucas.
Marlon Avery: That's another one of our shirts that's gonna be there.
Adrian Green: Yep.
Marlon Avery: Yeah, a "just ask me" shirt. For real: "Just be happy I'm not the Frank Lucas of cyber threats." Oh, man, that's funny. What was that other shirt? What was the other one?
Adrian Green: Snitches get unplugged. They just get unplugged.
Marlon Avery: Oh, man. Yeah, we're gonna have that available. That's hilarious. Yeah. Well, man, this has been a great ep, man. How do you feel about it?
Adrian Green: It's been good. It's been good, man. Not a lot of new news, but lots of learnings coming up. So it's been good. It's been nice and steady in the AI world as we watch how everything develops in these times.
Marlon Avery: So for our viewers and for our listeners, man, definitely stay tuned. Soon enough here, we're going to be developing our own community as well, where you guys will be able to interact with us kind of behind the scenes, but also we'll be hosting our own internal, community-led how-to videos, workshops, and so on. We understand, man, everybody is figuring this out, everybody is building and trying to figure out how to use and how to benefit from these tools. So we'll step in and be kind of a liaison, instructors and teachers, and help guide us all in the right direction. All of us, me, Adrian, Sekou, we've all built solutions, we've built agents, we've built a lot of these examples ourselves. So for the engineers, we can teach you guys how to build it, but also, for the day-to-day professionals, we can teach you how to use it from a no-code aspect as well. So stay tuned, man, stay tuned for that. And so yeah, man, this is EP10, where we talked about Adobe announcing their new video model, OpenAI having diminishing returns, Writer securing $200 million as it looks to fuel enterprise AI solutions, and then the generative AI cybersecurity battlefield. Hopefully that becomes less of a battlefield. Right. Just...
Adrian Green: Yeah.
Marlon Avery: Nuke coming in. Yeah, so hopefully, like I said, man, this podcast is a platform for all of us to learn, engage, and move forward in this AI-centric universe. And so, yeah, man, we'll continue to do so. So, guys, man, Adrian, man, give us a closing statement.
Adrian Green: Guys, it's been a pleasure. Stay motivated, stay positive and we'll see you next time.
Marlon Avery: Okay? All right, Peace out. Peace.