Thriving In The Digital Age

Thriving In The Digital Age: Jack Heslin and The AI Conversation

Joe Crist Season 1 Episode 16

In this episode of Thriving in the Digital Age, host Joe Crist speaks with Jack Heslin, founder of the AI Conversation, about the transformative potential of AI technology. They discuss the confusion surrounding AI, its limitations, and the importance of understanding its role as a thought partner rather than a replacement for human intelligence. The conversation also touches on the future of work, the impact of AI on job markets, and practical steps for individuals and businesses to engage with AI effectively. Jack emphasizes the need for community and shared experiences in navigating the evolving landscape of AI, as well as the importance of curiosity and adaptation in a rapidly changing world.



Joe Crist (00:01.475)
Hey everybody, welcome to another episode of Thriving in the Digital Age. I'm your host, Joe Crist. Joining me today is Jack Heslin. He is the founder of the AI Conversation. Jack, thank you so much for joining us today. Could you tell the audience a little bit about yourself?

Jack Heslin (00:15.908)
Hi, Joe, and thank you very much for the invitation to join your podcast. I'm about a 30-year sales veteran who, without ever clearly meaning to, always found myself kind of tech-centric. Spent a lot of years in the telecom space, that segued into the IT space, that segued into the 3D printing space. And then about four years ago, totally out of the blue, had an opportunity to join a

You wouldn't exactly call it an AI startup, but it was a tech startup with an AI component. Came on board as the VP of Sales. That actually did not take off. The company doesn't exist anymore. But my exposure to AI during that time led me to reading some of those books back there on the shelf about AI, following different thought leaders, thinking, this just might be the most transformative technology since the printing press.

And I don't think we're going to know the answer to that for many years, but this is the place to keep an eye on and become more familiar with.

Joe Crist (01:27.041)
Yeah, absolutely. AI has been explosive in the past couple of years. And when it comes to even the knowledge of AI, what challenges are you really seeing for especially the early adopters around AI?

Jack Heslin (01:41.528)
Well, I think the big one really, and this has everything to do with why I started the AI Conversation, is confusion. That's the challenge. You know, if you take a step back and you're in a traditional company with a traditional sales role, selling a product or a service, you can go into a company and say, what challenges are you having with this?

whether it's hiring good salespeople, whether it's training, whether it's improving a manufacturing process, you can get specific. But you walk into a company today and say, what do you want to use AI for? And the answer might be, well, as soon as we know what it is, we'll let you know what we'll use it for. So my own personal opinion, and I should probably stipulate this upfront, is I wish AI had a different name. I don't think it's intelligent at all.

It's a bunch of number crunching with ones and zeros at a really, really fast pace. So the challenge is, where do you apply it? And to know where to apply it, you have to know what you want to do better in your business. But then you kind of have to know what it does well in the first place. You know, you're an efficiency guy, right? So if you don't know the capability of something, which I would say today, millions of people still don't,

and there are still huge issues to be addressed in AI, then how do you know what you want to apply it to in your company? So these get to be very, very tricky conversations, a little bit of a chicken and the egg quality to them.

Joe Crist (03:23.587)
Right. You bring up a really good point too. It's how do you define it, right? And I think a lot of people in their minds, at least the folks I've talked to, especially the non-technical folks, when I talk about AI, a lot of them automatically go to Skynet from Terminator, right? Where they're like...

Jack Heslin (03:43.542)
Right.

Joe Crist (03:44.951)
There's this whole fear of, like, it's going to take everybody's jobs and do all this stuff. I'm like, no, AI is actually fairly stupid, right? Today, at least. And the thing is, the way we define AI, when we hear the word intelligence, I'm imagining a lot of people are thinking it's like a person. No, it's like a dumb dog.

Jack Heslin (03:52.526)
Well, yes, it is. It is.

Joe Crist (04:09.443)
It knows a few commands and the context it works in, right? So really defining it has been such a massive challenge for so many people. And like I guess we were talking about before, right, I'm an efficiency guy. I care about making things go fast and being accurate. But if we don't even know what we're trying to apply this to, right, if we don't even have the base knowledge, it becomes very difficult to say, I'm going to have AI solve this problem for me, because I don't even know what AI does.

Jack Heslin (04:37.91)
Mm-hmm. What it does, but to your point about, you know, it's not like a person, the point that I come back to again and again, I don't believe there's a fixed number on just how many AI servers there are in the world. I'll just say rhetorically, I think it's probably millions between China, the US, all the AI infrastructure. And what I like to point out to people is,

If it was possible for everyone in the world today to not use another AI application, to just stop right now, at no point will any of those millions of AI servers say, hey, where are you guys? There's no initiative there. There's no ability for anything in AI to move forward on its own. It is all prompt driven. We're the drivers.

It is likely, between hallucinations and black boxes, that we might not always know where this thing is going, even though we're driving it. But it is not like a person. It cannot take initiative, and it cannot imagine an image or anything like that that we don't tell it to do. So it is entirely a reactive technology. And that's why I wish it didn't have the name that it has.

Joe Crist (06:02.613)
It's not. It's, yeah... And I think a lot of people get confused there, right? Where they think AI is just a fix-all. It's like, we have AI, we're using ChatGPT to create this stuff. But in reality, it's just being fed prompts and then looking into the big wide world of answers, right? It's...

You know, I had a really good conversation with a buddy of mine, Quentin, a long time ago, and we talked about context and AI, right? And this really goes to your point, like, it's not really that intelligent. As human beings, as you and I communicate, we can communicate through context. I have an understanding that you know things, and you have an understanding that I know things, or at least the assumption that we know these things, right? AI doesn't have that.

AI will go off a general question and give you a very general answer. The more narrow you get in your question and your prompts, the better your response is going to be, because you're creating that context for it to actually search. And a lot of people just don't know that. They treat it like Google, where they're asking, hey, here's this question. But Google is doing the same thing AI does. It just goes back and finds the answers it thinks you want to hear.

Jack Heslin (07:10.872)
That's right. I do think a big concern that I have, and I just went through this in the community a couple days ago, even though it can only respond to a prompt, and I wrote this up in the community, I guess it was Sunday or Monday, is

Joe Crist (07:13.389)
Right.

Jack Heslin (07:36.202)
even though it's responding to a prompt, is it going to come back with a better quality response than I could have given myself? So my concern is not about the technology. My concern is, are people going to say, this is a thought partner, which I think is the right way to look at AI, this is a thought partner, or,

this thing's gonna write a better reply than I can anyway, so I'll just have it write the replies and use that. It's gonna do a better output than I can do. And that's another reason why the name makes me uneasy. And I hope this doesn't breed, and I'd hope a lot of people would yell at me about this, because I believe in having the conversation, having the debate, I hope it doesn't breed a kind of laziness. I'll just get some response from the AI here.

And the term thought partner isn't mine. I first heard it from Scott Galloway on his podcast. Think of AI not as a search engine, but as a thought partner. And I really like that phrase, and that's the phrase I try to build on. But I hope that people will not say, you know, I can't write as well as that, I can't come up with an image as well as that, so I'll just put in my prompt and give that to Joe Crist and tell him, here. I think we're going to be working that out in the years ahead.

Joe Crist (09:00.695)
Yeah.

Jack Heslin (09:03.236)
I get a little nervous about that.

Joe Crist (09:06.697)
And I think that's an important distinction to make, a thought partner, right? And I really do like that. One of the challenges I've seen from a lot of folks, it's...

They use AI as a fix for things. And a lot of people have this fear of, AI is going to take jobs. No, AI is not going to take jobs. People who know how to use AI and leverage it as a thought partner are the ones who will take those jobs. The thing is, AI is so limited in what it can actually do that it needs a person to determine the value of what it created.

Jack Heslin (09:40.344)
That's right. That's right.

Joe Crist (09:41.921)
Right? And that's the real big key. If you're so dependent on AI to solve everything, and not really assessing the value, or scoring it, or sharing it with your customers where they're telling you the value of what you're producing, if you're not changing from there, you basically have this very powerful noisemaker.

Jack Heslin (10:04.366)
Powerful noisemaker, but Joe, you're segueing into the really big, and this isn't just my view, but it's certainly a view I share, the really big concern about using AI even as a thought partner is it can be wrong. This is based on probabilities.

Now, the probability of being wrong for your particular prompt, for your particular application, might be minuscule. It might be very, very tiny that it got something wrong. But the big, big problem is it could come back, and we see this in chess, we see this in the game of Go with Lee Sedol in Korea, it'll come back with an output that a human being might not be able to comprehend, but it was a better output.

Are you familiar with the Go match in Korea? Do you know that? I won't take up the whole time with this, but very, very broadly. So you know the game of Go. It's the Asian game with the board and all the disks, and you move the disks around, black versus white. Well, there's a fellow, I think he's Korean, Lee Sedol, who was the world champ by, you know, a very young age. And this game has been around for a long, long time in Asian culture and history.

Joe Crist (11:03.491)
I am not.

Joe Crist (11:14.23)
yes, yes, yes.

Jack Heslin (11:27.552)
And it's every bit as big as chess is in other parts of the world. And you can find, I'll find the link, I'll try to send it to you, you can find the Netflix documentary on this, where this fellow sat down to play the AI model that a British company had built to play Go. And I think they played five games. And throughout the series, the people who are narrating this are saying, what move is this thing making?

And at one point, you can even see the expression on Lee's face. Like, why would you make that move? That move makes no sense. But it would go on to win, with a strategy no human being ever thought of.

So in that case, these outputs that no one could understand were the right outputs. Now, of the five, I think it was five games, Lee Sedol did win one. And to the best of my knowledge, he remains the only human being to ever beat an AI model. And then the same thing happened in chess. It's coming out with strategies no one ever thought of. The big risk is, it can be wrong. So here's an output that the human mind can't comprehend. Is it right or is it wrong? And now we're going to start using this in healthcare. Someone's going to probably use it to determine monetary and fiscal policies. Someone will probably use it to determine military strategies. What if the recommendation is so outside the box of what you and I can relate to, we just have to trust it?

Are we really going to do that? And the people who understand this in academia at a really granular level are very candid in saying they don't know. The issues of bias, the black box, the hallucinations: no one is saying, we'll figure all that out and it'll go away eventually. No one has said that.

Jack Heslin (13:28.898)
And there are people in academia who think, you know, how did this thing get commercialized so fast? So why did I start the AI Conversation? Because this genie's out of the bottle. And Joe's opinion matters. My opinion matters. That person's opinion matters. And we should participate in this conversation.

Joe Crist (13:49.867)
Absolutely. Yeah, you know...

It's really hard as human beings, right? Because we can only predict so far into the future, right? I mean, obviously the closer we are to that timeline, the more accurate it's going to be. And it's the same for AI, but we can't see as far, because we can't process that much data. And that's really a strength of AI, right? AI can crunch a lot of data and give you a result. Now, the accuracy of the result is a totally different thing, right?

Jack Heslin (13:58.723)
Mm-hmm.

Jack Heslin (14:10.434)
No, no we can.

Joe Crist (14:20.221)
It's either going to be really good, depending on the data you already have, but if it's just making things up, or, it doesn't really make things up, it's just taking guesses, right? And then, yeah, exactly, right.

Jack Heslin (14:28.452)
That's right. That's exactly right. It's all based on the training data. And I want to be very careful in how I talk here, because I don't want to sound like I'm judging, but you and I, as users of the technology, we have no basis for saying, OK, here's how Google trains its AI models compared to how ChatGPT trains theirs.

All that's opaque, and they're not sharing it. And none of them, I think you and I talked about this once, none of them make an effort to distinguish themselves from the others. They just say, you know, we can do all these things, but why are you better than the others? They all keep that very behind the curtain, if you will. So I think the consumer gets shortchanged. There's no...

Joe Crist (15:02.306)
Right, yeah.

Jack Heslin (15:27.516)
legal criteria, and I'm not saying there has to be or should be, but there's no guidelines to say, here's how you tell us how you trained your models, on what amount of data. How do you know it wasn't biased? What was missing in it? You know, how did you clean it up? People don't even know to ask those questions.

Joe Crist (15:43.555)
All right.

Joe Crist (15:47.971)
Yeah, the thing to think about too, when it comes to actually using data, is AI won't exist without data. You need good data structure and governance to actually have AI work effectively for anybody, or create any sort of value. But the real challenge here, well, one of the real challenges, it's human bias, right, in that data.

Jack Heslin (15:54.104)
That's right.

Jack Heslin (16:08.674)
Mm-hmm. Mm-hmm.

Joe Crist (16:09.699)
AI is going to look at that. And if you look at a lot of data, especially data that's referencing other data, AI will look at that too. It's like, well, this seems to be more correct, because this is what I'm seeing more often. There's a higher probability of this being the right answer. And AI is going to look at that. And a lot of models are designed that way, to look at whatever the majority says is truth, or what it perceives as truth.

Jack Heslin (16:30.072)
That's right.

That's exactly right. And if the majority is wrong, which it certainly can be, then the model comes out wrong. And I know it kind of comes back full circle, but this comes back to what I said a few minutes ago about the trust issue. How much can you trust the output? I'm not trying to pick on anyone, but it was a big news story, I think it was about a year ago this time, that attorney in New York who used an LLM

to come up with legal precedents, and he didn't check it. And he said in front of the court, well, here's the precedent, totally fictional. And he ended up apologizing to the court, saying, I'm really sorry about that. But this guy was probably closer to my age, a little bit older, and just didn't know that, look, this isn't like a search engine. You're more in charge. And I suspect there'll be more and more examples of that.

Joe Crist (17:04.245)
I've heard that.

Jack Heslin (17:31.812)
This isn't something we should just give carte blanche to.

Joe Crist (17:35.317)
Absolutely. So obviously there's a lot of challenges out there, right? You know, it's still new technology. We're still learning it. There's not a lot of regulation in this space. What are some solutions people can start with, like just the average person, even starting today, to help them really navigate AI? And not just, you know, the person trying to make their life a little easier, but even businesses. Like, where should we be starting?

Jack Heslin (18:02.126)
Well, I get that question a lot. Before I ever talk about our community, I just say, you should start playing with it. And don't just play with one. As a matter of fact, there's a woman I was talking to recently who was very appreciative that I kept mentioning different AI tools. She said, Jack, all I ever use is ChatGPT. It never occurred to me to try Claude. It never occurred to me to try Gemini. So I say to people, to answer your question, start playing around.

Start playing around, and you'll notice that you can put the identical prompt into different LLMs and get very similar answers, but you're not going to get identical answers. And that should be something people ask about. Well, why not? Why am I not getting identical answers? Now, in many cases, the differences could be minimal. I did something, a very simplistic test with it.

Joe Crist (18:43.265)
Absolutely.

Jack Heslin (19:00.48)
It was over the summer. I went into the major LLMs. I went into Claude, I went into Gemini, I went into ChatGPT. And I said, finish this sentence: today for lunch, I had peanut butter and... One of them came back and simply said, the answer is probably jelly. And then another one, I think it may have been Gemini, said, the answer is probably jelly, but here are several other possibilities. So, that's interesting. One of them said, here's your likely answer.

And then I forget what the third one was, but the point is the answers were not identical. So to answer your question, start playing with it. Nothing real serious. Don't ask it a question about your high blood pressure. Don't ask it a question about your retirement planning funds or anything. Just start playing around. And then to segue to your comment about your business, step back and think about your business. Every business has an area it can improve upon.

Joe Crist (19:43.063)
Yeah.

Jack Heslin (20:00.728)
What would you like to do better? And then start to talk, you know, there's plenty of information now about AI and marketing, AI and HR, AI and this. The reason I, and I'm kind of getting into self-promo here, but the reason I started the AI Conversation was so people could have a neutral environment to say, I'm trying to figure out this, AI and blank. I run, you know, I'm an efficiency expert.

Or I run a law firm, or I run a CPA firm. Where should we be using these tools? And then, based on, we talked about this, The Wisdom of Crowds. That's a book I've read twice now. My belief is if we get enough people talking about this, we'll create a kind of best practices. And your two cents matters.

I think my two cents matters, their two cents matters. And the point of the community is to give an objective, neutral platform for people to ask questions, get answers, share experiences. So maybe they can cut out spending money on the wrong software. Another thing that kind of pushed me down this road is I saw, I won't say the name of it, but I saw a website over the summer from a

startup, and the page said, Supercharge your business with AI. And I'm thinking, what a bunch of bull, you know? And I've spent a lot of my career in small to medium-sized companies. They don't have the resources to really research things deeply quite often. So can we help? What does Joe think? What does Jack think? What does Spencer think? What does Susan think?

Joe Crist (21:34.466)
Right.

Jack Heslin (21:55.46)
and go on from there.

Joe Crist (21:58.509)
Yeah, absolutely. And so you brought up an interesting phrase, best practices, right? And I think one of the things you mentioned too is about picking what's right for somebody. One of the things I like to preach is building the right practices for you and your company, right? And it's talking about experiences, right? Where it's like, okay, hey, we've used this and here's the outcome, right? A lot of people pick whatever's popular.

It's just the nature of humanity, right? If we know about it, that's a solution, that's the possibility, right? But there's a lot we don't know about. And hearing the stories of people and what they've tried and where they've succeeded and failed is really big in providing that education, right? Providing that knowledge and really sharing it. Because a lot of people run into things like, there's way more than just ChatGPT out there, like way more. So...

Jack Heslin (22:42.201)
Yeah.

Jack Heslin (22:51.64)
Yes. Yes.

Joe Crist (22:54.239)
And taking those different tools, whether you're using ChatGPT or Gemini or Zapier or Make, there's so many different avenues for people, and everybody has something that's right for them. It's not a, this is your only option, and you're gonna pick that, and you're gonna figure it out or too bad. This may not be right for you.

Right? Or this may be a little better for what your needs are, and this is going to help you get to where you want to go. And by actually sharing that information, having that community, it opens up a lot of possibilities, right? Where it's just like, this product has the same thing, or this actually seems more like what you're looking for. 'Cause a lot of people will just be like, ChatGPT does everything. Not quite, but...

Jack Heslin (23:42.678)
Not quite. And Joe, that was perfectly expressed and ties in exactly to what that woman said to me a few days ago. Jack, it never occurred to me not to use ChatGPT. I said, there's a whole bunch of them. They've just done a very good job marketing this. But play around.

Joe Crist (24:02.007)
Yeah, and it's the whole thing too. Like, I think it's part of human nature, and even evolutionary psychology, right? Where...

human beings are very naturally curious creatures. And with that, we like to find things out. Well, if we ask a question and we get an answer, and this is probably why Google is so powerful, because for a long time, people would just go on the internet, ask a question, and then get a response. Instantaneous. And that's a powerful thing. Everyone's thoughts and ideas and questions and concerns went to Google. And now it's going to ChatGPT and the other platforms.

Jack Heslin (24:13.604)
Hmm?

Joe Crist (24:41.458)
But because of that, people have just become so dependent on it. But the question I find a lot of people aren't really asking themselves is, why do I need to know this? Right? What problem am I trying to solve in my life? And I think taking the approach of, I'm trying to do this, right?

Jack Heslin (24:45.634)
Yeah. Yeah.

Joe Crist (25:04.739)
Well, I need to ask something that is more focused on solving that kind of problem than getting a general answer. And being able to really, truly define the problem you're experiencing, or the challenge you're experiencing, will lead you to a much better solution in the end, as opposed to just, hey, I only know about this. So starting with better questions will give you better answers, if that makes sense.

Jack Heslin (25:26.052)
It does, it does make sense. And of course, it brings up the whole sort of philosophical debate. How do you know you're asking a better question? Maybe your question is kind of shallow. I've certainly asked shallow questions in my life and didn't realize it until I got a result. And I'm talking about life, not just my career. I got a result and realized I didn't take something nearly seriously enough. To put this again, it all kind of ties together with

Joe Crist (25:34.21)
Right.

Jack Heslin (25:55.8)
that thought partner idea. When I gave a presentation on AI about a year and a half ago to a local community here, I said, you know, it's entirely possible that this technology will force us to hold the mirror up to ourselves like no other technology we've ever developed. And if we're getting answers we don't like, if we're getting outputs we don't like, well, where did that training data come from?

Joe Crist (26:25.389)
Yeah.

Jack Heslin (26:25.72)
The training data comes from websites. Who writes websites? Human beings write websites. You don't like the output? This may, and this is not a prediction, but it's possible that over the coming generation, couple of generations, this technology will make us look in the mirror, force us to look in the mirror, more than anything we've ever built before. It's possible. So that's not a prediction, but we'll see.

Joe Crist (26:49.121)
Yeah. Yeah, I think it's going to happen. You know, that brings up another really important question, right? So obviously there's a lot of capital going into AI. There's a lot of people trying to find new tools, or create new tools and use new tools for AI. And it really is impacting the world, like how we live. Where do you see the future when it comes to AI? I know, big question, right? But...

Jack Heslin (27:13.284)
Mmm.

Yeah, I could keep it short and say, I don't know. But I can't say that, I'm the expert. So let's answer that question with the phrase best practices again. So when I get into that conversation with people, I say, yes, best practices, on an ongoing basis. It's not going to be static. Six months from now, certainly a year from now,

Jack Heslin (27:42.882)
the LLM models will be better trained. They'll be quicker, they'll be faster. There'll be more companies pushing the envelope of what's possible. The only thing I have a deep conviction about is that this will change the nature of work in many, perhaps all, but certainly many sectors. One of the things that really elbowed me to stay in this space was

when Ginni Rometty was CEO of IBM, I think it was 2018 or 2019, she was on CNBC, and she said, this is going to impact 100% of jobs. And that struck me, because she wasn't saying it'll impact a lot of jobs, many jobs, it'll have a big impact. She was getting really specific: 100% of jobs. And I think the...

the work that's going to need to be done is how do we reassess what work is?

And how do we reassess the word intelligence? Does that need to be redefined? And what is thinking? What is thinking? What is intelligence? What is the nature of work? And I think these are going to be very hard questions to answer for years to come. And I do think that there'll be a lot of jobs that will diminish.

I don't understand why people get upset about that, because technology has been altering the economic landscape for centuries. Nobody today makes a good living building wooden sailing ships. You know, this isn't news, that technology always impacts the way things are done. The IRS, and don't quote me on this, people feel free to say, Jack, that's not right, but my understanding is the IRS is proactively thinking about where tax revenue is going to come from

Jack Heslin (29:49.476)
25 years from now. If more and more jobs are done by a bunch of LLMs, do we need accountants? Do we... I'm sorry, I think you were about to say something there.

Joe Crist (30:04.599)
My brain right now is spinning thinking about this. Like, that's a super good question, right? Because it's going to, I mean, it is, it's already impacting the workforce. And yeah, like, tax. I never even thought about tax revenue.

Jack Heslin (30:10.212)
Yeah, it is.

Jack Heslin (30:18.98)
I know people who, yeah, apparently, and again, I don't want to say this is firm, but my understanding is somewhere in the IRS, they're looking 20, 25 years down the road, saying, hold on. If more jobs become automated, because AI at a basic level can do bookkeeping, it can do accounting, it could sell you a car. Really?

I mean, it could. Then what jobs do those people have? And I'm not one of those, my God, we'll all be unemployed. I'm not that guy. But right now, we don't know what jobs are going to come. And if people aren't working, they're not paying taxes. And then where does that money come from? My guess is, and this is just a guess,

Joe Crist (31:11.511)
Yeah.

Jack Heslin (31:15.524)
that the nature of taxation will change drastically to be more output-oriented. What is the productivity of a particular process? And maybe that's where it ends up. Not to digress too much, but I was trying to explain this to my son years ago. I said, let's just say mom and I bought two cars. One stays in the driveway all the time, the other one we drive. We paid the same property tax on the cars. My guess is at some point,

taxes will have to be more based on usage, and that car in the driveway doesn't get taxed as much. I think as automation comes more into the production processes, AI becomes more enabled, and maybe people aren't working as much, and that's not a prediction, then we're gonna say we need to change the tax structure. We're gonna have to, I think. Feel free to say, Jack, you're wrong.

Joe Crist (32:10.723)
That is so profound. I never thought of that, but it makes perfect sense. And the other challenge that comes with this is when you don't adopt AI. So as I said before, people shouldn't be scared of AI. They should be scared of people who know how to leverage it. If you're having AI take care of low-skilled jobs, really any job you can train AI on, really process-dependent jobs that AI can do,

think about the level of unemployment that could create, and how it raises the bar for people. So there's two things I can see happening, and this is where it gets a little dark. It's where you have people who can't get jobs. So unemployment is going to spike, which carries a pretty large

Joe Crist (33:06.701)
group of problems with it, right? If you are unemployed, things like crime increase, sickness, disease, all sorts of social problems across the board happen when people really just can't afford to live. But it also creates this opportunity of, well, now you have what's called creative destruction, when new technology changes the way we do things, the way we live, our economy, and even how government works,

where people are now having to find new opportunities. So it creates new industries as well. So it's a double-edged sword, right? People get hurt, right? Because it's like, well, this thing has now changed the way of life. And those who will adopt it, or sorry, adapt, can adapt to the new way of living and the new standard that we have as human beings, right? You're either going to jump on the bandwagon and succeed,

or you're not and you're going to fall behind and you're going to suffer for it.

Jack Heslin (34:04.22)
And I think it's up to each individual's proactivity. You know, you're either going to be proactive or you're not. You were proactive in getting through college or a trade school or going into the military. You were proactive in getting a job. Well, that's only going to become more so. I don't think any of this is really easy. I do think it's stressful.

Joe Crist (34:26.755)
Absolutely.

Jack Heslin (34:32.824)
But what I've said to people who have kids or grandkids who are at that high school or college age, get them to pay attention to this. Because this is a huge question mark. And I'm not ready to say this will be the most transformative technology since the printing press. But I think it probably will go in that direction.

I'm 61 years old. Am I going to be the one to figure out the right role of this in society? No, I don't think I will be, but my son will. It's that generation, the ones who are coming into their careers, who will say, okay, well, how do we use this? They'll be the ones to really determine the future path of it, I think, because they'll come into their lives with it.

Joe Crist (35:25.793)
Yeah, absolutely. All right, so I do have one last question for you. This is definitely my favorite question to ask. So, you've been in the game for a while, right? Whether it be sales, whether it be the AI Conversation, everything you've done. What piece of advice would you give to the audience to walk away with today? It could be anything at all.

Jack Heslin (35:46.09)
Anything at all. You know, the pressure, when you have all this gray in the beard, is that you're supposed to say something full of wisdom. Look at all the wisdom here. You're not there yet. Trust me, you'll get there. It happens.

Joe Crist (36:02.123)
I'm sorry.

Jack Heslin (36:09.412)
I think the biggest challenge I've had in my life is walking that very fine line between staying with my strong beliefs on something and knowing when I should have listened to other people more.

And I can't give you the advice on how to do that. I can only say to you, be aware. Be aware. The world is incredibly complicated. You know, at 7 a.m. this morning, I was on an MS Teams call with a guy in Singapore and a guy in Ohio, and I'm in Delaware.

And the guy in Singapore looks at the world his way, and the guy in Ohio and I, I guess, are closer. But there's never been a time in history when it was this easy to communicate to the masses this cheaply. So opinions come at us from every point of view on every aspect of life, from marriage to kids to work to government to religion.

And some of those opinions are worth listening to and some of them are bullshit.

Jack Heslin (37:33.326)
So I think the challenge in the modern age will always be walking that fine line between holding on to what you believe to be true, but at the same time, you kind of got to be open-minded. And I don't think it's easy.

Joe Crist (37:50.208)
No, absolutely not. That makes a lot of sense, though. I think it's very smart. It's something I think we all struggle with, right? Because there's information everywhere, opinions everywhere. Yeah.

Jack Heslin (37:52.097)
If you say so.

Jack Heslin (38:04.654)
There's information everywhere telling you everything, telling you about how to improve your marriage, your kids, your work, how to believe in God, what your government, everything. A lot of it can be bullshit, but some of it's worth listening to. And you just gotta walk that fine line. It's hard.

Joe Crist (38:19.331)
Yeah.

You know, that's...

Joe Crist (38:26.275)
One of the things I was just thinking about as you were saying that, about the information everywhere, and something that's been pretty consistent in our conversation: with our beliefs, we're just going through the data, like AI is, trying to figure out what's right and what we're looking for. And we get it wrong too, just like AI,

because we only know what we're told. Human beings, we're learning creatures. So if the information we're getting is bad...

Jack Heslin (38:58.068)
And you just brought up a really important point: as human beings, we're learning creatures. But I think a lot of people sort of absolve themselves of the responsibility to keep learning. They want to cling to the set of beliefs that they've lived their life by. And, well, those beliefs don't apply.

Joe Crist (39:15.053)
Yes.

Jack Heslin (39:25.732)
Not today. They did 20 years ago. They did 50 years ago. They don't apply today. And this is hard. No one says this is easy. You have to walk that fine line between open-mindedness and believing what you believe. But be open-minded. It is a paradox. Well said. Well said.

Joe Crist (39:45.303)
Yeah, it's a bit paradoxical, right? Jack, thank you so much for joining us today. I learned quite a bit from you, and I definitely have some things I need to be thinking about in the future, especially when it comes to AI. Everybody, this is Jack Heslin from the AI Conversation. I'll definitely be posting a link in the description. Once again, Jack, thank you for joining us. I'm sure the audience learned just as much as I did.

Jack Heslin (39:57.326)
Thank you.

Jack Heslin (40:13.25)
Joe, thank you for the invitation. This was a lot of fun. Always happy to come on again sometime and let's keep in touch. You stay well.

Joe Crist (40:21.827)
Thank you, everybody.

