Transcript: Episode 7 - The Honest Conversation About AI Nobody Else Is Having
- Tech 4 Grown-Ups

Hey, welcome back to Tech 4 Grown-Ups. I'm really glad you're here today, because today we're doing something a little different. We're not doing a how-to, and we're not walking through settings or talking about a specific scam.
Today I want to have a real conversation with you about something that is everywhere right now, that affects almost everyone listening to this, and that almost nobody, and I mean nobody, is talking about honestly. We're talking about AI. The good, the bad, and the ugly.
Let's jump right into it. Let me start with what AI actually is. Not the commercials, not the hype.
What it actually is. Artificial intelligence, in its current form, is an extraordinarily powerful pattern-matching tool. Okay, just keep that in mind.
It gets trained on enormous amounts of human-generated information. Text, images, data, and it learns to recognize and reproduce patterns from that information at remarkable speeds. That's genuinely impressive, okay? Some of what AI can do right now would have seemed like science fiction maybe 15 years ago.
It can write a passable first draft of something, summarize a long document in seconds, recognize faces in photos, and detect certain patterns in medical imaging that human eyes miss. And it also has the ability to translate between languages in real time. These are real capabilities of AI, real value, and I want to be clear about that before I say anything else.
But, and this is a significant but, AI cannot walk into a room and read the tension between two colleagues who aren't speaking to each other. It cannot feel the difference between a patient who says they're fine and a patient who isn't. And it cannot draw on 30 years of watching how a particular industry actually works.
Not how it looks on paper, but how it actually functions with all the politics and personalities and the unwritten rules that never appear in any training manual. It cannot replace wisdom, and wisdom, in case anyone needs reminding, is mostly made of time. Okay, let's talk about what's actually good here because I'm not here to tell you AI is all bad.
That would be lazy and it wouldn't be true. So, for older adults specifically, there are real meaningful benefits happening right now. And I want to tell you about a woman in our community.
Let's call her Helen. Okay, Helen was diagnosed last year with a rare autoimmune condition. Her doctor, excellent as he was, had limited experience with this particular combination of symptoms.
So, Helen, with the help of her son-in-law, used an AI tool to research her condition. She cross-referenced her symptoms with the latest clinical literature, and she was able to walk into her next appointment with a list of questions that her doctor described as, and I'm quoting here, "the most informed questions I've received from a patient in years."
So, Helen didn't replace her doctor. She supplemented him. She showed up as a more informed and empowered patient, and she got better care because of that.
That is AI working for a person, not on them. That's the version I believe is worth celebrating. And there are others.
AI-powered hearing aids that adapt in real time to background noise. Fall detection systems sophisticated enough to tell the difference between sitting down quickly and actually falling. Translation tools that let grandparents communicate with grandchildren who grew up speaking a different language.
This is real value and real improvement in real lives. This part of the story is true, and I think it's important to say so. Now, here's where I need you to really listen.
A few weeks ago, Oracle, one of the largest technology companies on the planet, laid off between 20,000 and 30,000 employees in one shot. A lot of them got a 6 a.m. email, and that was it. That was career over.
The stated reason? AI. The technology, they said, had made many of these positions unnecessary. But let's look at that claim carefully.
The positions being eliminated were held overwhelmingly by experienced, mid-career to senior professionals: architects, program managers, security specialists, people with decades of institutional knowledge that exists nowhere in any database, that cannot be scraped from any website, and that took entire careers to build. AI made these positions unnecessary?
Okay? No, that's not true. AI provided a convenient explanation for a decision that was really about cost, about replacing expensive, experienced, and legally protected employees with cheaper alternatives. And while Oracle was handing out those 6 a.m. termination emails, they were simultaneously filing thousands of applications to bring in cheaper workers from overseas.
So, the work still needed to be done. They found a cheaper way to do it, and AI gave them the story to tell the public. Here's what bothers me most about this.
This isn't new. Companies have been eliminating experienced workers for decades using whatever justification happened to be culturally available at the time. In the 90s, it was restructuring.
In the 2000s, it was outsourcing. And in the 2010s, it was digital transformation. But now, here in 2026, it's AI.
The justification changes. The pattern does not. And the people who pay the price are almost always the same people.
Experienced, older workers whose decades of accumulated knowledge are suddenly declared obsolete by the very companies that spent years benefiting from it. I want you to think about that. Now, here's the thing I really want to say: the ugly part.
The part I don't think enough people are saying out loud. The narrative that AI is coming for all of our jobs. That within a few years, machines will be able to do everything a human does, rendering entire categories of experience worthless.
That narrative, it serves a very specific group of people. It serves the companies using it as justification for workforce reductions. It serves the investors who profit from those reductions.
And it serves the technology vendors selling AI products to nervous executives who are afraid of being left behind. It does not serve you. And it is not entirely true.
AI in 2026 is a powerful tool with real limitations that its most enthusiastic promoters have a very strong financial incentive not to discuss publicly. I wonder why. It hallucinates.
It confidently produces false information and presents it as fact. It has no judgment, no ethics, no understanding of context, and no capacity to be held accountable. No ability to do the kind of nuanced human decision-making that the most important roles actually require.
A 62-year-old ICU nurse with 30 years of experience brings something to that bedside that no algorithm can replicate. A 58-year-old teacher who has spent three decades learning how children learn knows how this specific child, in this specific classroom, on this specific Tuesday morning, is processing the world. That does not have a machine equivalent.
The experience you have accumulated over a lifetime is not obsolete. It's irreplaceable. And anyone telling you otherwise has something to gain from you believing it.
Now I want to bring in something that I think about a lot when I'm watching all of this unfold. The Stoic philosophers, writing in a world of completely different technologies but absolutely identical human nature, returned again and again to one warning: be careful of things that arrive dressed as progress.
Now Seneca watched Rome dazzle itself with engineering marvels and architectural wonders while its institutions quietly rotted from within. He wrote, with the particular weariness of someone who had watched this pattern before, that the problem was not that people had new things. The problem was that they had mistaken new things for better things.
Now Marcus Aurelius spent his entire reign under pressure to react, to act dramatically, to be seen doing something in response to every crisis. He almost never did. His great discipline, the thing that made him arguably the most effective leader in Roman history, was the ability to look past the performance of urgency and ask the plain quiet question underneath everything.
What is actually true here? What is actually happening? And who benefits from my believing the story I'm being told? Now that question asked about AI in 2026 is not a comfortable one. Who benefits from you believing that your decades of experience are now worthless? Who benefits from an entire generation of experienced workers accepting their own obsolescence without question? The Stoics would not have been afraid of artificial intelligence. They would have been deeply skeptical of the humans making claims about it though.
That skepticism is not pessimism. It is clarity. So what do you actually do with all of this? First, I want you to use AI.
Genuinely use it. The tools are real. The benefits for individuals are real.
And there's no wisdom in refusing them out of principle. Use AI to research your health. Use it to draft things.
Use it to learn. These are real improvements in daily life. Now second, do not accept the story that your experience is obsolete.
If you're still in the workforce and feeling the weight of this narrative, use it as motivation to make your irreplaceable value more visible. Document what you do. Show people what decades of judgment actually produces that no algorithm can.
Third, ask the plain question. When a company or a politician or a technology vendor tells you that AI is the reason for a decision that happens to benefit them financially, ask Marcus Aurelius's question. What is actually true here? Who benefits from me believing this? The answer will almost always be clarifying.
And fourth, stay in this conversation. The people building AI and deploying it, they do not look like most of the people listening to this podcast right now. That is not an accident.
The antidote to being left out of a conversation is to be in it anyway. Loudly. With evidence.
With the particular authority that comes from having watched the world long enough to recognize a pattern when you see one. So that's what I've got for you today. I know this one was a little different, a little heavier maybe.
But I think you deserved the honest version of this conversation, not the sanitized, everyone wins, isn't technology wonderful version. You're smart. You've been navigating a complicated world for a long time.
You don't need to be protected from difficult ideas. You need someone to lay them out clearly and trust you to do something with them. That's what we try to do here.
Now, if this episode sparked something for you, an opinion, an experience, a question, I really want to hear it. Leave a comment on the blog post or drop us a message here or find us on Facebook. This is a conversation, not a broadcast.
And if someone in your life needs to hear this, share it with them. Forward it. The more people who are thinking clearly about this, the better off we all are.
Now, thank you for listening to today's episode on Tech 4 Grown-Ups and I'll talk to you next time. Have a great day.