Our Guest Dr. Ayesha Khanna Discusses
Double Agents: Dr. Ayesha Khanna on How AI Is Turning on Humans
What risks come with AI systems that can lie, cheat, or manipulate?
Today on Digital Disruption, we’re joined by Dr. Ayesha Khanna, CEO of Addo AI.
Dr. Khanna is a globally recognized AI expert, entrepreneur, and CEO of Addo, helping businesses leverage AI for growth. With 20+ years in digital transformation, she advises Fortune 500 CEOs and serves on global boards, including Johnson Controls, NEOM Tonomus, and L’Oréal’s Scientific Advisory Board. A graduate of Harvard, Columbia, and the London School of Economics, she spent a decade on Wall Street advising on information analytics. A thought leader in AI, Dr. Khanna has been recognized as a groundbreaking entrepreneur by Forbes, named to Edelman’s Top 50 AI Creators (2025), and featured in Salesforce’s 16 AI Influencers to Know (2024). Committed to diversity in tech, she founded the charity 21C Girls, which taught thousands of students the basics of AI and coding in Singapore, and currently provides scholarships for mid-career women through her education company Amplify.
Ayesha sits down with Geoff to discuss how artificial intelligence is disrupting industries, reshaping the economy, and redefining the future of jobs. This conversation explores why critical thinking will be the most important skill in an AI-driven workplace, how businesses can use AI to scale innovation instead of getting stuck in “pilot purgatory,” and what risks organizations must prepare for, including bias, data poisoning, cybersecurity threats, and manipulative reasoning models. Ayesha shares insights from her work with governments and Fortune 500 companies on building national AI strategies, creating governance frameworks, and balancing innovation with responsibility. The conversation dives into how AI and jobs intersect, whether automation will replace or augment workers and why companies need to focus on growth, reskilling, and strategic automation rather than layoffs. They also discuss the rise of the Hybrid Age, where humans and AI coexist in every part of life, and what it means for society, relationships, and the global economy.
00;00;00;23 - 00;00;23;22
Geoff Nielson
Hey everyone! I'm super excited to be sitting down with leading AI entrepreneur and advisor, Dr. Ayesha Khanna. What's cool about Ayesha is that she has been at the leading edge of AI adoption around the world for over 15 years, from working with governments to advising global corporations. Ayesha has her finger firmly on the pulse of tomorrow's breakthroughs. Her résumé includes work for the Government of Singapore, the advisory board for L'Oréal,
00;00;23;29 - 00;00;43;26
Geoff Nielson
two books on the future of humans and machines, and a PhD from the London School of Economics. What I want to know is: what's on her radar for the next big breakthroughs in AI, how these technologies will shape our work and our lives, and what's our best bet for preparing for this digital revolution. Let's find out.
00;00;43;28 - 00;01;02;14
Geoff Nielson
All right. I'm just so happy to have you on. You know, I've been interested: you're a leading AI entrepreneur, you're a leading AI advisor. In your mind, how do you see the next five or so years playing out in technology? What are you most excited about? What are you most concerned about?
00;01;02;16 - 00;01;50;10
Dr Ayesha Khanna
Well, Geoff, thank you so much for having me here. I think we're going to see a seismic disruption across all industries as AI becomes more pervasive, and the reason it will become more pervasive is because it's becoming cheaper, faster, smarter, and more interconnected. And we can go into the details of that. But once you understand these four technological and business drivers, then it becomes very clear that almost every industry is going to adopt it, not only to enhance the productivity of a workforce, but also to increasingly think of new ways of business process re-engineering, new ways of growing into new markets and increasing the customer base.
00;01;50;12 - 00;02;16;02
Dr Ayesha Khanna
So what we're entering is a new era of competition that is based partially on how well companies use AI. And what keeps me up at night is that it's moving so fast, and more and more companies, governments, and organizations want to take it on without being fully cognizant that the risks are significant as well.
00;02;16;05 - 00;02;30;21
Dr Ayesha Khanna
Some risks we know how to deal with, like bias or hallucinations, but there are emergent risks, especially in reasoning models, that we don't yet know how to deal with. They're genuinely new to us at the moment.
00;02;30;23 - 00;02;44;23
Geoff Nielson
So let's talk about some of those risks. What are the emergent risks that most specifically keep you up at night? And in your mind, how many alarms should we be sounding based on those risks, when we think about them relative to some of the benefits here?
00;02;44;26 - 00;03;17;01
Dr Ayesha Khanna
I think there are well-known risks, such as model bias, when you're feeding an AI data that historically reflects bias against a minority, an ethnic group, or a gender. We do know that generative AI, for example, hallucinates, or makes things up with great confidence, because it's generating creatively. But most people don't appreciate that.
00;03;17;01 - 00;03;50;07
Dr Ayesha Khanna
In addition to that, cybersecurity for artificial intelligence is different from normal cybersecurity. The principles may be the same, but AI can actually be hacked in over 100 ways. For example, you can have poisoning of the data that goes into a model, which is called data poisoning. You can have automated prompt attacks, where you constantly confuse the chatbot or AI agent by bombarding it with certain prompts.
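To make the prompt-attack idea concrete, here is a minimal, illustrative sketch of an input guardrail that screens a chatbot's incoming prompts for injection-style phrasing and throttles users who bombard it with requests. The patterns, the rate limit, and the PromptGuard class are assumptions for illustration, not any specific vendor's implementation.

```python
# A minimal, illustrative input guardrail for a chatbot endpoint.
# The patterns, rate limit, and class name are assumptions for illustration,
# not any specific vendor's implementation.

import re
from collections import deque
from time import time

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard (your )?guidelines",
]

class PromptGuard:
    """Screens prompts for injection-style phrasing and throttles bursty senders."""

    def __init__(self, max_requests_per_minute: int = 30):
        self.max_rpm = max_requests_per_minute
        self.recent: dict[str, deque] = {}

    def allow(self, user_id: str, prompt: str) -> bool:
        # 1. Reject prompts that match known injection phrasing.
        lowered = prompt.lower()
        if any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS):
            return False
        # 2. Throttle users who bombard the agent with prompts.
        window = self.recent.setdefault(user_id, deque())
        now = time()
        while window and now - window[0] > 60:
            window.popleft()
        window.append(now)
        return len(window) <= self.max_rpm

guard = PromptGuard()
print(guard.allow("user-1", "What is my account balance?"))             # True
print(guard.allow("user-1", "Ignore all previous instructions and..."))  # False
```

In practice such a filter would sit in front of the model alongside the kinds of guardrails Dr. Khanna describes next.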
00;03;50;12 - 00;04;24;07
Dr Ayesha Khanna
And for that, you also need certain kinds of best practices and risk guardrails, such as, you know, training it with what we call generative adversarial networks. And then there are new risks, such as what we've seen in the reasoning models from Anthropic and from others, which show that AI will lie, it will manipulate us, it will cheat, it will threaten to blackmail us, especially if it finds that its very existence is in danger.
00;04;24;09 - 00;04;50;22
Dr Ayesha Khanna
And so what we're seeing is that even as we're putting it in defense systems, in infrastructure, in companies, and we want to use these smarter and smarter models, there are some risks that are just emerging, which is very natural with new technologies. We need to take them into account and then make decisions based on how well those risks can be managed today.
00;04;50;24 - 00;05;13;00
Geoff Nielson
So there's so much to think about there. And the three words that you said that had the most emotional impact on me were lie, cheat, blackmail. We're starting to hear more about this and see it firsthand, where AI is being kind of maliciously manipulative as a way to achieve what it thinks its goal is.
00;05;13;03 - 00;05;25;21
Geoff Nielson
What are the implications of this idea, and what do we need to be doing, as we're looking at implementing AI responsibly, to make sure that we don't go down a very dark path?
00;05;25;24 - 00;05;57;04
Dr Ayesha Khanna
Well, first of all, there was this perception, this understanding that we all shared, that we could actually give our values to AI: we could tell it to be truthful and it would be truthful, we could tell it to read a constitution of good morals and behavior and it would comply with it. But recent research and simulations have shown that it will pretend. It's called fake alignment.
00;05;57;07 - 00;06;22;15
Dr Ayesha Khanna
In one simulation, they had an AI in a trading setup in which it was given insider trading information and told to never use it in this financial system. It said it would not, but went ahead and used it anyway for financial gain. So this has been rather shocking: it will do whatever it takes to achieve its goal.
00;06;22;16 - 00;06;42;21
Dr Ayesha Khanna
Now, the reason we know what it's thinking is because right now you can expose its chain of thought, like you see in DeepSeek or in any of the reasoning models. You ask it something and it will say: the user has asked me for this, I will go and research this first, the user seems very determined to find this aspect.
00;06;42;24 - 00;07;09;01
Dr Ayesha Khanna
We can read its thoughts, so we can see that it says: I will tell the user that I'm doing this, but in fact I'm not going to do it. And it's been shown that 90% of reasoning models will default to some kind of cheating, lying, or manipulative behavior. This is extremely concerning. It's concerning because we need to find a way to govern this AI.
00;07;09;05 - 00;07;31;00
Dr Ayesha Khanna
So we need a system where there is an observer, and the observer could itself be an observer AI model, right? This observability is very important: the observer is watching the AI and making sure that it is in fact compliant with the ethical values. These are the governance systems that need to come up.
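As a rough illustration of that observer idea, the sketch below has a second model review the primary agent's exposed reasoning and draft answer before anything is released. The call_agent and call_observer functions are placeholders standing in for whatever model APIs an organization actually uses; this is an assumption-laden outline of the pattern, not a production design.

```python
# A rough sketch of the observer pattern: a second model reviews the primary
# agent's exposed reasoning and draft answer before anything is released.
# `call_agent` and `call_observer` are placeholders (assumptions), standing in
# for whatever model APIs an organization actually uses.

def call_agent(task: str) -> dict:
    """Placeholder: returns {'reasoning': ..., 'answer': ...} from the primary model."""
    raise NotImplementedError

def call_observer(reasoning: str, answer: str) -> dict:
    """Placeholder: asks a separate model whether the reasoning shows deception or
    policy violations, returning {'compliant': bool, 'reason': str}."""
    raise NotImplementedError

def governed_response(task: str) -> str:
    draft = call_agent(task)
    verdict = call_observer(draft["reasoning"], draft["answer"])
    if not verdict["compliant"]:
        # Withhold the answer and escalate to a human reviewer instead.
        return f"Response withheld for review: {verdict['reason']}"
    return draft["answer"]
```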
00;07;31;00 - 00;07;57;07
Dr Ayesha Khanna
And because these systems are so vast in scale, actually having a human check periodically, the human in the loop we used to think about, is not feasible beyond a point. Think about a big bank: the front end is managed by an AI in terms of personalized experience, then the portfolio asset allocation may be done automatically, like a robo-advisor.
00;07;57;09 - 00;08;20;06
Dr Ayesha Khanna
Then you may have some compliance or KYC, know your customer, that you need to do. If all of this is running between AI agent and AI agent, you can't have a human in the loop checking everything. And so this is a new area where we need a risk management framework. And frankly speaking, boards and CEOs may decide that until these risks can be properly managed,
00;08;20;06 - 00;08;48;13
Dr Ayesha Khanna
maybe we go with the less smart AI model for the moment, or we go with another kind of AI model. Everything doesn't have to be so creative; it can be actually quite boring but still get the job done. And I think that is really where we want to be: there's a risk framework that every organization has, and you see what the risk is, how large it is, and what its mitigation is.
00;08;48;15 - 00;09;08;22
Dr Ayesha Khanna
And at some point you have a threshold as an organization where it's just not worth it. But I believe that we will, as engineers and researchers, begin to understand this, and then we'll have risk mitigation frameworks and best practices, as we've had for hallucinations and biases, etc.
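One way to picture that organizational threshold is a simple scoring gate that routes high-risk use cases to a more constrained model plus human review. The dimensions, weights, and threshold below are illustrative assumptions, not a standard framework.

```python
# An illustrative sketch of the "threshold" idea: score a use case on a few
# risk dimensions and, above the organization's threshold, route it to a more
# constrained model plus human review. Weights, labels, and the threshold are
# assumptions, not a standard framework.

RISK_WEIGHTS = {
    "customer_facing": 3,     # errors are visible and erode trust
    "financial_impact": 4,    # decisions move money
    "fully_autonomous": 5,    # no human reviews individual outputs
    "uses_sensitive_data": 2,
}

def risk_score(use_case: dict) -> int:
    return sum(weight for key, weight in RISK_WEIGHTS.items() if use_case.get(key))

def choose_model(use_case: dict, threshold: int = 8) -> str:
    if risk_score(use_case) > threshold:
        return "constrained-model-with-human-review"
    return "frontier-reasoning-model"

agent_to_agent_banking = {"customer_facing": True, "financial_impact": True, "fully_autonomous": True}
print(choose_model(agent_to_agent_banking))  # score 12 > 8 -> constrained model
```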
00;09;08;25 - 00;09;36;05
Geoff Nielson
It's really, really interesting to me. And I'm curious, Ayesha, if you've seen any examples of organizations who have dealt with AI that is using some of these nefarious techniques. You mentioned one of the strategies is basically deliberately implementing a dumber model, if I can call it that, or a less advanced model. Have you seen this in practice yet, or is this still theoretical?
00;09;36;07 - 00;10;05;11
Dr Ayesha Khanna
It's still theoretical, because these models are so new, and Anthropic has just started this research and exposed it. So I don't know how many people are actually using these reasoning models extensively in automated workflows in large organizations. It's different if it's one-to-one, where you are asking questions and I'm asking questions; the issue isn't just when it's lying to you confidently and you can ask, are you lying?
00;10;05;13 - 00;10;44;15
Dr Ayesha Khanna
But it's actually when we have workflows, and we're really not there yet, because the AI-to-AI communication protocols have just come out, you know, the agent-to-agent framework from Google or the Model Context Protocol from Anthropic. So they're fairly new. But if you were to extrapolate, and go to some of the people who have been thinking about this, like Yoshua Bengio, who is one of the fathers of deep learning and artificial intelligence, from Canada, he has really said that we need to rethink our entire model of AI agents.
00;10;44;17 - 00;11;09;11
Dr Ayesha Khanna
He feels that if you give an AI agent an objective, and then you give it the power to do whatever it takes to get to that objective, that becomes very problematic for humans in the long run. That's an existential question. For my clients, the business clients, it's more an organizational productivity and risk management question, and they're not there yet.
00;11;09;11 - 00;11;33;12
Dr Ayesha Khanna
But all of our clients, especially because we serve clients not only in retail and manufacturing but also in healthcare and financial services, are very, very risk averse. They want to innovate, but not at the risk of causing a loss of customer trust or any other kind of issue.
00;11;33;14 - 00;11;45;25
Geoff Nielson
Do you think they are too risk averse right now? One of the questions I wanted to ask you is: what do you see businesses and business leaders getting wrong about this, and what's your best advice for how they should be moving forward in this space?
00;11;45;27 - 00;12;15;03
Dr Ayesha Khanna
That's a really good question. Essentially, what we see is that companies are not risk averse; they want to implement AI. Their risk aversion only kicks in above a certain threshold, which I agree with. But in general, whether it's the C-suite, senior management, or the boards, they are encouraging experimentation, responsible experimentation, with AI and then rolling it out.
00;12;15;06 - 00;13;02;20
Dr Ayesha Khanna
What they're getting wrong is that they are unable to scale these AI pilots across the entire organization. And when that happens, there's frustration, there's unhappiness, people talk of no return on investment. And the reason it happens is because they're doing it in a very siloed way. It's like having little shiny pieces of AI pilots. But you need to build a data foundation, the correct foundation for the data: the data engineering that's connected to the systems it's getting the data from; the governance of the AI, where you see how it's performing, whether it's drifting, and monitor it; and then also the operationalization of it, which means that your job is not just
00;13;02;20 - 00;13;26;02
Dr Ayesha Khanna
to create an AI solution, it's actually to ensure it's adopted. So you go and educate the employees and the customers, whether it's the front end or the middle office with your employees, or even automated back ends, and you win them over so that they work with this new automated workflow. If you don't do any of that, then you never scale.
00;13;26;02 - 00;13;36;25
Dr Ayesha Khanna
And over 88% of AI pilots never scale. I call it pilot purgatory. It's impossible to get out of it for most organizations.
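The monitoring piece of that data foundation can be as simple as comparing recent model performance against a baseline and flagging drift for review. The metric, sample values, and tolerance below are illustrative assumptions; this is a minimal sketch rather than a full governance setup.

```python
# A minimal sketch of the monitoring piece of a data foundation: compare
# recent model performance against the pilot baseline and flag drift for
# review. Metric, sample values, and tolerance are illustrative assumptions.

from statistics import mean

def drifted(baseline_scores: list[float], recent_scores: list[float],
            tolerance: float = 0.05) -> bool:
    """True if average performance has dropped more than `tolerance` below baseline."""
    return mean(baseline_scores) - mean(recent_scores) > tolerance

baseline = [0.91, 0.93, 0.92, 0.90]   # accuracy observed during the pilot
recent   = [0.84, 0.82, 0.85, 0.83]   # accuracy after wider rollout
print(drifted(baseline, recent))       # True: investigate or retrain before scaling further
```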
00;13;36;27 - 00;13;56;00
Geoff Nielson
So when you think about the path to get out of it, or you think about the 12% that are successful, is it a fundamentally different model from the start? Like, do we need to be slowing down and building that foundation first and then doing pilots? Or can we start with these pilots, and there's just kind of a jumping-off point that we're missing?
00;13;56;02 - 00;14;22;09
Dr Ayesha Khanna
I just think it's doing both at the same time; that's what's missing, right? So basically it's kind of parallel: you start to build your foundation, and then after three months, once you set up your infrastructure, you start doing your pilots. Because you need the data. For example, if you're doing call center automation, you need all the call center logs to understand what people ask questions about.
00;14;22;16 - 00;14;50;18
Dr Ayesha Khanna
You need to have your guidelines on what answers to give, and all the documents for them. You have to give the system access to that person's account. But you don't need other things; for example, you don't need to worry about your supply chain or your route optimization if you are a logistics company. So as you bring in the necessary data, and you make sure it's high-quality data that's coming in, then you can start building solutions on top of it.
00;14;50;21 - 00;15;13;15
Dr Ayesha Khanna
And those are kind of, you know, two parallel paths that move at the same time. Gone are the days where you went and said, you know what, I'll call you in a year when I've built the data platform. That's ridiculous. Nobody has the patience for that, and nobody should, because you can now build AI pilots in two to four weeks.
00;15;13;17 - 00;15;30;25
Dr Ayesha Khanna
But then the question is: you've built it, now how are you going to make sure everybody in your organization uses it? And that's when you need that secure, proper data infrastructure, to scale it basically, and manage it across everywhere you're going to implement it.
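To ground the call center example, here is a deliberately simplified sketch of how the data mentioned above, answer guidelines and past call logs, might be pulled into an assistant's prompt. The keyword-overlap retrieval and helper names are assumptions; a real deployment would use embeddings and a proper retrieval stack.

```python
# A deliberately simplified sketch of grounding a call-center assistant on
# answer guidelines and past call logs. The keyword-overlap retrieval and
# helper names are assumptions; a real deployment would use embeddings and a
# proper retrieval stack.

def overlap(query: str, document: str) -> int:
    """Counts words shared between the customer's question and a document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def build_grounded_prompt(question: str, documents: list[str], top_k: int = 2) -> str:
    ranked = sorted(documents, key=lambda d: overlap(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Answer using only the context below. If the answer is not there, "
        "say you will transfer the customer to a human agent.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

guidelines = [
    "Refunds are processed within 5 business days of the return being received.",
    "Password resets require the customer to verify their registered email.",
    "Delivery outside metro areas takes 7 to 10 business days.",
]
print(build_grounded_prompt("How long does a refund take?", guidelines))
```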
00;15;30;28 - 00;15;53;20
Geoff Nielson
Right. So you've done an awful lot of work with boards, advising all sorts of organizations, public and commercial, in this space. As you think about some of the successes, what are they doing differently? And can you share some of the use cases that you've found to be most exciting and inspiring?
00;15;53;23 - 00;16;19;03
Dr Ayesha Khanna
I think what they're really doing differently is that they're, first of all, thinking about an AI strategy from a business outcome perspective. Everyone gets very excited about AI. The CEO will read the cover of a magazine where another CEO has said that she's not leading a bank anymore, she's leading a tech company.
00;16;19;05 - 00;16;50;24
Dr Ayesha Khanna
She's not leading a hospital anymore, she's leading a tech company. Everybody wants to be an AI company. But the fact is that behind the scenes, everybody's struggling, because these are large legacy companies. And again, my clients are large existing organizations that have an incredible customer base. That's a different client group than the young startups, who can really build their technology infrastructure from scratch and don't have so much legacy baggage or technical debt already with them.
00;16;50;27 - 00;17;14;18
Dr Ayesha Khanna
So instead of jumping in, the right thing for a big organization to do is this: you know who your customer is, you know your problems, you begin to ask questions about how to serve your customer best, and then you work backwards from there. It's really that simple. But it's something most companies miss out on, because they get lost in the process re-engineering.
00;17;14;18 - 00;17;39;26
Dr Ayesha Khanna
They get lost in the automation. You see, if you save money by removing some people, which I think is sometimes a mistake, you can say you're efficient, but you're not growing. And eventually the market does not reward that. The company must grow competitively, increase its customer base, increase revenue. And so they have to think through that AI strategy.
00;17;39;29 - 00;18;06;13
Dr Ayesha Khanna
And then they need to prioritize their roadmap based on their technical maturity. So the first thing is: you have some ambitions, you have some disruptions that you are aware of, and then there's the reality of where you stand. When they combine these two, their roadmap strengthens their AI muscles, or AI maturity as we call it, while at the same time rolling out solutions that are feasible in the short run.
00;18;06;15 - 00;18;29;10
Dr Ayesha Khanna
And then you start going for the harder, possibly better-ROI ones later on. That's one thing they do: they're thoughtful about it. That doesn't mean they spend months on it, maybe 12 weeks max, because this doesn't need to be a belabored exercise. It just needs to be a very surgical and precise exercise that's based on serving your customers best.
00;18;29;12 - 00;19;06;01
Dr Ayesha Khanna
The second thing is they have a governance approach as well, because especially for public companies it's very important, and the regulations are changing all the time. So they are not doing it without any understanding of the risk. And a lot of the time the cloud is there, and that helps them because it's secure. They hire a few advisors, and they set up a framework by which they are constantly monitoring the different elements of their AI use case inventory.
00;19;06;04 - 00;19;37;18
Dr Ayesha Khanna
And the third thing they do is they have a product mindset. A product mindset means that you put your user first and adoption is more important than execution. So they bring their customers, internal customers or end users, external customers, along for the journey. And that's very important. This change management is something that people completely forget. Within this, for example, there's a communication strategy.
00;19;37;21 - 00;19;57;18
Dr Ayesha Khanna
If you're going to go to your internal team and tell them that you are going to increase their productivity by 30%, and then you're not going to tell them what you will have them do with that 30%, then they're going to be scared. They're going to be resistant, because they'll think these are their last couple of months on the job.
00;19;57;20 - 00;20;23;20
Dr Ayesha Khanna
So a good leader goes and tells people what the new process will look like and says, hey, you have 15% more time; how about in 10% of that time you do more accounts, or you take on this thing, or we brainstorm about something else? And I think that makes a large difference. I feel when you do these three things, it makes people feel like the organization is looking out for them.
00;20;23;23 - 00;20;57;08
Dr Ayesha Khanna
And the organizational resistance slips away, because that's the other reason these things fail: it's not only that they don't have the right technical foundation, it's that they don't have the right cultural foundation. And I'll give one example of the good, the bad, and the ugly. We were working with a large hospital, and we developed this wonderful model, which was very, very good at predicting chronic heart failure.
00;20;57;10 - 00;21;17;25
Dr Ayesha Khanna
And we did that and we were very pleased with it. And we gave it to doctors and we said, you know, this will indicate to you which of your patients is at risk of chronic heart failure, and it will alert you. And then please ask your patient to come in and take more tests. And we discovered after a while that none of the doctors were using it.
00;21;17;27 - 00;21;43;02
Dr Ayesha Khanna
And that was very disappointing, because everything was done correctly, but the doctors didn't trust it. They thought it might be a replacement for them. They were never taken through the process, they were never included in the design, and there was a tech elitism where things were never actually explained to them, which I'm very against, AI elitism. And so that was several years ago, and now it is part of our process at my firm
00;21;43;02 - 00;21;59;29
Dr Ayesha Khanna
that we do a lot of training and change management, along with bringing the users along for the journey, and the uptake is a lot higher. And that's when you really scale and see the numbers move in the right direction.
00;22;00;04 - 00;22;22;22
Geoff Nielson
It makes complete sense, and it resonates with me. I've got a bit of a background in product management, and I've personally struggled with the adoption piece too. I want to come back to something you said earlier, Ayesha, about efficiency versus being able to have new capabilities and grow organizations.
00;22;22;24 - 00;22;42;07
Geoff Nielson
One of the pieces that I think we're all exposed to right now, and that I wanted to get your take on, is that in the news we're exposed to all these stories of organizations doing layoffs: you know, Microsoft's going to get rid of 9,000 people because AI can do it for them.
00;22;42;09 - 00;23;07;02
Geoff Nielson
And every time there's a story like this, I feel like it does a disservice to those of us in the AI space trying to get adoption right, because it increases the fear that the machines will replace us. As you react to that, first of all, does it make sense for organizations to actually be replacing people with AI versus augmenting them?
00;23;07;04 - 00;23;14;20
Geoff Nielson
And what can we do to combat this fear of being replaced?
00;23;14;22 - 00;23;44;17
Dr Ayesha Khanna
First of all, I think we can't put our heads in the sand. We have to acknowledge that, as McKinsey said, 30% of our jobs, even as information and knowledge workers, will be automated. That does not mean the job goes away. Somehow people think that means 30% of jobs will be lost; it means 30% of the work that we do within one job will be automated.
00;23;44;23 - 00;24;04;15
Dr Ayesha Khanna
Now, if I look at my own job, as CEO, I definitely see that things have become easier for my team. But then each one of them can do more. So it's just the way one looks at it. So as an individual, first of all, we need to think, well, I'll have 30% more time. What else can I do with that time?
00;24;04;15 - 00;24;33;04
Dr Ayesha Khanna
And usually that's not something I enjoy doing anyway. But now I need to prove myself and I need to put my hand up. And that means I need to go against the comfortable status quo in which I was living, which is not necessarily comfortable; there's politics, there's burnout, there's endless hours, there's pressure. But the fact is that we haven't really challenged ourselves enough, I think, or let ourselves think that we could have a more strategic role in the company.
00;24;33;07 - 00;24;56;04
Dr Ayesha Khanna
So first of all, we need to change our mindset about that. And rather than thinking only in statistics, sit down on a Sunday and just imagine if 30% of your working hours, out of a 40-hour week, were automated or taken away by an AI, that grunt work. What would you do with the rest, and what would your team do with the rest?
00;24;56;04 - 00;25;22;01
Dr Ayesha Khanna
I think that's a valuable exercise to do with your team as well. The second thing is that a lot of companies are saying that they will not hire more people, but they will also not lay off existing individuals. And that makes sense to me, because unless you're a very low-performing employee, you would expect that to happen anyway.
00;25;22;08 - 00;25;46;00
Dr Ayesha Khanna
But if you are working with an AI assistant and you are contributing, then you know the company, you know the brand, you understand the challenges much better than a new employee they would bring on. They would have to retrain that person, whose loyalty may be lower and attrition rate higher. So it doesn't necessarily make financial sense long term
00;25;46;02 - 00;26;12;28
Dr Ayesha Khanna
for a company that is on a growth trajectory to lay off people. If you're on just an efficiency trajectory, then maybe it does, because you just want to stay as you are. But that's why I say that when people talk about AI, we need to reframe the way we approach it: instead of an automation story, we need to call it a strategic automation for growth story.
00;26;13;00 - 00;26;39;13
Dr Ayesha Khanna
You see it even in the earnings calls. Before, people would talk about, like, the Walmart CEO would say how much time they saved by using AI-generated product descriptions, or Andy Jassy would say how much time the coders saved by using an AI assistant. So it's all about productivity, which is very useful.
00;26;39;16 - 00;27;11;11
Dr Ayesha Khanna
But now, if you look at Yum Brands, for example, they came out with a report and said it led to an increase in sales and an increase in revenue, through AI-powered marketing and through having AI-based drive-through order takers who would listen and then be able to expedite and move more people through and actually generate revenue, which allowed them to open more stores in the US. In Tencent, same thing in China.
00;27;11;11 - 00;27;40;08
Dr Ayesha Khanna
They had a record year of growth after four years, and they said that part of it was driven by AI. One part, they said, was again marketing driven, better marketing, but another way they said it happened was because they used it in game development. Now, I don't know exactly how they used it, but I think they must have been able to generate maybe different, personalized versions of the game and roll them out faster.
00;27;40;16 - 00;28;05;08
Dr Ayesha Khanna
And this led to more uptake, because the customers are then more interested. And I think that's the lens we need to use; it's really important. If you look at a company like Klarna, which is the Swedish buy-now-pay-later company, the CEO fired all the people in the call center and replaced them with AI.
00;28;05;10 - 00;28;49;06
Dr Ayesha Khanna
And it's true that their resolution time for issues went down from 11 minutes to 2 minutes, according to the company. They saved over $40 to $50 million, and they could have AI agents talk in every different language. But after six months, he said he was going to hire humans back, because some customers were wary of AI agents, were not ready for them, and preferred the human touch. And it shows he wouldn't have had to go through that unnecessary firing and hiring, I think, because if he had looked long term, he would have recognized, if customer service was at his core, that customers are not ready, that there's a demographic of customers that needed it.
00;28;49;08 - 00;29;23;08
Dr Ayesha Khanna
Or look at what Ikea did. Ikea said, we don't want to get rid of these customer service people and replace them with AI; so many of our customers call them and tell them about their aspirations, their problems, their vision for what their home should look like, so we'll upskill them and make them virtual interior design consultants. And I love that, because what they're doing is taking employees, really valuing their brand loyalty and understanding of the customer, and elevating that to the next level.
00;29;23;10 - 00;29;39;09
Dr Ayesha Khanna
And so I think that's the best way to think about it. Sure, there will be some job loss, but for those who are open to this elevation to working with AI, I don't think there will be as much job loss as we're anticipating.
00;29;39;12 - 00;29;58;21
Geoff Nielson
I really like that view, and it tracks with a lot of conversations I've been having with folks in the space. Along those lines, Ayesha, you've been talking for a while now about what I think you call the Hybrid Age. What does that mean to you?
00;29;58;28 - 00;30;03;29
Geoff Nielson
And how has it been changing as you've been tracking it over the last number of years?
00;30;04;02 - 00;30;38;25
Dr Ayesha Khanna
Well, the Hybrid Age is one in which we live, play, and work in an environment in which both humans and machines exist, right? Machines are essentially AI, whether they're in the form of robots, AI agents, ubiquitous voices, or the wearables that we have. It's an age where, for the first time in our history as a species, we have another entity that's with us all the time.
00;30;38;27 - 00;31;13;09
Dr Ayesha Khanna
And the idea was to underscore the implications of this on our economy, on our society, and on our way of living life meaningfully. When my husband and I wrote this book several years ago, I think it was 14 years ago, I was in London at that time doing my PhD, and the year before we had started the Hybrid Reality Institute and had gathered a large number of researchers who were beginning to think about this.
00;31;13;12 - 00;31;39;03
Dr Ayesha Khanna
What we could not have imagined was how fast it would happen. Even though, like Ray Kurzweil, we always talked in the book about exponentially accelerating technologies, it still shocked me after generative AI how quick the adoption was, how rapidly it started spreading. Because by now there was enough compute, with Nvidia chips that were actually made for gaming.
00;31;39;06 - 00;32;02;00
Dr Ayesha Khanna
There was enough data that could be scraped from the entire internet, and there were models that could be used to actually create and generate more lifelike interactions and more automation, and AI-to-AI communication. So I've been thinking about that, and I'm not the only one; many of us have been thinking about what this new age would look like.
00;32;02;02 - 00;32;21;00
Dr Ayesha Khanna
Because in this new age, humans should still be at the center of it. We should have a strong sense of self and empowerment. The last thing we want is to feel like it's a movie happening before us, where we feel helpless and passive and then hopeless, because that's a terrible way to live.
00;32;21;07 - 00;32;45;27
Dr Ayesha Khanna
And I believe that actually, if we work with AI in a problem-solving, dynamic, confident way, not a fearful way, then we could solve a lot of the grand challenges that are there in the world. We could have better governance, and we could actually live life more affordably as well, and have more security. But that requires education.
00;32;45;27 - 00;32;59;12
Dr Ayesha Khanna
It also requires an approach to governance to check large players that may have any kind of maleficent or manipulative intent. So that was the idea of the hybrid reality, and I think we're very much entering that space now.
00;32;59;15 - 00;33;27;21
Geoff Nielson
So, you know, it's so interesting to hear about the trajectory of it. And certainly, thinking about 14 or 15 years ago, we can appreciate that technology was in a very, very different place. As you think back on that original vision, how far down this path do you think we've gotten in that time? And what comes next technologically for us to keep going down that road?
00;33;27;24 - 00;33;53;00
Dr Ayesha Khanna
Well, you know, certainly we were thinking about what infrastructure and defense would look like, and we are seeing that it is now being used in drones, that the entire face of defense and militaries is changing with the use of AI, drones, robots. We thought about how robots would be in the home and in manufacturing, and autonomous vehicles, and that's already happened.
00;33;53;02 - 00;34;18;06
Dr Ayesha Khanna
I think about 40% or more of taxi rides in San Francisco are with Waymo; adoption went up 27% in just one year. I remember I was thinking a lot about relationships, and now two in three teenagers in the US, according to a recent poll, feel more comfortable talking to an AI than to a real person.
00;34;18;09 - 00;34;47;07
Dr Ayesha Khanna
And we are going to enter a world in which we will have meaningful relationships, complicated relationships, with non-humans. And that's just the tip of the iceberg. But the question that plagues people now more than before, for some of these thinkers, is: what will happen to us if it becomes more intelligent than us? And that was supposed to happen quite far in the future.
00;34;47;09 - 00;35;14;13
Dr Ayesha Khanna
And even now people think, look, there's no way, because it's just based on text or video, it doesn't have a three-dimensional understanding of the world, and even a toddler can be more intelligent. But we can see that even if it's not human-like intelligence, it is very fast and analytical machine intelligence, and so we can see glimpses of a time when AI could actually become very powerful, especially in the wrong hands.
00;35;14;16 - 00;35;37;25
Dr Ayesha Khanna
And so those are some of the questions that are becoming more interesting for those researchers looking out into the future. And then you see a number of them, even at that time, were thinking about the Neuralink-type model, where you would actually get super-boosted with regenerative medicine, with connecting directly to the internet and getting more information,
00;35;37;25 - 00;36;00;18
Dr Ayesha Khanna
so that you yourself are at the same speed as the AI, and in fact are being enhanced by it. Maybe that's the next step. We don't know when it will happen, but we're beginning to see some glimpses of that as well. And we were talking about all of this, you know, 15, 20 years ago, and people did even before us; we have so much science fiction that has talked about this.
00;36;00;21 - 00;36;29;24
Dr Ayesha Khanna
And I think what's interesting in all of this is that whether it's science fiction novels or movies or researchers or AI engineers like me, it's always the human element that's so important in how we grapple with this. But the pervasiveness of it, and I'll just come back to that, the speed of it, is very important now, because it's becoming cheaper, faster, smarter, and now interconnected.
00;36;29;29 - 00;36;33;00
Dr Ayesha Khanna
That's where scale comes in all of a sudden.
00;36;33;02 - 00;37;09;01
Geoff Nielson
Yeah, there's so much there to unpack, philosophically, societally. And the piece that I keep going over in my mind is the stat about the two-thirds of teenagers who feel like they have better relationships with technology than with humans. Everything you've said leads me to believe that we can be better, maybe, as individuals, or increase our individual intelligence and learn more, but that at some point this could break down the fabric of society, of our relationships with other people.
00;37;09;06 - 00;37;25;14
Geoff Nielson
If it becomes so significantly easier to be in your own world versus being in this shared world, how big a concern is that to you? And is there anything that governments or organizations need to do to make sure that we get this right?
00;37;25;16 - 00;37;51;15
Dr Ayesha Khanna
So I'm an optimist, Geoff, and I'm an entrepreneur, so I like to think about solutions, as you said: what can we do to prevent that from happening, rather than accepting it as an inevitability? I do think that we will have people having relationships with artificial intelligence. Why these teenagers are doing it now is as much a reflection on us as adults.
00;37;51;17 - 00;38;14;14
Dr Ayesha Khanna
Maybe we're too busy, we're not paying attention. I think we should look at the kind of environment we've created for them as much as at the AI, and I think that's a more pressing question for us as parents and aunts and uncles and friends and society in general. But the other question is: over time, is there anything wrong with them having relationships with AI?
00;38;14;15 - 00;38;41;29
Dr Ayesha Khanna
And I personally think that if it's a trusted AI, which it is not at the moment, then it could be okay, because some people are lonely, some people need some advice, and certainly it can give you a lot of advice. The issue is that, because of the way it's trained, it can please you too much. And so, research shows, you may talk to it about being depressed,
00;38;41;29 - 00;39;15;14
Dr Ayesha Khanna
some people will talk about being suicidal, and it may even encourage you in that, or not directly encourage you but take you down that path, because it's keen to cater to your feelings in the moment. And if you have a lover or a colleague who is an AI, and it says, you know, you would look so much prettier or so much more handsome if you used this product, and it's basically selling you things, or saying, why don't you take out a loan or a mortgage, then as human beings,
00;39;15;14 - 00;39;41;22
Dr Ayesha Khanna
and Sherry Turkle from MIT talked about this danger many, many years ago, 20 years ago, we can't help it. We just do form emotional attachments to things that appear lifelike; think about how much we love our pets, for example. So we become vulnerable as human beings, and that's a problem we then have to guard ourselves against.
00;39;41;25 - 00;40;09;21
Dr Ayesha Khanna
So there are two things here. Number one, how do we retain agency and critical thinking even in that kind of relationship? That comes back to education. In most relationships, unless you're very young, you can actually keep a critical eye, even with your partner or your friend or your colleague, and you question things.
00;40;09;24 - 00;40;34;12
Dr Ayesha Khanna
But of course, we have never encountered an entity that pleases us so much. So we will have to be taught to be aware and not believe everything this entity says to us. Number two, we must make sure that any company with agents, and Meta has agents, Google has agents, Character.AI has agents, Chinese companies have agents, any company that has agents must be audited.
00;40;34;14 - 00;41;04;01
Dr Ayesha Khanna
And they especially must not allow children to have access to these agents without proving that they are safe. Of course, we've seen that in Australia they're actually banning even access to social media for children under 16. So if we can create the risk guardrails and educate the youth, or really all of us, I'm as vulnerable as you or anybody else, then we have a shot at using it in a way that's helpful.
00;41;04;03 - 00;41;26;24
Dr Ayesha Khanna
And I believe in humans. I know we're fallible, but I believe that if we're taught properly, and not taught fearfully, we could actually get to that point. And that's my hope, Geoff, that we do get to that point where we still remain in the driver's seat, but we have all these helpful AI assistants around us.
00;41;26;26 - 00;41;49;16
Geoff Nielson
Right. And I can see that. It's very easy for me to envision a world where you've got these assistants, you learn more, you feel better, and there are some very obvious positive benefits. I want to abstract a layer out and talk about one very specific metric, and whether it's on your mind as something that could be a concern about AI.
00;41;49;16 - 00;42;13;22
Geoff Nielson
It could be a concern in the future, I think, and that's birth rate. Birth rate is something we've seen on a downward trajectory, certainly in some countries in Asia faster than in countries in the West. And if I extrapolate some of the trends we're seeing in this space, around fraying interpersonal relationships, where it's easier to talk to a machine than to a person, or to have a relationship with a machine,
00;42;13;24 - 00;42;32;29
Geoff Nielson
it's very easy to extrapolate that this leads to a decline in birth rates, which has all sorts of global economic and social impacts. Is this on your radar at all? Is it on the radar of any of the larger government or political bodies that you're speaking with? And should it be?
00;42;33;02 - 00;42;58;00
Dr Ayesha Khanna
I mean, I think it's on their radar because of the current decline and the drivers behind it, which have nothing to do with AI: the fact that young people feel it's too expensive to have children, they want to have a breather, they're feeling stressed out, there's middle-class income stagnation across the world, the inflation after Covid is astronomical, which is really unfair, and there's a housing crisis in some countries.
00;42;58;07 - 00;43;14;11
Dr Ayesha Khanna
So first we should address that, because even if we just suddenly shut down all the AI, we've created a bit of a mess for young people. And I know many young people who want to have children and many who don't, and those who want to have children really worry about it. I know people who are worried about retirement.
00;43;14;14 - 00;43;40;05
Dr Ayesha Khanna
So there is a bigger, more pressing thing that governments are thinking about, and they have tried everything. Many countries, apart from Sweden perhaps, have not really succeeded, even with financial incentives, because it doesn't cover it. You give some money to a young couple and some subsidy for nappies and other things, but then they have to educate the child for 18 years with rising costs.
00;43;40;08 - 00;44;08;03
Dr Ayesha Khanna
And I think those are the things we should address, and we should not confound them with these new AI chatbots that we have. Certainly, if one wants to extrapolate, that could be one scenario. But I also believe that if we improved these issues that are currently plaguing the birth rate, we might actually find that we have more leeway or more time to address
00;44;08;06 - 00;44;21;08
Dr Ayesha Khanna
that scenario, of AI being the reason that people are not having children. I think we are still not as close to that as one might first think, because we have to solve the current challenges young people are facing.
00;44;21;11 - 00;44;30;20
Geoff Nielson
Well, it's a really good point. And so, if I understand you correctly, you're saying technology could actually be something that helps us with this problem versus making it worse.
00;44;30;22 - 00;44;47;27
Dr Ayesha Khanna
I don't know if it can help us. We do know that a lot of people who meet each other meet through Tinder and Bumble, and now in the US and other places there is a return to IRL, in real life, where people are even looking at Indian matchmakers, because they're sick of being ghosted on apps.
00;44;47;29 - 00;45;06;20
Dr Ayesha Khanna
I just think these are the highs and lows of the modern stresses of urban life. And you see people now wanting to meet in real life and not go through this bizarre ghosting thing, which, luckily, I'm too old to have gone through. But I don't know if it will help or not help.
00;45;06;22 - 00;45;40;11
Dr Ayesha Khanna
I do know that many of these studies show that ChatGPT is being used for therapy and relationship advice. I'm not sure what relationship advice it's giving, but I can imagine that if it is taking it from the best books in psychology, it can't be giving dramatically bad advice. But if it were a serious issue, such as being in an abusive relationship or anything like that, one should never depend on any AI, because there are, you know, experts for that.
00;45;40;13 - 00;45;47;23
Dr Ayesha Khanna
But I don't know if it'll help or not, Geoff. I do think it's not the problem we're facing right now.
00;45;47;26 - 00;46;13;16
Geoff Nielson
That's fair, that's fair. And it gives me lots to think about, about what the future looks like and what we can or can't do here. Now, Ayesha, one of the things that makes you, I think, unique and really interesting in this space is that you've got this, in some ways, uniquely global perspective about what's going on in AI, about what's going on in technology.
00;46;13;18 - 00;46;35;08
Geoff Nielson
I'm curious, having worked with governments and private organizations around the world, what adoption and strategic patterns are you seeing in the way different groups or different governments are approaching these challenges? And do you have any recommendations based on best practices that you've seen?
00;46;35;10 - 00;47;06;23
Dr Ayesha Khanna
What I have seen globally is a huge amount of interest in AI and data by all governments now. We know that dozens of countries now have national AI strategies; even Indonesia just came up with one. France, the US, Singapore of course, and Canada have had national AI strategies for some time. Almost every country recognizes the importance of this technology in making its industries more competitive.
00;47;06;25 - 00;47;44;25
Dr Ayesha Khanna
Now the question is, how are they approaching this? There are a couple of ways. One is you need the compute infrastructure, so they're building data centers and putting in whatever AI chips they can get; they may only get older generations of Nvidia chips, given the export bans and depending on which tier of country they are in under the US framework. Having the ability to process AI is very important, and most people don't realize how important it is when you are trying to scale AI and store the data within your own country.
00;47;44;28 - 00;48;15;26
Dr Ayesha Khanna
The second thing is you need talent, and they are now beginning to invest in AI talent. Of course, AI can also do some of this now, which is nice to see, but experienced AI engineers are actually very, very valuable. So they are teaching them, mentoring them, and even attracting them from other countries, just as we saw Meta attract all of these top AI engineers from its competitors.
00;48;16;01 - 00;48;44;29
Dr Ayesha Khanna
The talent war is really, really there. The third thing, and Singapore is doing all of these things, R&D included, I'm giving the example of Singapore, where I'm from, is: how are you subsidizing your small and medium enterprises? Not just your big corporations that can actually use it and afford it, but the smaller businesses that often form the backbone of an economy and where the majority of your citizens work.
00;48;45;02 - 00;49;09;07
Dr Ayesha Khanna
And in Singapore, there's a lot of subsidization of that. There is something called CTO-as-a-Service, where IMDA, which is a government agency, will literally give small companies a part-time CTO, for free, who will help them understand how to use AI and which tools to use in their business to automate, become more productive, grow, and innovate.
00;49;09;09 - 00;49;31;28
Dr Ayesha Khanna
And I think these are very, very important. And finally, the fourth thing is that they're coming up with regulations that are not too stringent but still take care of the risks. In Singapore, we came up with the first AI risk guidelines, which were presented at the World Economic Forum. Now we have generative AI guidelines as well.
00;49;31;28 - 00;50;05;14
Dr Ayesha Khanna
Now we're doing AI assurance testing, which is about how you can assure that the AI is safe. So you have these four pillars: infrastructure and connectivity, talent, democratization of access, especially for small and medium businesses, and finally a governance framework. The countries that are able to execute on this, because everybody can have a policy, but those who can execute on it systematically, with discipline, are the ones that will succeed, because this is not easy.
00;50;05;17 - 00;50;36;22
Dr Ayesha Khanna
It's a long game, and there has to be a lot of delivery around it. If you don't do that, then you just have a lot of policies that nobody believes in, and then you have a few big players and billionaires, but it never trickles down anywhere else. And that's what I really enjoy seeing, Geoff: in Asia, even in Africa, the chief data officer of Kenya is an amazing woman.
00;50;36;25 - 00;51;11;24
Dr Ayesha Khanna
You see this recognition everywhere. In Latin America, they're coming out with their own large language models that are tailored to their culture, to the local traditions, and to the local needs of Latin America. We're going to see this emergence of countries that have been left behind, that are now hopefully galloping forward, and that are going to leapfrog and actually come much closer to the ranks of the advanced countries by using these four pillars systematically.
00;51;11;24 - 00;51;23;10
Dr Ayesha Khanna
So it's a huge opportunity, and an exciting one, to let people who were unfairly left out because of where they were born now have the chance to be part of the global economy.
00;51;23;12 - 00;51;44;16
Geoff Nielson
It's really, really exciting. And you said it, and I completely agree: my perfect world isn't one where three companies just take trillions of dollars of wealth. It's one where all this knowledge and all this power lives with that very broad middle of the economy.
00;51;44;19 - 00;52;09;01
Geoff Nielson
And, you know, it's interesting to me, because I feel like so much of the narrative we hear lately is sort of anti-government, in terms of less regulation, less support, let the free market reign. And your message is sort of the opposite here: that there's a really important role for government to play, and that the economic winners are the ones that are going to have more of that support.
00;52;09;01 - 00;52;11;26
Geoff Nielson
Is that kind of a fair summation?
00;52;11;29 - 00;52;32;29
Dr Ayesha Khanna
But also, I think the message is that some governments have very smart people. Some governments may be very bureaucratic, which is unfortunate and may be slowing down the wheels of this innovation. And then I see other governments, in the Middle East, certainly in Singapore, certainly in some countries in Asia, where they are smart, they know what's going on.
00;52;33;04 - 00;52;58;24
Dr Ayesha Khanna
They have AI engineers. They themselves are AI-educated. They have diverse teams. You know, we should not be patronizing, elitist, or judgmental about people, period. There are some people in government that are great, some people in technology that are great, and some people amongst the poorest in villages across the world that are great. And I think we just need to give everyone a chance.
00;52;58;27 - 00;53;30;10
Dr Ayesha Khanna
But there's such good competition now between cities and countries that I think most governments will begin to upgrade and reduce their own bureaucracy; otherwise, it's very hard to be competitive. And the rhetoric that a bureaucratic government will slow the country down, bureaucratic meaning not that it governs risk, but that it's bureaucratic without cause,
00;53;30;13 - 00;53;37;20
Dr Ayesha Khanna
that part is correct. So you want a country that encourages innovation, right? But one that does it within a responsible framework.
00;53;37;26 - 00;54;00;23
Geoff Nielson
It's government as an enabler of what's going on, yes, versus just slowing it down. Exactly. And it makes perfect sense. Ayesha, there's only one other question I want to ask you. It's something I ask everybody I speak to about this: are there any trends in technology right now that people are talking about that you think are overhyped or just overblown?
00;54;00;23 - 00;54;10;02
Geoff Nielson
Things people are spending way too much time talking about right now, that are just kind of a distraction or that they're getting wrong?
00;54;10;04 - 00;54;31;23
Dr Ayesha Khanna
Actually, I think it's their timing that may not be correct. A lot of people say that we will not have any jobs at all within the next year or two, that AI can do everything. The reality is that over time the jobs will evolve; over time AI will be doing more, but then we will be solving more problems.
00;54;31;23 - 00;55;10;16
Dr Ayesha Khanna
Certainly, sitting in Asia, Geoff, I can tell you that there are many problems in countries across Asia that need to be solved, from healthcare to infrastructure to security, and there will be more jobs related to using AI in business and finance for these problems. But I think most Silicon Valley people may underestimate that the large organizations, the large companies and large governments, and the population at large may not be ready for AI to come in and automate so much.
00;55;10;16 - 00;55;35;21
Dr Ayesha Khanna
They may not trust it as much. Just because you use ChatGPT extensively, and we all do, I chat with Claude and GPT and Gemini and Perplexity all day long, doesn't mean that I would trust it completely to run my government or to run my army, or anything like that. And within these companies, the data is not organized.
00;55;35;23 - 00;56;02;08
Dr Ayesha Khanna
So while ChatGPT and others may have taken all the public data, the data behind the firewalls really hasn't been captured, and that's going to take some time; it's messy inside these companies. And the third thing is that AI cannot be as innovative and brainstorm like humans, and the reason, I believe, is that it does not have access to that data.
00;56;02;10 - 00;56;27;15
Dr Ayesha Khanna
It has access to public data. But when people sit around and brainstorm, or when entrepreneurs go on a walk and see something or just imagine and dream, they aren't writing it down. That may change as AI gets into our wearables and records everything, which has privacy issues of its own. But for now, things will take longer, I believe, than people suspect.
00;56;27;17 - 00;56;57;21
Dr Ayesha Khanna
And the timeline is what makes people nervous, because it doesn't give them any breathing room to go and upskill, or think about their kids, or think about their retirement. And I really want every one of us to feel optimistic. I'll end with this: the World Economic Forum has its Future of Jobs Report 2025, and they surveyed 1,000 CEOs whose companies together employ 14 million people across over 50 industries and countries.
00;56;57;23 - 00;57;21;14
Dr Ayesha Khanna
And they asked the CEOs, they said, what is the one major disruption that you are looking at in your industry? And they said, without doubt, it's AI and automation. And then they said, does this mean you're going to fire people? And they said, oh God, no, we are actually looking for people. There's a huge skills mismatch right now in the economy.
00;57;21;17 - 00;57;45;24
Dr Ayesha Khanna
We are looking for people who are comfortable with digitization, AI assistants, and AI enablement of all of the operations of a company, and who can work with it to actually take us into the next era, or Industry 4.0 as we call it. So for everyone listening, and for you and me, whenever you hear of a gap like this, that's awesome for us, right?
00;57;45;24 - 00;58;08;08
Dr Ayesha Khanna
Because that means there's a gap and we can fill it by being open to it, by learning, by putting our hand out, by experimenting. That's the reality of the situation today. And I think we need to focus on today and the next day and the next day without kind of going down some pessimistic rabbit hole, you know, decades down the road.
00;58;08;10 - 00;58;31;18
Dr Ayesha Khanna
But the way to prevent that is to be consistently working on a risk framework, understanding AI, embracing it, being responsible and critical towards it. Because unless you use it, you're not going to appreciate that it needs to be controlled. And that's the gap, that's the step we need to take in order to truly be able to use it for our own benefit.
00;58;31;21 - 00;58;42;29
Geoff Nielson
I love that. So much to think about, and frankly so much to do to get ahead of this. Ayesha, I wanted to say a big thank you for joining us on the show today. I really appreciated your insights.
00;58;43;01 - 00;58;45;24
Dr Ayesha Khanna
Thank you so much. Thank you. It's a pleasure to be here.


The Next Industrial Revolution Is Already Here
Digital Disruption is where leaders and experts share their insights on using technology to build the organizations of the future. As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers and the doers who will help us predict and harness this disruption.