Our Guests Dr. Emily Bender and Dr. Alex Hanna Discuss
From Dumb to Dangerous: The AI Bubble Is Worse Than Ever
Are we heading toward an AI-driven utopia, or just another tech bubble waiting to burst?
Today on Digital Disruption, we’re joined by Dr. Emily Bender and Dr. Alex Hanna.
Dr. Bender is a Professor of Linguistics at the University of Washington where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.
Dr. Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time.
Dr. Bender and Dr. Hanna sit down with Geoff to discuss the realities of generative AI, big tech power, and the hidden costs of today’s AI boom. Artificial intelligence is everywhere, but how much of the hype is real, and what’s being left out of the conversation? This discussion dives into the social and ethical impacts of AI systems and why popular AI narratives often miss the mark. Dr. Bender and Dr. Hanna share their thoughts on the biggest myths about generative AI, why we need to challenge them, and the importance of diversity, labor, and accountability in AI development. They’ll answer questions such as where AI is really heading, how we can imagine better, more equitable futures, and what technologists should be focusing on today.
00;00;00;24 - 00;00;01;24
Hey everyone!
00;00;01;24 - 00;00;03;14
I'm super excited to be sitting down
00;00;03;14 - 00;00;07;10
with the authors of The AI Con, Doctor
Emily Bender and Doctor Alex Hanna.
00;00;07;26 - 00;00;11;06
Doctor Bender is a professor
at the University of Washington,
00;00;11;10 - 00;00;15;11
and Doctor Hanna is director of research
at the Distributed AI Research Institute.
00;00;15;29 - 00;00;18;24
What's cool about these two
is that they're maybe the most vocal
00;00;18;24 - 00;00;19;28
critics of AI.
00;00;19;28 - 00;00;22;27
You'll hear anywhere
and think the whole thing is bullshit.
00;00;23;06 - 00;00;24;21
My words, not theirs.
00;00;24;21 - 00;00;27;24
What I want to ask them
is just how far the distrust goes.
00;00;28;04 - 00;00;30;29
What do they really think
this technology is good for?
00;00;30;29 - 00;00;34;16
And if it's as bad as they say,
what do we need to do to get the future
00;00;34;16 - 00;00;36;01
we actually want?
00;00;36;01 - 00;00;39;01
Let's find out.
00;00;39;25 - 00;00;40;24
I'm so excited to be
00;00;40;24 - 00;00;44;18
joined today by Doctor
Emily Bender and Doctor Alex Hanna.
00;00;45;02 - 00;00;47;07
Thanks so much to both of you
for joining today.
00;00;47;07 - 00;00;51;03
I wanted to start by just, you know,
asking a little bit about, you know,
00;00;51;03 - 00;00;54;11
the message that you two have been,
you know, sort of road showing these days.
00;00;54;11 - 00;00;56;12
You know,
you've got a very clear perspective
00;00;56;12 - 00;00;59;24
on, on AI, on,
on kind of the future of this technology.
00;01;00;07 - 00;01;03;13
And maybe for those who don't know,
could you lay that out for us?
00;01;04;01 - 00;01;04;22
Yeah.
00;01;04;22 - 00;01;09;28
So the book is called The AI Con, and it is
what it says on the tin: that AI is a con.
00;01;10;03 - 00;01;14;28
First off, the term AI itself
is not a coherent set of technologies.
00;01;14;29 - 00;01;16;11
It is a marketing term
00;01;16;11 - 00;01;20;26
and has been from the beginning,
from the initial convening in 1956,
00;01;20;26 - 00;01;24;12
in which John McCarthy and Marvin Minsky
invited a bunch of folks
00;01;24;28 - 00;01;29;22
to Dartmouth College to have a discussion
around, quote unquote, thinking machines.
00;01;30;03 - 00;01;31;16
So that's one part of it.
00;01;31;16 - 00;01;35;12
The second part of it
is that the current era of AI,
00;01;35;13 - 00;01;38;15
the generative AI tools,
including large language models
00;01;38;15 - 00;01;42;11
and diffusion
models, really are premised on this idea
00;01;42;11 - 00;01;45;08
that there is a thinking mind
behind them.
00;01;45;08 - 00;01;48;01
That itself is a
kind of con in the framing.
00;01;48;01 - 00;01;48;27
I'm so sorry.
00;01;50;20 - 00;01;51;19
And so then that
00;01;51;19 - 00;01;55;19
is something we do for,
you know, a few different reasons.
00;01;55;19 - 00;02;00;13
One of them being that
there is a human desire to ascribe meaning to language —
00;02;00;13 - 00;02;04;20
to the synthetic media,
00;02;04;20 - 00;02;08;27
but specifically the synthetic text
that is an output of these models.
00;02;09;14 - 00;02;12;04
And that leads us
to a whole bunch of different things.
00;02;12;04 - 00;02;13;16
If there's a,
00;02;13;16 - 00;02;17;18
potential mind behind these technologies,
then that means that they can be
00;02;17;18 - 00;02;23;10
a replacement for so many different types
of, things that require humans.
00;02;23;12 - 00;02;25;21
Things like white collar work,
00;02;25;21 - 00;02;29;11
social services,
medical services, teaching and the like.
00;02;29;25 - 00;02;33;21
I was just going to add quickly
to amplify the last point that because
00;02;33;21 - 00;02;37;05
especially with the large language models,
and let me back up one second and say,
00;02;37;05 - 00;02;40;25
I will never use the term artificial
intelligence to refer to technology,
00;02;41;01 - 00;02;44;04
because I think it is a misnomer,
and I think it just confuses things.
00;02;44;04 - 00;02;45;18
And so I will talk about automation
00;02;45;18 - 00;02;48;18
to talk about things in general
or name the specific technology.
00;02;48;21 - 00;02;51;17
So in the case of large language models,
especially when they are used
00;02;51;17 - 00;02;56;08
as synthetic text extruding machines,
we experience language.
00;02;56;09 - 00;02;58;20
And then we are very quick
to interpret that language.
00;02;58;20 - 00;03;02;01
And the way we interpret it involves
imagining a mind behind the text.
00;03;02;09 - 00;03;05;07
And we have these systems
that can output plausible-looking text on
00;03;05;07 - 00;03;06;04
just about any topic.
00;03;06;04 - 00;03;09;29
And so it looks like we have nearly-there
solutions to all kinds
00;03;09;29 - 00;03;12;29
of technological needs in society.
00;03;13;00 - 00;03;16;12
But it's all fake and we should not be
putting any credence into it.
00;03;17;21 - 00;03;19;03
I think that's so interesting.
00;03;19;03 - 00;03;21;14
And I'm I'm absolutely of the same mind,
by the way.
00;03;21;14 - 00;03;24;14
And I found myself laughing when I was,
when I was reading through your book,
00;03;25;11 - 00;03;28;23
you know, first of all,
artificial intelligence.
00;03;28;23 - 00;03;29;22
I completely agree.
00;03;29;22 - 00;03;33;11
Like, first of all, I do have to
give credit because it is great marketing.
00;03;33;11 - 00;03;36;11
Like,
it just it's so evocative of something.
00;03;36;14 - 00;03;39;21
But you know, nobody can really seem
to define exactly what that is.
00;03;39;21 - 00;03;40;18
And of course, it,
00;03;40;18 - 00;03;43;22
you know, has all these ideas
and can be used for any purpose. But,
00;03;45;00 - 00;03;47;02
one of the things you do early on in
the book is you
00;03;47;02 - 00;03;50;11
kind of just pop that balloon by saying,
well, you know what?
00;03;50;11 - 00;03;53;03
If it wasn't called
artificial intelligence?
00;03;53;03 - 00;03;54;07
Can you share a little bit about,
00;03;55;08 - 00;03;56;24
you know, what that sounds like,
00;03;56;24 - 00;04;00;13
and, you know,
why you encourage people to do that?
00;04;00;22 - 00;04;00;28
Yeah.
00;04;00;28 - 00;04;03;28
So we have a few fun alternatives
that we call on.
00;04;04;00 - 00;04;07;23
Early on in our podcast,
Alex coined Mathy Maths as a fun one.
00;04;08;08 - 00;04;12;00
There's also, due to the Italian researcher
Stefano Quintarelli, Salami,
00;04;12;04 - 00;04;14;10
which is an acronym
for Systematic Approaches
00;04;14;10 - 00;04;16;19
to Learning Algorithms
and Machine Inference.
00;04;16;19 - 00;04;19;12
And the funny thing about that is,
if you take the phrase
00;04;19;12 - 00;04;21;29
artificial intelligence
in a sentence like, you know, does
00;04;21;29 - 00;04;25;05
AI understand,
or can AI help us make better decisions,
00;04;25;05 - 00;04;27;18
and you replace it with mathy math
or salami?
00;04;27;18 - 00;04;29;27
It's immediately obvious
how ridiculous it is.
00;04;29;27 - 00;04;31;24
You know, does the salami understand?
00;04;31;24 - 00;04;34;01
Will
the salami help us make better decisions?
00;04;34;01 - 00;04;36;07
It's. You know, it's absurd.
00;04;36;07 - 00;04;39;24
Just sort of putting that little flag in
there, I think is a really good reminder.
00;04;40;00 - 00;04;43;04
Now, I think that, that to me,
that just hits it
00;04;43;04 - 00;04;46;14
right on the head, and was exactly
what I was getting at there.
00;04;47;15 - 00;04;50;15
That there's so much people.
00;04;50;20 - 00;04;50;27
What?
00;04;50;27 - 00;04;54;06
Once you have the idea of a mind,
it just gets in your head, right?
00;04;54;06 - 00;04;58;02
And it actually gets at the idea
that in some ways, like these algorithms
00;04;58;02 - 00;05;00;15
we're describing, as you said,
they go back to the 50s, right?
00;05;00;15 - 00;05;04;00
As long as we've had computing,
we've had these notions.
00;05;04;00 - 00;05;07;00
And it seems like it seems like
it's really an extension of that.
00;05;07;12 - 00;05;08;26
I wanted to expand the question, though.
00;05;08;26 - 00;05;11;26
There's so much misinformation.
00;05;12;08 - 00;05;15;21
Dare I call it disinformation around
artificial intelligence
00;05;15;21 - 00;05;19;17
in this whole sphere
that benefits, you know,
00;05;19;17 - 00;05;22;24
big tech that benefits, you know,
specific people and organizations.
00;05;23;15 - 00;05;26;15
What do you think is the most dangerous
00;05;27;03 - 00;05;30;12
single myth or multiple myths
right now in this sphere?
00;05;31;21 - 00;05;32;08
Yeah, it's
00;05;32;08 - 00;05;36;08
hard to rank the myths behind all this
00;05;36;08 - 00;05;42;13
because all of this is so upsetting
and harmful.
00;05;43;04 - 00;05;47;18
I'd say that one of the things
that I think underlies a lot of this
00;05;47;18 - 00;05;52;02
is that there's a notion of, let's say,
a singular type of intelligence.
00;05;52;16 - 00;05;56;08
And so that's something that's not really
well supported by science.
00;05;56;08 - 00;05;59;14
First off, that there's a notion
that intelligence can be
00;05;59;14 - 00;06;02;15
reduced to a single number.
00;06;02;27 - 00;06;06;18
And some of the stuff that has, you know,
some of the pseudoscience
00;06;06;18 - 00;06;09;24
that supports that is based
on a lot of eugenicist thought,
00;06;09;24 - 00;06;13;07
the idea that, like, you can have an IQ
test, the IQ test can be used
00;06;13;18 - 00;06;17;26
to rank-order people — and not just people but
00;06;19;04 - 00;06;22;20
machines, kind of in a line,
and you can order them.
00;06;22;26 - 00;06;27;16
That's why you have people like, at OpenAI —
I think
00;06;27;16 - 00;06;31;23
Mira Murati had said that,
you know, we're going to develop GPT-5.
00;06;31;24 - 00;06;37;13
It's going to have PhD-level intelligence,
or there's going to be a PhD-level
00;06;38;26 - 00;06;39;28
agent that people
00;06;39;28 - 00;06;42;28
can use for $20,000 a month.
00;06;43;06 - 00;06;44;11
And so it does a few things.
00;06;44;11 - 00;06;48;09
It reduces the notion of what
it means to be human to something
00;06;48;09 - 00;06;53;13
that is more computable or automatable,
which also has historical antecedents.
00;06;53;25 - 00;06;57;14
It also does this thing where
we're getting into this really dangerous
00;06;57;14 - 00;07;00;16
notion of what intelligence is
and what consciousness is,
00;07;00;23 - 00;07;04;02
as if one needs to be just smarter
to be conscious and therefore human.
00;07;04;17 - 00;07;07;17
And so that's a very dangerous line
to go down to.
00;07;08;00 - 00;07;09;09
And so,
00;07;09;09 - 00;07;14;03
you know, that kind of rank ordering,
I mean, of kinds of applications in
00;07;14;04 - 00;07;17;14
the world —
it is probably not helpful to do that.
00;07;17;14 - 00;07;21;10
But that kind of foundational myth around
AI itself is very dangerous.
00;07;22;12 - 00;07;25;12
And like Alex,
I have a hard time ranking these things
00;07;25;15 - 00;07;28;24
and I, you know,
want to add another piece of the puzzle,
00;07;28;24 - 00;07;31;04
which is another, another one of the myths
that I think is feeding into this
00;07;31;04 - 00;07;35;16
is this idea that if you take a system
that has unfathomably large data
00;07;35;16 - 00;07;39;07
inside of it,
it must therefore have an unbiased
00;07;39;07 - 00;07;42;13
sort of bird's eye view into
what's really going on in the world.
00;07;42;20 - 00;07;46;18
And that is definitely a kind of
wishful thinking, like we want there
00;07;46;18 - 00;07;50;27
to be some peak we could climb to from
which we could just understand everything.
00;07;51;05 - 00;07;53;23
But that's not how science works.
That's not how society works.
00;07;53;23 - 00;07;55;03
And it's not a path towards
00;07;55;03 - 00;07;58;09
building technologies that are functional,
let alone fair.
00;07;59;00 - 00;07;59;10
Right.
00;07;59;10 - 00;08;00;25
So, you know,
00;08;00;25 - 00;08;02;20
just just,
you know, following that thread,
00;08;02;20 - 00;08;05;09
it sounds like I mean, there's a few
things at play here. One of them is that
00;08;06;16 - 00;08;07;04
what we have is
00;08;07;04 - 00;08;11;02
this kind of false foundation:
that intelligence itself
00;08;11;02 - 00;08;14;15
is anything more than this
kind of nebulous abstraction, right?
00;08;14;15 - 00;08;17;28
As soon as we say, oh, it's this,
we're already kind of kidding ourselves
00;08;17;28 - 00;08;20;28
and going down a dangerous road there.
00;08;21;12 - 00;08;23;13
And then it seems like there's almost.
00;08;23;13 - 00;08;26;28
And, you know, tell me if I'm, if I'm
misrepresenting you, but it seems like
00;08;27;11 - 00;08;32;16
there's this drive to pretend
we're creating something objective.
00;08;33;00 - 00;08;37;24
But this objective thing just also
so happens to look an awful lot like what
00;08;37;24 - 00;08;42;06
objectivity — for those who don't have
a visual, in heavy air quotes —
00;08;42;10 - 00;08;46;06
looks like,
you know, from the office of,
00;08;46;06 - 00;08;49;06
you know, a California technology leader.
00;08;49;09 - 00;08;51;07
Is that fair? Yes.
00;08;51;07 - 00;08;54;11
And I mean, there's so much
in terms of scholarship that's showing
00;08;54;11 - 00;08;58;23
that the use of these systems
is anything but objective, right?
00;08;58;23 - 00;09;02;16
From the foundational
work from Buolamwini and Gebru
00;09;02;23 - 00;09;06;13
to, you know, other work
in that line,
00;09;06;24 - 00;09;11;26
to my own work that's focused on the
data sets and their politics.
00;09;13;01 - 00;09;13;25
Really, any
00;09;13;25 - 00;09;17;09
of these systems
are making judgments of some kind,
00;09;17;09 - 00;09;22;00
and they are encoding
majoritarian notions of what it is to,
00;09;22;28 - 00;09;26;24
you know, have a face when it comes
specifically to facial analysis systems,
00;09;27;02 - 00;09;29;27
or to what it looks like to be someone
00;09;29;27 - 00;09;32;27
in a certain kind of job,
00;09;33;03 - 00;09;36;05
you know, which tends
to have higher status jobs that are,
00;09;36;05 - 00;09;39;05
that are whiter and more male. And so,
00;09;39;10 - 00;09;42;12
you know, those are the things
that get encoded in that.
00;09;42;12 - 00;09;43;12
We have a
00;09;43;12 - 00;09;44;04
word
00;09;44;04 - 00;09;48;02
for what it means when we trust machines
to have an objective view.
00;09;48;02 - 00;09;50;04
And that's called automation bias.
00;09;50;04 - 00;09;53;06
The notion that, like,
we're going to cede certain kinds of
00;09;54;21 - 00;09;57;21
operations to something that's automated.
00;09;57;24 - 00;10;02;01
And so that objective view
is certainly not objective; it is just
00;10;02;18 - 00;10;07;04
reifying views
that are in the majority,
00;10;07;11 - 00;10;10;11
and really to the detriment of people
who are not in that majority.
00;10;10;16 - 00;10;14;27
Along those lines, one of the themes or
sort of theses in your work seems to be,
00;10;15;11 - 00;10;19;22
you know, this notion that we can't
actually disentangle the technology
00;10;20;04 - 00;10;24;00
from the creators
and owners of the technology.
00;10;24;00 - 00;10;27;20
And I wanted to, you know, maybe ask you
to expand on that a little bit for people
00;10;27;20 - 00;10;31;20
who are, you know, just kind of learning
about this concept for the first time.
00;10;31;26 - 00;10;33;26
Yeah, absolutely. I think one of the,
00;10;35;02 - 00;10;36;17
parts of the con, in using
00;10;36;17 - 00;10;40;09
the phrase artificial intelligence is that
it's a way to displace accountability.
00;10;40;14 - 00;10;43;15
And if you look at the language
that's used around this, oftentimes
00;10;43;15 - 00;10;46;20
we put the so-called AI systems
in agent position.
00;10;46;29 - 00;10;49;28
We'll say, you know, ChatGPT
00;10;49;28 - 00;10;52;21
has scraped the whole web, for example.
00;10;52;21 - 00;10;53;28
The chatbot didn't do that.
00;10;53;28 - 00;10;55;14
The engineers at OpenAI did.
00;10;55;14 - 00;10;58;16
And some other people
whose data, the data sets they repurposed.
00;10;58;19 - 00;10;59;14
Right.
00;10;59;14 - 00;11;00;27
And also, you can't scrape the whole web.
00;11;00;27 - 00;11;03;21
That's a whole separate conversation.
00;11;03;21 - 00;11;06;22
But to basically
always keep the people in the frame
00;11;06;22 - 00;11;09;01
and say, who built it? For what purpose
did they build it?
00;11;09;01 - 00;11;11;26
Who's using it? Who are they using it on?
00;11;11;26 - 00;11;16;05
And also whose labor was either
just flat out appropriated
00;11;16;05 - 00;11;18;17
or otherwise exploited
in the production of the systems,
00;11;18;17 - 00;11;21;09
really helps
keep the conversation grounded.
00;11;21;09 - 00;11;24;12
It's also helpful to note that in
00;11;24;18 - 00;11;29;08
so much of this conversation, I think,
00;11;31;18 - 00;11;34;06
there's a notion that
00;11;34;06 - 00;11;37;19
these companies that are building
this are somehow
00;11;37;20 - 00;11;40;25
magnanimous, that they're doing this
for the benefit of all humanity.
00;11;41;08 - 00;11;44;16
This is something that's really well
underscored in Karen Hao's work called
00;11;44;18 - 00;11;48;01
Empire of AI,
that focuses on the kind of drama,
00;11;49;03 - 00;11;51;29
Empire of AI
that focuses specifically on OpenAI
00;11;51;29 - 00;11;56;03
and this sort of drama behind OpenAI,
and specifically on how
00;11;56;07 - 00;11;59;07
so many of the people behind
that technology are very fallible,
00;12;00;06 - 00;12;04;12
specifically around Sam Altman
and many that surround him.
00;12;05;03 - 00;12;09;08
And so it's really, you know,
one thing that she
00;12;09;11 - 00;12;14;14
highlights is the way that you don't
come up with magnanimous technologies.
00;12;14;14 - 00;12;15;09
I mean, technologies
00;12;15;09 - 00;12;18;29
that actually work for people are ones
that are closer to those people
00;12;19;06 - 00;12;22;06
and to the people themselves —
that are built by those people,
00;12;22;06 - 00;12;25;06
or for those people, ideally by them.
00;12;25;08 - 00;12;30;00
And so, you know, like the idea
that there is no human in the frame is,
00;12;30;01 - 00;12;33;14
is a mechanism
not only to displace accountability,
00;12;33;14 - 00;12;36;14
but to displace
where the power really lies.
00;12;36;15 - 00;12;37;10
Right.
00;12;37;10 - 00;12;40;26
And one of the, you know, as I think about
that and I think about this notion of,
00;12;40;27 - 00;12;43;28
you know, benevolence,
the other sort of party line
00;12;43;28 - 00;12;46;02
that I've been hearing more and more
come out of that,
00;12;46;02 - 00;12;50;18
you know, out of those
organizations is that
00;12;51;27 - 00;12;53;19
it's described as kind of a race.
00;12;53;19 - 00;12;55;10
And, oh, we have to keep doubling down.
00;12;55;10 - 00;12;58;29
We have to keep investing in this because,
you know, it's winner take all.
00;12;58;29 - 00;13;01;12
And there's this conflict here.
00;13;01;12 - 00;13;03;09
Right? That, oh, it's for everybody,
00;13;03;09 - 00;13;05;19
But we have to do it for everybody.
00;13;05;19 - 00;13;07;05
And, you know —
00;13;07;05 - 00;13;10;08
what's your reaction to,
you know, that narrative?
00;13;10;10 - 00;13;13;00
The the race framing is just ridiculous.
00;13;13;00 - 00;13;16;14
It's it's based on a misconception
of how science and technology work.
00;13;16;29 - 00;13;19;29
And this is some thinking
that I learned from Beth Singler,
00;13;20;13 - 00;13;23;03
who's looked into this in detail.
She says, look, you can look backwards
00;13;23;03 - 00;13;27;11
and trace the path of sort of what
built on what to get to where we are now.
00;13;27;11 - 00;13;29;21
And you can draw a straight line
if you want.
00;13;29;21 - 00;13;32;19
But looking to the future,
the future doesn't exist ahead of time.
00;13;32;19 - 00;13;34;25
And the people who say, you know,
00;13;34;25 - 00;13;38;06
artificial intelligence or artificial
general intelligence is at the end of this
00;13;38;06 - 00;13;40;09
path, and it's just a question of who
runs down it
00;13;40;09 - 00;13;44;09
the fastest, have completely misunderstood
how science works, right?
00;13;44;09 - 00;13;46;08
So first of all, there's
this ill-defined notion
00;13;46;08 - 00;13;47;08
that they're running towards,
00;13;47;08 - 00;13;50;06
but they are asserting that it's there
and it definitely exists.
00;13;50;06 - 00;13;51;12
And we can get there.
00;13;51;12 - 00;13;55;05
And also asserting — and this, I think,
is a very Silicon Valley brain
00;13;55;05 - 00;13;56;11
way of thinking about things,
00;13;56;11 - 00;13;59;14
that the person who gets there first
is, as you said, winner takes all.
00;13;59;14 - 00;14;00;29
And you'll sometimes hear people say,
00;14;00;29 - 00;14;05;12
well, we have to build the good
AI or the good AGI lest the evil,
00;14;05;14 - 00;14;08;13
you know, opposition builds the bad one
00;14;08;13 - 00;14;12;07
as if somehow building one technology
could prevent the building of another.
00;14;12;07 - 00;14;14;11
Like it
actually just doesn't make any sense.
00;14;14;11 - 00;14;18;09
And just to build on that,
I mean, specifically, the focus here is on
00;14;18;17 - 00;14;21;16
that the US has to build this
before China.
00;14;21;16 - 00;14;25;05
And so the boogeyman here
is the kind of Sinophobic idea
00;14;25;05 - 00;14;28;28
that the Chinese AI or AGI
or whatever
00;14;28;28 - 00;14;32;29
it is, is going to be, by its nature,
authoritarian.
00;14;33;06 - 00;14;37;06
Given that political environment,
I'm not going to say anything about what
00;14;37;09 - 00;14;41;07
Chinese builders are doing,
but it's as if what we're doing in the US
00;14;41;07 - 00;14;42;17
isn't authoritarian.
00;14;42;17 - 00;14;46;01
By being built by one company
that is commanding
00;14;46;01 - 00;14;49;18
so many resources,
that has so many implications with,
00;14;50;17 - 00;14;51;11
with,
00;14;51;11 - 00;14;55;05
national security organizations,
with large energy companies.
00;14;55;12 - 00;14;58;12
I mean, these are things
which are a centralization of power.
00;14;58;12 - 00;15;03;12
And it's not
as if that itself is a democratic version of AI.
00;15;03;27 - 00;15;04;07
Right.
00;15;04;07 - 00;15;07;24
And so I think there's a real way
in which even the claims around
00;15;08;07 - 00;15;11;05
the race dynamics also,
00;15;11;05 - 00;15;15;02
obscure what is happening
in terms of power centralization
00;15;15;12 - 00;15;18;12
and how that's undermining
democratic dynamics
00;15;18;13 - 00;15;21;13
in the US and in the West.
00;15;21;15 - 00;15;26;03
The centralization of power comment
is really interesting to me, right?
00;15;26;03 - 00;15;26;26
Because that's
00;15;26;26 - 00;15;30;11
that's another one of these tensions
in another one of these narratives about,
00;15;31;05 - 00;15;31;27
it's benevolent.
00;15;31;27 - 00;15;33;04
It's for everybody.
00;15;33;04 - 00;15;35;28
It's going to make everybody better.
00;15;35;28 - 00;15;38;00
Oh, except we own the platform. Right.
00;15;38;00 - 00;15;40;14
And by the way, we're charging you,
you know, 20 bucks a month
00;15;40;14 - 00;15;43;19
to have this new, better life,
which I find really interesting.
00;15;43;24 - 00;15;47;23
Is there merit in your minds
to, you know, there's a lot of talk about,
00;15;49;00 - 00;15;50;26
you know, maybe maybe,
00;15;50;26 - 00;15;55;10
you know, generative AI, maybe automation,
you know, some of these functions
00;15;55;18 - 00;16;00;22
might not be as good as professionals
at any given task,
00;16;01;03 - 00;16;05;07
but they extend the entire market.
00;16;05;07 - 00;16;07;06
And market's
a bit of a dangerous word there,
00;16;07;06 - 00;16;10;15
but they extend the number of people
who services can be delivered to,
00;16;10;15 - 00;16;12;25
and they go
after the historically marginalized
00;16;12;25 - 00;16;16;25
in a way that is empowering
or makes their lives better.
00;16;16;28 - 00;16;18;23
Do you do you buy that?
00;16;18;23 - 00;16;21;23
Do you have optimism there or
do you think that's just part of the con?
00;16;21;24 - 00;16;23;05
It's really just part of the con.
00;16;23;05 - 00;16;26;17
The, the argument is always, well,
00;16;26;17 - 00;16;30;13
we don't have enough to provide
good education to everyone.
00;16;30;13 - 00;16;32;15
We don't have enough to provide
good health care to everyone.
00;16;32;15 - 00;16;34;09
And so these poor people are left out.
00;16;34;09 - 00;16;36;05
And so this is better than nothing.
00;16;36;05 - 00;16;37;29
And anytime you hear
"this is better than nothing,"
00;16;37;29 - 00;16;40;06
the question should always be:
why was the alternative
00;16;40;06 - 00;16;42;28
nothing?
Because we have enormous resources.
00;16;42;28 - 00;16;46;00
If you look at the resources
that are being poured into these systems
00;16;46;05 - 00;16;49;04
and imagine instead
those resources were used for
00;16;49;04 - 00;16;52;03
shoring up education systems
and health care systems.
00;16;52;19 - 00;16;54;09
Imagine what we could do with that.
00;16;54;09 - 00;16;57;12
You know, sort of rich
and like rich investment
00;16;57;12 - 00;17;00;14
view of social services
rather than an austerity view.
00;17;00;24 - 00;17;04;06
And very much it's tied into
I mean, when you get behind this,
00;17;04;06 - 00;17;07;29
it's tied into this idea
that those of means are going to get
00;17;08;07 - 00;17;12;29
these bespoke, human-oriented services
that are delivered by humans.
00;17;12;29 - 00;17;15;28
And we already see this
in other kind of technological domains.
00;17;16;10 - 00;17;20;08
Adrienne Williams was on our podcast,
and she's a former charter school teacher.
00;17;20;08 - 00;17;23;29
And one of the things that she talks about
as being a former charter school teacher
00;17;23;29 - 00;17;27;06
is that when it comes to edtech, things
00;17;27;06 - 00;17;31;03
like Google Classroom and classes like
00;17;32;09 - 00;17;32;26
that,
00;17;32;26 - 00;17;35;25
or content management systems
specifically in education,
00;17;35;25 - 00;17;40;08
they are disproportionately
offered in places where the students
00;17;40;08 - 00;17;44;01
are lower income, black and brown,
in lower income communities.
00;17;44;17 - 00;17;46;05
If you go to private schools,
00;17;46;05 - 00;17;50;02
there's very few screens or those screens
are used in a much different way.
00;17;50;02 - 00;17;53;23
They're not used for surveillance, or,
you know, they're pretty optional or,
00;17;53;23 - 00;17;56;23
you know, they're much more pen
and paper oriented.
00;17;57;06 - 00;17;59;19
And so we're seeing that dynamic.
00;17;59;19 - 00;18;04;10
And that's not actually providing
more opportunities
00;18;04;10 - 00;18;05;18
to people in those communities.
00;18;05;18 - 00;18;08;18
It is more heavily
surveilling those communities.
00;18;08;26 - 00;18;11;26
It is being used as a way to also,
00;18;13;06 - 00;18;17;08
say that there's an offer of
that technology, that they have access,
00;18;17;08 - 00;18;21;09
but that's not actually education
in the same way. Or, you know,
00;18;21;09 - 00;18;24;09
one of the things that AI boosters
are very excited about is,
00;18;25;04 - 00;18;28;16
following Ilya Sutskever,
the promise of cheap and effective therapy.
00;18;29;07 - 00;18;34;08
Because people cannot afford therapists
who could speak to them specifically,
00;18;34;13 - 00;18;35;27
people aren't getting therapy;
00;18;35;27 - 00;18;39;03
they're getting some kind of a machine
that might just be telling them
00;18;39;03 - 00;18;43;10
what they want to hear, or leading them
down dangerous kinds of, particular
00;18;43;10 - 00;18;47;27
behaviors, or maybe encouraging them
to engage in self-harm.
00;18;48;14 - 00;18;51;01
And so, what really gives away
the game
00;18;51;01 - 00;18;54;04
is this comment from Greg Corrado,
who is the
00;18;54;26 - 00;18;57;29
head of health
AI at Google Health, to use their term.
00;18;58;24 - 00;19;03;22
And he was debuting Med-PaLM 2 at Google
I/O, and in a press junket
00;19;03;22 - 00;19;08;09
he had said to the Wall Street Journal,
you know, this thing is not something
00;19;08;09 - 00;19;10;22
I'd want in my own family's
medical journey,
00;19;10;22 - 00;19;13;08
but I'm very excited for it
to be available to everybody else.
00;19;14;26 - 00;19;16;29
Wow. Yeah.
00;19;16;29 - 00;19;19;08
What
more do you need to say about a product
00;19;19;08 - 00;19;22;09
than, you know, I wouldn't
have my family use it?
00;19;22;09 - 00;19;23;25
Right.
00;19;23;25 - 00;19;27;00
It's like, you know,
the quotes from the heads of,
00;19;27;01 - 00;19;28;05
you know,
00;19;28;05 - 00;19;29;17
social media networks saying, well,
00;19;29;17 - 00;19;31;14
I certainly wouldn't
want my kids on this, right.
00;19;31;14 - 00;19;34;14
It's like, okay, well, that's
the whole ballgame, man.
00;19;34;24 - 00;19;35;21
That's right. Yeah.
00;19;35;21 - 00;19;38;19
I think Sam Altman,
at one of the recent, I think the recent
00;19;38;19 - 00;19;41;19
Senate Commerce hearing, was asked by someone,
00;19;41;19 - 00;19;44;19
you know, you wouldn't want your,
00;19;44;19 - 00;19;48;13
your new kid to, you know, be friends
with an AI agent.
00;19;48;13 - 00;19;51;13
And he was like, no way I would. So,
00;19;52;15 - 00;19;54;13
I think that speaks to a lot of the tension.
00;19;54;13 - 00;19;55;24
Very, very telling.
00;19;55;24 - 00;19;57;29
I just want to add that
I think that that shows that
00;19;57;29 - 00;20;00;11
that these folks don't see
the rest of the world as really people.
00;20;01;15 - 00;20;03;15
And it sort of
00;20;03;15 - 00;20;06;19
reveals the lie of saying, well, we're
doing this for the benefit of humanity,
00;20;06;22 - 00;20;08;20
but these services are not services
00;20;08;20 - 00;20;10;21
that we would consider
good enough for our families.
00;20;10;21 - 00;20;13;21
And so everybody else doesn't count
the way our families count.
00;20;14;08 - 00;20;14;18
Right?
00;20;14;18 - 00;20;18;13
There's almost like this,
like self deification or something, right?
00;20;18;13 - 00;20;21;11
Like we're
in some way outside humanity.
00;20;21;11 - 00;20;23;17
Humanity is this project.
00;20;23;17 - 00;20;24;25
And we are these,
00;20;24;25 - 00;20;28;14
you know, saviors who are going to come in
and tell you what's good for you.
00;20;29;01 - 00;20;30;20
Exactly.
00;20;30;20 - 00;20;34;14
So, you know, in a couple of minutes,
I want to come back to this, this notion
00;20;34;14 - 00;20;35;17
and what we do about it.
00;20;35;17 - 00;20;40;05
But just just before we get too deep into,
you know, what each of us can do.
00;20;40;22 - 00;20;44;09
I wanted to come back to something
you mentioned earlier, which is AGI.
00;20;45;01 - 00;20;47;10
And as we look at some of the,
00;20;47;10 - 00;20;49;26
you know, the big promises
00;20;49;26 - 00;20;53;19
or the big,
you know, stories around, you know,
00;20;53;19 - 00;20;56;23
technology right now, two of the big ones,
we talk about are AGI.
00;20;57;01 - 00;21;00;12
And on the opposite end of the spectrum,
this, you know, p doom
00;21;00;12 - 00;21;04;04
or probability of doom that it's going to,
you know, wipe out the human race.
00;21;04;07 - 00;21;05;01
What what you know,
00;21;05;01 - 00;21;08;12
what's your outlook around,
you know, both of these in the next.
00;21;08;12 - 00;21;08;21
Yeah.
00;21;08;21 - 00;21;11;21
You know, in
any sort of reasonable time horizon.
00;21;11;26 - 00;21;13;29
I mean any time horizon at all. Right.
00;21;13;29 - 00;21;15;25
So it's all fiction.
00;21;15;25 - 00;21;17;23
And one of the things
that's very frustrating to me
00;21;17;23 - 00;21;20;18
is the way that the doomers,
the people who think this is
00;21;20;18 - 00;21;22;19
going to be the end of humanity
and the boosters,
00;21;22;19 - 00;21;24;28
the ones who think it's going to solve
all of our problems,
00;21;24;28 - 00;21;27;29
present themselves
as like two ends of a spectrum.
00;21;28;07 - 00;21;31;00
And the media picks
this up and sort of amplifies
00;21;31;00 - 00;21;34;05
it and gives the idea that there's
just one spectrum,
00;21;34;05 - 00;21;37;08
so you're either fully doomer
or fully booster or somewhere in between.
00;21;37;16 - 00;21;39;20
But in fact, it's
two sides of the same coin.
00;21;39;20 - 00;21;41;26
So the doomers say,
00;21;41;26 - 00;21;45;14
artificial intelligence or artificial
general intelligence is a thing.
00;21;45;20 - 00;21;48;11
It's imminent. It's inevitable.
00;21;48;11 - 00;21;49;17
And it's going to kill us all.
00;21;49;17 - 00;21;52;22
And the boosters say AI
slash AGI is a thing.
00;21;52;27 - 00;21;55;27
It's inevitable, it's imminent, and it's
going to solve all of our problems.
00;21;55;28 - 00;21;57;19
And putting it that way,
I hope it makes it very clear
00;21;57;19 - 00;22;00;13
that these are actually the same position,
and there's no daylight between them.
00;22;00;13 - 00;22;04;06
And it's just once
you've gone down this fantasy path
00;22;04;10 - 00;22;06;03
a question of which turn you take at the end.
00;22;06;03 - 00;22;06;12
Yeah.
00;22;06;12 - 00;22;09;24
And
just to put a finer point on it, too:
00;22;10;06 - 00;22;13;22
there's a notion of doom
and there's a notion of hope.
00;22;13;22 - 00;22;17;03
And I think
to what we initially want to say
00;22;17;03 - 00;22;21;17
is we reject this probability
framing at all, as if one can
00;22;21;17 - 00;22;25;26
imagine this as the question
of completely fake probabilities.
00;22;26;20 - 00;22;29;10
And the thing that really sets me off,
especially
00;22;29;10 - 00;22;32;10
about, folks like the rationalists.
00;22;32;17 - 00;22;33;13
So, someone like,
00;22;37;00 - 00;22;38;29
Daniel Kokotajlo —
00;22;38;29 - 00;22;42;26
and I've completely
mangled his name —
00;22;43;03 - 00;22;46;10
but one of the authors of the AI
2027 document
00;22;46;22 - 00;22;51;22
and others of his colleagues —
is that they're putting completely fake
00;22;51;22 - 00;22;55;03
probabilities on completely fake events.
00;22;56;20 - 00;22;58;23
And it's really rich for folks
00;22;58;23 - 00;23;01;27
that really envision themselves
to be very empirically minded.
00;23;01;27 - 00;23;06;15
It's just very much
just made up kinds of time horizons.
00;23;06;15 - 00;23;08;24
And it's really frustrating.
00;23;08;24 - 00;23;12;09
And Emily and I are both social scientists
and are both empiricists.
00;23;12;09 - 00;23;15;09
So that's actually very weird to see.
00;23;16;02 - 00;23;17;05
So that's one thing about that.
00;23;17;05 - 00;23;21;22
And then AGI itself serves
as this very nebulous concept as well,
00;23;21;22 - 00;23;24;22
this kind of idea
that there is some kind of an
00;23;25;09 - 00;23;26;15
intelligence that
is going to be
So it's going to be,
very capable at many different tasks.
00;23;31;25 - 00;23;33;14
And we're
00;23;33;14 - 00;23;36;14
not really sure how well defined that is.
00;23;36;14 - 00;23;38;05
I mean, it doesn't seem
really well-defined at all.
00;23;38;05 - 00;23;41;09
We just recently published
00;23;41;09 - 00;23;46;00
a piece at Tech
Policy Press called The Myth of AGI
00;23;46;07 - 00;23;49;13
that was focusing
specifically on
00;23;50;18 - 00;23;53;18
the idea that this notion is nebulous,
00;23;54;00 - 00;23;56;00
and that it has a very particular
00;23;56;00 - 00;23;59;00
view of the world is wildly, scoped.
00;23;59;07 - 00;24;01;29
And somehow there's supposed to be,
00;24;01;29 - 00;24;06;13
you know, OpenAI or Anthropic
being the particular organization
00;24;06;13 - 00;24;10;02
that is going
to ensure that we receive this.
00;24;10;22 - 00;24;14;05
And so that itself is,
you know, like very,
00;24;15;09 - 00;24;17;01
very wrongheaded.
00;24;17;01 - 00;24;18;13
Right.
00;24;18;13 - 00;24;20;06
So, no, it's interesting.
00;24;20;06 - 00;24;23;27
And I'm just reflecting on that
a little bit because if you
00;24;24;23 - 00;24;28;29
if you take all that and sort of
synthesize it, most of what
00;24;28;29 - 00;24;32;16
we're hearing in this space is,
you know, as you described, a con, right?
00;24;32;16 - 00;24;33;18
It's marketing.
00;24;33;18 - 00;24;34;19
It's nonsense.
00;24;34;19 - 00;24;37;25
I mean, I'm not going to ask you
if you think this is a bubble.
00;24;37;25 - 00;24;40;19
I think that the writing
is on the wall there.
00;24;40;19 - 00;24;45;00
But I did want to ask you, you know,
do you think this bubble is going to burst
00;24;45;07 - 00;24;48;01
and is it already on a trajectory
to burst?
00;24;48;01 - 00;24;51;15
Do we, as you know, individuals
00;24;51;15 - 00;24;55;16
and maybe as leaders need to do something
differently for it to burst?
00;24;55;16 - 00;24;58;16
And you know, assuming it does, what
do we think that's going to look like?
00;24;58;23 - 00;25;00;20
Yeah, I mean it's going to burst.
00;25;00;20 - 00;25;05;11
And I think it's not a question of
if, it's a question of when.
00;25;05;19 - 00;25;08;11
And
00;25;08;11 - 00;25;11;04
moreover, it's
a question of how, right?
00;25;11;04 - 00;25;14;19
And so there's a few different ways
this bubble could burst, or burst big.
00;25;15;02 - 00;25;20;08
And we got sort of the inklings of that
when we saw the freak-out around DeepSeek,
00;25;20;19 - 00;25;24;11
in which Nvidia lost
a whole bunch of value in its stock.
00;25;25;13 - 00;25;25;22
And they
00;25;25;22 - 00;25;28;22
said, oh, maybe we don't need
these massive data centers to do this.
00;25;28;22 - 00;25;31;15
But the bigger worry
00;25;31;15 - 00;25;34;15
here is not that there are going to be
these really more efficient models.
00;25;34;20 - 00;25;39;17
It's that it's not going to solve
productivity or solve wages
00;25;39;25 - 00;25;42;25
in the way that the boosters
think it will.
00;25;43;05 - 00;25;46;05
And so a lot of,
you know, a lot of people have already
00;25;46;12 - 00;25;49;20
said, well, there's a huge revenue bubble
that needs to be,
00;25;50;14 - 00;25;53;12
you know, that needs to be,
00;25;53;12 - 00;25;55;09
sorry, there's a huge debt bubble
00;25;55;09 - 00;25;58;11
in terms of all the
00;25;59;10 - 00;26;03;12
GPUs and the data centers and all that
infrastructure that needs to get paid for.
00;26;04;09 - 00;26;08;14
And at some point,
the facade is going to come down.
00;26;08;14 - 00;26;12;16
And is that going to be something like
the first AI winter in which, you know,
00;26;12;16 - 00;26;16;18
there was the Lighthill report
in which they said, okay,
00;26;16;27 - 00;26;20;08
this isn't panning out,
no more government funding,
00;26;20;19 - 00;26;23;17
or is it going to be something
that looks a lot more like Uber,
00;26;23;17 - 00;26;26;17
where more money gets thrown at it
00;26;26;19 - 00;26;30;28
over and over and over again,
until finally a profit gets turned?
00;26;32;03 - 00;26;34;18
The thing is, so much
00;26;34;18 - 00;26;37;21
money is going to get thrown at it,
and it's not going to turn a profit.
00;26;37;25 - 00;26;40;25
The revenue margins have been so low
00;26;41;01 - 00;26;44;07
and the investment has been so high.
00;26;44;17 - 00;26;47;17
So it's either going to happen slowly
or it's going to happen quickly.
00;26;48;07 - 00;26;51;07
To your question of what
we should be doing about it, I think that,
00;26;51;16 - 00;26;53;01
certainly we should be resisting the hype.
00;26;53;01 - 00;26;54;06
And that's part of our goal
00;26;54;06 - 00;26;58;07
in writing this book, is to help people
articulate their objections to the hype.
00;26;58;07 - 00;27;02;22
But I think also it's really important
to not let systems that we rely on
00;27;02;22 - 00;27;06;13
get rebuilt around the false promises
of artificial intelligence,
00;27;06;21 - 00;27;10;02
because it's going to be harming us
while the bubble is still going.
00;27;10;02 - 00;27;12;01
And instead of getting, you know,
00;27;13;06 - 00;27;16;20
actual thoughtful medical notes
at the end of a doctor's appointment,
00;27;16;20 - 00;27;19;28
we get the output of a synthetic text
extruding machine with some errors
00;27;19;28 - 00;27;23;22
and the doctor saying, well,
it's not my fault if it's wrong.
00;27;24;03 - 00;27;24;24
The system did it.
00;27;24;24 - 00;27;27;20
And, you know, medical
providers are under a lot of stress.
00;27;27;20 - 00;27;30;27
And in many cases this would happen
because their employer would say, well,
00;27;30;27 - 00;27;32;19
you got to see three extra people in a day
00;27;32;19 - 00;27;34;28
now because you're not spending time
doing the clinical notes or whatever.
00;27;34;28 - 00;27;37;28
So it can be harmful
while it's still going on.
00;27;38;00 - 00;27;41;06
But if you think ahead to
when the systems fall apart
00;27;41;06 - 00;27;44;07
and the vendors aren't there anymore,
so what?
00;27;44;08 - 00;27;46;05
How much have things been restructured?
00;27;46;05 - 00;27;48;25
How much have people's jobs been changed?
00;27;48;25 - 00;27;53;11
Either people being asked to do more
in the same amount of time, or people's
00;27;53;11 - 00;27;56;20
jobs being turned from stable jobs
into very casualised
00;27;56;20 - 00;28;00;08
gig work jobs,
because the AI supposedly could do it.
00;28;00;08 - 00;28;02;23
And so the more we can resist that
restructuring,
00;28;02;23 - 00;28;04;00
the better off we're going to be,
00;28;04;00 - 00;28;08;02
sort of regardless of when the scales
finally fall from people's eyes.
00;28;08;02 - 00;28;10;12
And we're not forced to have this
everywhere.
00;28;10;12 - 00;28;14;10
Moreover — and one thing
we mentioned in the book, and Emily took up
00;28;14;19 - 00;28;18;27
a piece of that — is we have a piece called
"The grimy residue of the AI bubble,"
00;28;19;08 - 00;28;24;07
and the job loss is one thing,
but we can think of two other things
00;28;24;07 - 00;28;28;06
that are going to be left over if we don't
try to do some mitigation right now.
00;28;28;06 - 00;28;31;05
One of them is the environmental damage
that's already been done,
00;28;31;15 - 00;28;34;17
both in terms of the carbon
that's been put into the atmosphere,
00;28;34;25 - 00;28;37;25
the water that's been used for data center
cooling.
00;28;37;27 - 00;28;40;27
Also the types of,
00;28;40;28 - 00;28;44;16
externalities
in terms of air pollution and
00;28;45;17 - 00;28;46;18
forever chemicals that are
00;28;46;18 - 00;28;49;23
put into the ground from semiconductor
construction.
00;28;50;14 - 00;28;51;29
So those are hard to reverse.
00;28;51;29 - 00;28;56;18
So we're already on track to blow past
the goals of the Paris climate agreement.
00;28;57;04 - 00;29;00;19
And then the kind of spills
in the information ecosystem —
00;29;00;19 - 00;29;03;18
the idea that we already have
so much synthetic text out there,
00;29;04;13 - 00;29;06;10
it's going to be hard to suss out,
you know,
00;29;06;10 - 00;29;09;23
what is synthetic text
and what is not synthetic text.
00;29;10;05 - 00;29;14;22
And we already see that battle
being fought in places like Wikipedia,
00;29;14;22 - 00;29;19;14
which is trying to fight the onslaught
of the LLM-generated and therefore
00;29;20;18 - 00;29;23;08
not very trustworthy outputs
00;29;23;08 - 00;29;26;10
of synthetic text and media machines.
00;29;27;02 - 00;29;30;12
So that's one part. Thinking more
of what we can do about it —
00;29;30;12 - 00;29;32;19
well, one thing is trying to
cool off investment,
be more cool on investment,
especially in data centers.
00;29;36;20 - 00;29;39;27
That is harming communities
in the here and now.
00;29;40;05 - 00;29;44;16
And whether it's the data center
operating in southwest Memphis,
00;29;44;29 - 00;29;49;03
in Boxtown, a
predominantly Black and poor neighborhood.
00;29;49;28 - 00;29;54;22
Or whether it is, areas
like Northern Virginia and Loudoun County
00;29;55;04 - 00;29;59;11
or the outskirts of Atlanta, where more
and more data centers are being built,
00;30;00;01 - 00;30;05;00
specifically relying on fossil
fuel powered power plants.
00;30;05;17 - 00;30;05;28
Yeah.
00;30;05;28 - 00;30;09;09
I'm, I'm again,
just just kind of reflecting on that.
00;30;09;09 - 00;30;10;11
Alex, the
00;30;10;11 - 00;30;12;05
I have the same concern
about the environment,
00;30;12;05 - 00;30;14;19
and I'm tying
it back to your comment about Uber,
00;30;14;19 - 00;30;18;10
because the thing that
really gets me concerned about this is,
00;30;18;18 - 00;30;22;06
you know, Uber is kind of throwing good
money after bad for a long time, right?
00;30;22;06 - 00;30;23;16
It's the old joke of,
00;30;23;16 - 00;30;25;22
you know, we're losing money
on every interaction.
00;30;25;22 - 00;30;27;23
But we'll make it up at scale. Right.
00;30;27;23 - 00;30;31;11
But the the thing that worries me
about this, when you combine it with that
00;30;31;11 - 00;30;34;18
kind of arms race narrative, is it
feels like it's not even linear.
00;30;34;18 - 00;30;36;12
It's like exponential, right?
00;30;36;12 - 00;30;38;18
It's like everybody's saying like,
00;30;38;18 - 00;30;41;03
we need more and more data, more
and more data centers.
00;30;41;03 - 00;30;45;04
Like, I can very easily picture
kind of an asymptotic curve
00;30;45;04 - 00;30;48;11
where while we still haven't found
a solution, it's still not profitable.
00;30;48;11 - 00;30;49;05
And so.
00;30;49;05 - 00;30;54;03
Well, let's just try throwing ten times as
much energy or 100 times as much energy.
00;30;54;13 - 00;30;58;13
And, you know, either
this whole thing collapses as fake
00;30;58;13 - 00;31;00;07
or we do an awful lot of damage.
00;31;01;21 - 00;31;04;12
You know, in the interim there. So,
00;31;04;12 - 00;31;06;17
you know that that's certainly
a piece that resonates with me.
00;31;06;17 - 00;31;08;07
And, you know, Emily,
I was certainly thinking about,
00;31;08;07 - 00;31;14;03
you know, your comments on automation
and the need for us to be more thoughtful
00;31;14;03 - 00;31;17;19
about, you know, where we automate
and how we make sure we're not just
00;31;17;19 - 00;31;21;07
replacing a solid foundation
with a flimsy foundation.
00;31;21;10 - 00;31;25;20
One of the things about these ever larger
data centers, data sets,
00;31;26;00 - 00;31;29;06
and so on, is that it becomes a metric
00;31;29;06 - 00;31;32;21
that the people who are spending
the money can say, look, we made it bigger
00;31;32;28 - 00;31;37;10
because what they're trying to build
is actually not well formed.
00;31;37;10 - 00;31;38;13
It's not well conceived.
00;31;38;13 - 00;31;40;03
And so you can't evaluate
how close you are
00;31;40;03 - 00;31;41;21
to building the thing
you're trying to build.
00;31;41;21 - 00;31;43;15
And so they've got something
they can measure instead.
00;31;43;15 - 00;31;46;16
And that thing is in fact
environmentally quite ruinous and built on
00;31;46;16 - 00;31;47;19
stolen labor and so on.
00;31;48;20 - 00;31;48;29
Right.
00;31;48;29 - 00;31;51;15
It's that more parameters,
more computations.
00;31;51;15 - 00;31;54;15
And, you know, that must be good
because it's because it's bigger.
00;31;55;04 - 00;31;57;12
I wanted to come back to the,
00;31;57;12 - 00;32;00;27
you know, and put a finer point
on sort of the adoption piece.
00;32;00;27 - 00;32;04;25
And I'm thinking specifically for,
you know, leaders of industry for,
00;32;04;25 - 00;32;07;27
for organizational leaders
who are, you know, very much,
00;32;07;27 - 00;32;10;11
you know, everywhere you turn, I'm
sure you are exposed to it too.
00;32;10;11 - 00;32;13;10
AI this, AI that; if you don't do
it, you're behind the curve.
00;32;13;10 - 00;32;16;04
Do you have any specific guidance,
00;32;16;04 - 00;32;19;20
you know, for these people
in terms of what they can do to,
00;32;20;14 - 00;32;23;13
I don't know, maybe be more responsible
here?
00;32;23;13 - 00;32;27;06
Does that mean just fully saying,
you know, no automation, none of this?
00;32;27;06 - 00;32;29;01
Or is there an approach they can take
00;32;29;01 - 00;32;32;14
that's just going to yield better results
and protect them from some of this?
00;32;32;25 - 00;32;33;15
You know.
00;32;33;15 - 00;32;35;19
There's certainly a time
and a place for automation,
00;32;35;19 - 00;32;37;05
but you want to automate something
00;32;37;05 - 00;32;40;06
when you can very specifically say
what it is that you're automating
00;32;40;15 - 00;32;44;07
and you have very good reason
to believe that the information needed for
00;32;44;07 - 00;32;45;18
the output is in the input;
00;32;45;18 - 00;32;50;03
when you can evaluate how well it works
in your current use case; when you have
00;32;50;09 - 00;32;55;04
sufficient recourse for someone
who's been harmed by the automation.
00;32;55;04 - 00;32;57;27
Because one of the things about automation
is that it scales whatever you're doing
00;32;57;27 - 00;33;00;27
and if it's getting it wrong in that run, and
getting it wrong is harmful to people,
00;33;00;27 - 00;33;02;20
and you do it
more and more and faster and faster.
00;33;02;20 - 00;33;04;21
You've got to be prepared
to make things right.
00;33;06;05 - 00;33;08;14
Or, you know, decide no, that's too
harmful.
00;33;08;14 - 00;33;09;27
It's not the kind of thing
that can be made right.
00;33;09;27 - 00;33;11;07
We're not going to go down that path.
00;33;11;07 - 00;33;13;28
So, you know, advice to leaders
who are making decisions here.
00;33;13;28 - 00;33;16;28
I would say, first of all, you know,
think about values.
00;33;17;18 - 00;33;20;03
And I know that
for many people in business,
00;33;20;03 - 00;33;22;01
the only value that matters is shareholder
value.
00;33;22;01 - 00;33;24;10
And the way to get shareholder value
right now is to promise AI.
00;33;24;10 - 00;33;27;27
So setting that aside to say, okay,
well what are our other values?
00;33;28;15 - 00;33;33;06
And to what extent does
this automation actually speak to them.
00;33;33;06 - 00;33;38;06
And especially thinking about, you know,
what could go wrong when you put —
00;33;38;15 - 00;33;40;11
if you're talking about something
like ChatGPT,
00;33;40;11 - 00;33;44;08
a synthetic text extruding machine —
into the middle of a sensitive process.
00;33;44;15 - 00;33;48;21
You know, how might that impact
your company's reputation?
00;33;49;09 - 00;33;52;09
Or what it
is that you say that you stand for?
00;33;52;11 - 00;33;55;25
And think about sort of like the,
the long term durability of the systems
00;33;55;25 - 00;33;56;23
you're putting in place.
00;33;56;23 - 00;33;59;11
Is this still going to work — would
this still work
00;33;59;11 - 00;34;03;01
if OpenAI went belly up or didn't
have access to electricity anymore?
00;34;03;08 - 00;34;05;21
Would this still work, you know, if
00;34;06;20 - 00;34;07;17
it turned out that
00;34;07;17 - 00;34;10;17
large language models aren't all that
and so on.
00;34;10;19 - 00;34;11;17
Just a few things.
00;34;11;17 - 00;34;13;26
I mean, there's just a few data points
I want to add to that.
00;34;13;26 - 00;34;17;25
So there was a survey
that was done by an org planning
00;34;17;25 - 00;34;20;24
platform in the UK called Orgvue.
00;34;21;01 - 00;34;25;05
They found that 55% of companies
that replaced workers with
00;34;25;05 - 00;34;28;05
AI regret the decision.
00;34;28;18 - 00;34;31;18
So we already
have some buyer's remorse happening.
00;34;31;19 - 00;34;34;28
And I'm assuming that there was a notion
that there would be
00;34;36;17 - 00;34;40;09
workers — that the workers
that still were there
00;34;40;09 - 00;34;44;08
could be using AI tools
to shore up the difference.
00;34;44;08 - 00;34;47;08
But that is just not the case.
00;34;48;04 - 00;34;52;23
The largest study that's been done or
one of the largest studies, I'm assuming,
00;34;53;17 - 00;34;57;20
but I don't know if there's been a larger
study — was done
00;34;57;20 - 00;35;00;23
in Denmark, of
00;35;00;23 - 00;35;04;13
25,000 workers across 7000 organizations,
00;35;05;06 - 00;35;09;22
suggested that there were very modest
productivity gains,
00;35;10;17 - 00;35;14;12
by workers
using artificial intelligence tools,
00;35;15;08 - 00;35;17;16
at the rate of 3%.
00;35;17;16 - 00;35;21;27
But those gains were offset
by the new labor
00;35;21;27 - 00;35;24;10
displacing tasks
that they had to deal with.
00;35;24;10 - 00;35;28;11
There was also no increase in earnings,
for those workers.
00;35;28;27 - 00;35;33;16
And so these are not proving to be
00;35;33;16 - 00;35;37;21
the kind of huge,
amazing
00;35;37;21 - 00;35;41;02
productivity gains that the companies
are making them out to be.
00;35;41;20 - 00;35;45;14
So the message to business leaders,
I would say, is really invest
00;35;45;14 - 00;35;49;11
in your people and really thinking about
what your people are doing.
00;35;49;17 - 00;35;50;25
How can you better support
00;35;50;25 - 00;35;53;25
people who are already doing
the work that you need done?
00;35;53;28 - 00;35;58;25
If people are having an issue
with being productive,
00;35;58;28 - 00;36;02;16
ask what are the ways in which
they can be supported organizationally
00;36;02;29 - 00;36;06;10
and thinking about this kind of thing
with an organizational sociologist
00;36;06;10 - 00;36;09;10
hat on, and what are the ways
in which
00;36;09;12 - 00;36;12;25
you know,
the product can be made better by that.
00;36;13;25 - 00;36;14;24
The reason why
00;36;14;24 - 00;36;19;09
large language models are very attractive
is because, if your values,
00;36;19;09 - 00;36;22;18
as Emily suggests, are very much only for
00;36;23;11 - 00;36;26;11
your shareholders —
00;36;26;11 - 00;36;29;18
shareholders often like to see layoffs,
because that means you can,
00;36;30;08 - 00;36;33;08
be making more profit per headcount.
00;36;33;15 - 00;36;38;01
But if your product is not
keeping up with the task and staying
00;36;38;01 - 00;36;41;24
as high quality as it was, then that's
going to be a huge issue.
00;36;42;27 - 00;36;46;04
And so there are
some really interesting cases
00;36;46;04 - 00;36;49;29
in which some companies are leaning into
not using AI and saying that,
00;36;49;29 - 00;36;52;29
no, we're actually giving you
a high touch experience
00;36;53;00 - 00;36;57;17
and that is very important to signify,
because I think
00;36;57;17 - 00;37;01;15
a lot of people are culturally saying
that AI is shoddy and jank.
00;37;01;15 - 00;37;04;15
And, as Tressie
McMillan Cottom said, it is mid.
00;37;04;20 - 00;37;07;09
The outputs are not really verifiable,
00;37;08;29 - 00;37;10;21
or if
you can verify them,
if you can verify them,
it is very labor-intensive to do so.
00;37;14;27 - 00;37;20;08
The outputs of images are very shoddy
and take a lot of labor to correct,
00;37;20;08 - 00;37;23;08
especially down in the,
in the supply chain.
00;37;23;29 - 00;37;26;19
And this is overall
of that value proposition.
00;37;26;19 - 00;37;29;20
So when you paint
that picture, Alex,
00;37;30;00 - 00;37;32;02
you know, I was thinking back
to what you were talking about
00;37;32;02 - 00;37;35;15
earlier in EdTech
and our conversation earlier about
00;37;36;10 - 00;37;39;22
you know, this sense
that this technology is going
00;37;39;22 - 00;37;43;26
to open up these whole new audiences
for what's going on here.
00;37;43;26 - 00;37;47;01
And you also mentioned and I've seen
first hand, by the way, that,
00;37;47;24 - 00;37;49;24
oh, you know, there's companies
not using AI.
00;37;49;24 - 00;37;52;09
And you get this, this hands on approach.
00;37;52;09 - 00;37;56;18
You know, my concern when I read
all of this together is that sure,
00;37;56;19 - 00;38;00;03
there are companies that will say, yes,
there's no AI, it's human only.
00;38;00;22 - 00;38;04;08
But of course they'll look at that
with, you know, money bags
00;38;04;08 - 00;38;07;18
in their eyes and say, well,
now of course there's this hefty premium
00;38;07;18 - 00;38;11;02
for anything with no AI, or for human,
you know, for human touch.
00;38;11;11 - 00;38;14;27
And so we end up with this world
that's, you know, even more extreme, where
00;38;15;04 - 00;38;20;08
it's AI slop for the masses
that we know is mid or low quality.
00;38;20;14 - 00;38;24;00
And then you have to pay a premium for,
you know,
00;38;24;19 - 00;38;25;24
the quality of service
00;38;25;24 - 00;38;28;24
that you're either getting today
or certainly were getting a few years ago,
00;38;29;05 - 00;38;29;11
yeah.
00;38;29;11 - 00;38;33;07
We were talking about edtech,
and this is, you know, as I look across
00;38;33;07 - 00;38;37;10
the spectrum of not just corporations
but also social services,
00;38;37;10 - 00;38;40;10
it feels like there's
this erosion
00;38;40;28 - 00;38;44;07
of quality or, you know, some people
call it, like, enshittification.
00;38;45;08 - 00;38;47;24
And so, you know, to what degree are
the two of
00;38;47;24 - 00;38;51;20
you seeing this in your research across
some of these different sectors?
00;38;51;20 - 00;38;55;00
And, you know, in your minds,
is there a way that we can reverse this?
00;38;56;21 - 00;38;58;15
Yeah, I mean, it's a good question.
00;38;58;15 - 00;39;01;25
And I think we're seeing a little bit of
kind of a rush.
00;39;02;08 - 00;39;05;14
You know, there's definitely this
enshittification process, this term that
00;39;06;03 - 00;39;08;29
Cory Doctorow coined.
00;39;08;29 - 00;39;10;24
The thing about it
00;39;10;24 - 00;39;13;25
is thinking about what we should do
to reverse it.
00;39;13;25 - 00;39;16;24
I mean, one thing is that
00;39;16;24 - 00;39;21;02
what we hope, and I think one of
the hopes of the book, is to suggest that
00;39;22;00 - 00;39;24;10
the kind of LLM output
00;39;24;10 - 00;39;27;15
is just not up to snuff for any critical
task.
00;39;27;15 - 00;39;28;08
Right.
00;39;28;08 - 00;39;31;25
And so, you know,
if there's synthetic output,
00;39;31;25 - 00;39;35;27
it is just seen as either
scammy or spammy.
00;39;36;07 - 00;39;40;11
And the main place
where it is useful, as well,
00;39;40;21 - 00;39;43;21
is in scams and in spam.
00;39;43;28 - 00;39;49;10
And that's because it's
sort of this thing where
00;39;49;17 - 00;39;54;24
you don't really care
about the quality of the synthetic text.
00;39;56;09 - 00;39;57;03
And you don't
00;39;57;03 - 00;40;00;09
care whether any of these things within
that text
00;40;00;17 - 00;40;04;03
has any verifiable
truth, has any truth claims in it.
00;40;04;09 - 00;40;05;22
You don't really care
about the truth value.
00;40;05;22 - 00;40;09;19
So there's a paper called
ChatGPT is bullshit, right?
00;40;09;19 - 00;40;13;05
And it uses the Harry Frankfurt
definition of bullshit.
00;40;13;05 - 00;40;17;02
The idea that the bullshitter doesn't care
about the truth values of their claims,
00;40;17;02 - 00;40;19;04
they're just trying to reach their goal.
00;40;19;04 - 00;40;20;00
So, therefore,
00;40;20;00 - 00;40;23;05
Trump, like,
is the bullshitter par excellence,
00;40;23;05 - 00;40;26;08
because it's just like,
get the deal.
00;40;26;08 - 00;40;27;07
Right.
00;40;27;07 - 00;40;32;29
And so to some degree, you know,
like, we can see that any kind of
00;40;32;29 - 00;40;38;11
synthetic output is just that kind of,
you know, detritus, this kind of waste.
00;40;38;25 - 00;40;42;13
And so I don't know about humans
as being the value differentiator.
00;40;42;13 - 00;40;45;14
It's more like, well,
in the best of cases,
00;40;45;18 - 00;40;49;07
anybody that's doing anything
that's worth its salt should be,
00;40;50;05 - 00;40;53;20
you know, doing it with humans
in the loop in a meaningful way.
00;40;54;17 - 00;40;58;07
I just want to add there that, you know,
you said we might end up in a situation
00;40;58;07 - 00;41;01;16
where the current status quo
becomes the luxury tier,
00;41;01;16 - 00;41;04;22
where you actually have people involved,
and then everybody else gets
00;41;04;22 - 00;41;05;15
pushed down to, well,
00;41;05;15 - 00;41;06;06
you've got to deal
00;41;06;06 - 00;41;10;08
with the crappy synthetic system.
And there's no magic bullet here.
00;41;10;08 - 00;41;12;05
We just have to resist it
at every turn. Right?
00;41;12;05 - 00;41;15;01
If this is coming into your school system,
say no. Right?
00;41;15;01 - 00;41;17;10
If this is coming into your workplace,
say no.
00;41;17;10 - 00;41;20;09
And that's part of what we're trying to do
in this book, is to empower people
00;41;20;13 - 00;41;24;20
to use no at every turn
and to recognize it when it's happening.
00;41;25;11 - 00;41;29;05
And also, you know,
the no can be firm, it can be angry,
00;41;29;05 - 00;41;30;28
but it can also be humorous.
00;41;30;28 - 00;41;32;04
And this is where we recommend
00;41;32;04 - 00;41;36;01
ridicule as praxis, and saying, this is
low value, it's fake, it's bad,
00;41;36;08 - 00;41;39;14
it's,
you know, mid, all of these things,
00;41;40;06 - 00;41;43;13
because sometimes for the people
who are making the decisions,
00;41;43;24 - 00;41;47;08
it's not moneybags they have in their eyes
but stars, and in particular
00;41;47;12 - 00;41;50;10
the sparkle emoji that got appropriated
00;41;50;10 - 00;41;53;10
by the tech companies for this stuff.
00;41;53;14 - 00;41;55;20
It looks like magic.
And so, I want the magic.
00;41;55;20 - 00;41;58;20
And so to empower people
to educate those around us
00;41;58;20 - 00;42;01;22
so that we collectively make better
decisions is really important.
00;42;02;05 - 00;42;04;15
It makes complete sense to me.
00;42;04;15 - 00;42;06;29
I wanted to ask the two of you
a slightly different question.
00;42;06;29 - 00;42;11;07
So one of the things I normally ask
guests that I speak to here is,
00;42;11;07 - 00;42;12;28
you know,
I ask them what they think is bullshit.
00;42;12;28 - 00;42;15;28
And I'm not going to ask the two of you
that, because I think we've
00;42;16;01 - 00;42;18;27
spent quite enough time talking about,
you know, what is bullshit.
00;42;18;27 - 00;42;19;21
And, you know,
00;42;19;21 - 00;42;23;11
I know we've got some strong and,
you know, well supported views here.
00;42;23;19 - 00;42;27;07
I wanted to flip the question around
and ask, you know, in this sphere,
00;42;27;29 - 00;42;29;17
what isn't bullshit?
00;42;29;17 - 00;42;31;12
What are you excited about?
00;42;31;12 - 00;42;35;26
What's a good use or leverage of
generative AI or some of these,
00;42;36;13 - 00;42;40;12
you know, newer or modern technologies?
And, you know, is it really a case of
00;42;40;16 - 00;42;41;23
say no to everything?
00;42;41;23 - 00;42;44;19
Just shut it off,
you know, full stop,
00;42;44;19 - 00;42;47;28
you know, hands
over your eyes and ears? Or are there,
00;42;47;28 - 00;42;51;25
you know, very specific, very targeted
use cases where, you know,
00;42;52;07 - 00;42;55;07
there's potential for,
you know, excitement and value?
00;42;55;20 - 00;43;00;19
So I'm very excited
about the uses of language technology
00;43;00;27 - 00;43;03;22
that are for community empowerment.
00;43;03;22 - 00;43;08;07
And I'm being very specific here:
I'm not saying LLMs and I'm not saying
00;43;08;07 - 00;43;13;18
diffusion models
or generative AI, but the cases in which
00;43;13;27 - 00;43;18;08
there are things that empower communities
to do things which serve the community.
00;43;18;08 - 00;43;22;02
So an example that we talk about in
the book is the example of Te Hiku Media,
00;43;22;25 - 00;43;27;03
in which they provide machine translation
and automatic speech
00;43;27;03 - 00;43;31;04
recognition tools for te reo Māori,
the Māori language.
00;43;31;27 - 00;43;35;01
And the thing that's exciting about that
00;43;35;09 - 00;43;38;07
is that
the people in the community
00;43;38;07 - 00;43;41;29
have control over
which data gets used for training models.
00;43;42;08 - 00;43;45;02
They ask their community elders about
00;43;45;02 - 00;43;48;02
what data can be used.
00;43;48;09 - 00;43;50;14
Certain data can't be used.
00;43;50;14 - 00;43;53;17
And those are tools in which the data
00;43;53;17 - 00;43;56;29
is owned by the community,
as well as the computing power itself.
00;43;57;10 - 00;43;59;27
So this is kind of like the anti-OpenAI.
00;43;59;27 - 00;44;04;00
The idea is that instead of building this big
everything machine for every language
00;44;04;00 - 00;44;09;05
everywhere, you have a very narrowly
scoped task that works
00;44;09;05 - 00;44;14;05
for the specific community. And building
more Te Hiku Medias is amazing.
00;44;14;05 - 00;44;18;03
And it's one thing that they're trying
to do in empowering a network
00;44;18;03 - 00;44;22;18
of individuals, through a federation
00;44;23;08 - 00;44;24;29
that is just getting off the ground now.
00;44;26;04 - 00;44;27;27
And those efforts are really
00;44;27;27 - 00;44;31;26
focused
on building the imagination of folks
00;44;32;02 - 00;44;35;11
and basically doing it in a way
that is not environmentally ruinous,
00;44;35;16 - 00;44;38;16
that does not rely on data theft,
00;44;38;24 - 00;44;41;24
and is really providing
for a specific need.
00;44;42;00 - 00;44;45;25
So I would like to add there, I'm
a technologist, just like Alex.
00;44;46;05 - 00;44;49;01
I ran a professional master's program
in computational linguistics
00;44;49;01 - 00;44;51;16
training people
how to build language technologies.
00;44;51;16 - 00;44;55;02
So I definitely think there are good use
cases for things like language technology.
00;44;55;02 - 00;44;58;02
And the Te Hiku Media example is wonderful.
00;44;58;16 - 00;45;02;12
But I see no beneficial
use case of synthetic text.
00;45;03;06 - 00;45;05;13
And I actually look into this
from a research perspective.
00;45;05;13 - 00;45;08;21
I have a talk called "ChatGPT: When,
if ever, is synthetic text
00;45;08;21 - 00;45;12;03
safe, desirable and appropriate?",
or those adjectives in some order.
00;45;12;03 - 00;45;13;29
I don't remember the exact title.
00;45;13;29 - 00;45;16;25
And basically it has to be a situation
where, first of all,
00;45;16;25 - 00;45;21;05
you have created the synthetic text
extruding machine ethically.
00;45;21;05 - 00;45;23;21
So without environmental ruin,
00;45;23;21 - 00;45;27;07
without labor exploitation,
without data theft. We don't have that.
00;45;27;07 - 00;45;30;02
But assuming that we did,
you would still need to meet further criteria.
00;45;30;02 - 00;45;31;04
So it has to be a situation
00;45;31;04 - 00;45;34;22
where you either don't care
about the veracity of the output,
00;45;35;03 - 00;45;38;01
or it's one where you can check it
more efficiently
00;45;38;01 - 00;45;40;05
than just writing the thing
in the first place yourself.
00;45;40;05 - 00;45;42;24
It has to be a situation
where you don't care about originality,
00;45;42;24 - 00;45;44;15
because the way
the systems are set up,
00;45;44;15 - 00;45;47;27
you're not linked back to the source
where an idea came from.
00;45;48;15 - 00;45;51;24
And then thirdly, it has to be a situation
where you can effectively
00;45;51;24 - 00;45;54;28
and efficiently identify and mitigate
any of the biases that are coming out.
00;45;55;10 - 00;45;58;09
And I tried to find something
that would fit those categories.
00;45;58;09 - 00;46;02;18
And I don't. So certainly language
technology is useful.
00;46;02;21 - 00;46;06;06
Other kinds of well scoped technology
where it makes sense to go from X
00;46;06;06 - 00;46;10;05
input to Y output, and you've evaluated it
in your local situation.
00;46;10;05 - 00;46;10;23
Great.
00;46;10;23 - 00;46;14;08
But, you know,
the giant random eight ball, why?
00;46;15;09 - 00;46;16;25
But it's interesting, right?
00;46;16;25 - 00;46;20;10
And the why question
is very interesting to me.
00;46;20;10 - 00;46;20;19
Right.
00;46;20;19 - 00;46;23;18
Because I think most of what
you said there is not controversial.
00;46;23;18 - 00;46;23;25
Right.
00;46;23;25 - 00;46;27;11
It's,
you know, hugely energy intensive.
00;46;27;20 - 00;46;32;08
It's probably a lower quality
than what people would come up with
00;46;32;08 - 00;46;33;21
if it weren't synthetic.
00;46;33;21 - 00;46;36;13
And yet, you know,
you have to juxtapose that with the fact
00;46;36;13 - 00;46;40;18
that there is an awful lot of adoption
and people seem to be getting,
00;46;40;25 - 00;46;44;11
you know, value out of that, and,
you know, maybe just what that says
00;46;44;11 - 00;46;48;25
about us as people, that we're willing
to settle for, you know, something worse
00;46;48;25 - 00;46;52;00
because it's easier
and not worry about,
00;46;52;22 - 00;46;55;16
you know,
any of the details behind the scenes.
00;46;55;16 - 00;46;56;00
So it's.
00;46;56;00 - 00;46;58;12
Yeah, I'm
just still kind of thinking about that.
00;46;58;12 - 00;47;01;10
I mean, every time
someone says, well, I'm using it
00;47;01;10 - 00;47;04;27
because, you know, I don't have time,
or I'm using it because it's easier,
00;47;05;04 - 00;47;07;23
I think it's always worth asking, well,
why don't you have time?
00;47;07;23 - 00;47;11;06
And if it's easier,
what's the opportunity cost there?
00;47;11;06 - 00;47;14;22
What are you missing out on by
not actually connecting with a person
00;47;14;22 - 00;47;17;22
to have a conversation,
or thinking through something yourself?
00;47;17;24 - 00;47;20;24
And, you know, oftentimes
the source of the problem isn't
00;47;20;29 - 00;47;22;17
the person themselves
who made that decision,
00;47;22;17 - 00;47;25;14
but the structures
that put them into the corner
00;47;25;14 - 00;47;27;29
where it felt like
this was the best way out.
00;47;27;29 - 00;47;31;03
But what I got to tell deans
of universities
00;47;31;03 - 00;47;33;07
from the west coast of the US and Canada
00;47;33;07 - 00;47;38;19
last year was that the only use of ChatGPT
for a university is as a contrast dye test
00;47;38;19 - 00;47;43;09
to see where resources are missing.
Where students and staff turn to this,
00;47;43;09 - 00;47;47;02
it means something is lacking
in terms of what
00;47;47;02 - 00;47;50;20
they would need to actually fully engage
in the educational project,
00;47;50;27 - 00;47;54;00
and that information is of value
to administrators.
00;47;54;00 - 00;47;56;12
But like, that's it. That's the end of it.
00;47;56;12 - 00;48;00;18
I will also say, on the cases in which people
say, well, I find this very helpful,
00;48;00;18 - 00;48;02;15
I would also note that
00;48;02;15 - 00;48;06;28
I think that's pretty uneven
in terms of the people who are using it.
00;48;06;28 - 00;48;10;09
It's kind of a limited number of workers
who are using it.
00;48;10;09 - 00;48;14;06
So in this survey
Pew did, they found that 17% of
00;48;14;06 - 00;48;17;28
workers
were using LLMs at least some of the time.
00;48;19;01 - 00;48;22;01
And I think 1% were using them
all the time.
00;48;22;11 - 00;48;26;08
We're finding cases
in which I think there's a mismatch with reality,
00;48;26;08 - 00;48;29;29
in which business leaders especially
find them to be more useful
00;48;29;29 - 00;48;32;29
than many other people who are more junior
00;48;33;00 - 00;48;36;00
on the job ladder.
00;48;36;16 - 00;48;40;21
There was another story that came out
by Noam Scheiber in the New York Times
00;48;40;21 - 00;48;45;04
that talked about the uses of LLMs
at Amazon, effectively how
00;48;45;14 - 00;48;50;29
the deployment of LLMs has become
almost mandatory at Amazon
00;48;51;14 - 00;48;54;12
in the programming work,
00;48;54;12 - 00;48;59;20
and how even though more senior devs
appreciated that, more junior devs
00;48;59;20 - 00;49;03;14
were really being forced to use it,
and their work was looking
00;49;03;14 - 00;49;06;03
much more like factory work
than the kind of creative work
00;49;06;03 - 00;49;10;01
that often goes with software
engineering and programming.
00;49;10;18 - 00;49;15;01
And so I think that kind of notion
that it is kind of useful
00;49;15;15 - 00;49;18;07
is probably happening only
00;49;18;07 - 00;49;21;23
for a very narrow
set of workers and people.
00;49;22;15 - 00;49;26;23
Maybe it's happening more for students,
because they're being constrained and pushed
00;49;26;23 - 00;49;31;05
for time, to not engage in classes,
to the bane
00;49;32;01 - 00;49;34;07
of their instructors.
00;49;34;07 - 00;49;38;01
And we've talked to a lot of instructors
and are instructors ourselves.
00;49;38;18 - 00;49;41;18
And so that itself, I think,
00;49;41;28 - 00;49;44;28
is another area
where a lot of this is happening.
00;49;45;02 - 00;49;46;23
But between that narrative
00;49;46;23 - 00;49;50;19
and what's happening across most workers,
there's a big mismatch there.
00;49;51;04 - 00;49;54;14
So just following that thread
for a minute, Alex, on the worker side,
00;49;54;29 - 00;49;58;23
again,
one of the narratives we hear is that,
00;49;59;08 - 00;50;03;00
you know, there's kind of two sides
to adopting these technologies.
00;50;03;00 - 00;50;05;11
There's,
you know, organizations or enterprises
00;50;05;11 - 00;50;08;11
that can adopt them to try and drive
productivity.
00;50;08;15 - 00;50;09;16
Organizationally,
00;50;09;16 - 00;50;13;15
maybe that displaces workers, maybe
that's the do more with less mandate.
00;50;13;21 - 00;50;17;14
But then there's this other narrative
of as an individual worker,
00;50;17;27 - 00;50;22;28
you should adopt some of these,
you know, synthetic language tools
00;50;23;04 - 00;50;26;17
or some of these automation tools
because it's empowering for you.
00;50;26;23 - 00;50;30;06
It helps you take back,
you know, a modicum of control
00;50;30;06 - 00;50;33;09
from your employer
by making you more efficient.
00;50;33;09 - 00;50;34;06
Maybe it takes less
00;50;34;06 - 00;50;37;23
time to do what you're doing,
or you can do better quality work,
00;50;38;11 - 00;50;40;18
you know, in the same amount of time
that you're doing worse quality work now.
00;50;40;18 - 00;50;42;29
Now, do you think there's merit
to that argument,
00;50;42;29 - 00;50;44;21
or do you think
that's still worth resisting?
00;50;45;21 - 00;50;48;14
Yeah, I don't
think there's really merit to that.
00;50;48;14 - 00;50;53;02
I mean, I think the cases in which
these tools have been deployed
00;50;53;02 - 00;50;58;21
have been cases in
which there may be some marginal gains,
00;50;58;21 - 00;51;03;12
but you have to check
the output of this very meticulously.
00;51;03;24 - 00;51;07;05
There's been many, many different cases
in which people, like
00;51;07;05 - 00;51;10;05
lawyers, have been using tools like this
in legal briefs.
00;51;10;15 - 00;51;13;26
And those legal briefs have
00;51;13;27 - 00;51;16;27
had a lot of made-up case law. And
00;51;17;06 - 00;51;20;12
there's cases
in which journalists have used them,
00;51;20;12 - 00;51;24;03
and they have made up a bunch of books
that don't exist.
00;51;24;03 - 00;51;25;11
And, I think there was a case with the Chicago
Sun-Times, which I think they had,
00;51;29;15 - 00;51;30;24
some of that was in their pipeline,
00;51;30;24 - 00;51;33;24
and it wasn't even a journalist
attached to the Chicago Sun-Times,
00;51;34;04 - 00;51;36;01
but they had published a list of fake books.
00;51;36;01 - 00;51;38;22
And so these are kind
of coming up over and over.
of coming up over and over.
00;51;38;22 - 00;51;44;20
So if you have cases in which you're
using these tools with the intent
00;51;44;20 - 00;51;48;07
of being more productive, often
it is doing the opposite.
00;51;48;07 - 00;51;51;08
It's slowing workers down,
making them less productive.
00;51;51;16 - 00;51;55;02
There's also the case
that it makes you less collaborative,
00;51;55;02 - 00;51;58;02
because you sort of produce something,
but you are not
00;51;58;06 - 00;52;02;15
then passing something on to a coworker
to use
00;52;02;15 - 00;52;05;26
in any kind of verifiable
or useful way.
00;52;06;04 - 00;52;09;07
There's a good example where,
in talking with animators,
00;52;09;07 - 00;52;12;07
I was talking with somebody
from the Animation Guild
00;52;12;08 - 00;52;15;26
who reported that they were under pressure
to use
00;52;16;16 - 00;52;20;26
Midjourney to produce, like, an image
and then fix all the artifacts
00;52;20;26 - 00;52;24;13
that come out in the image.
And what you get out of Midjourney
00;52;24;13 - 00;52;28;12
or Stable Diffusion or whatever is
a PNG or a JPEG file,
00;52;29;00 - 00;52;31;28
and then you might have
to fix the artifacts that come out of it.
00;52;31;28 - 00;52;34;28
But really, what you want to work with
is something like an Illustrator file
00;52;35;04 - 00;52;37;18
that has many different layers
to it. Right?
00;52;37;18 - 00;52;39;03
And that's not what it produces.
00;52;39;03 - 00;52;41;01
And so if you're actually
going to fix the artifacts,
00;52;41;01 - 00;52;44;20
you actually have to reproduce the image
with all its layers.
00;52;45;05 - 00;52;48;05
And so that's not actually helpful
as a tool of collaboration.
00;52;48;14 - 00;52;52;05
It's actually breaking
the collaborative pipeline there.
00;52;52;21 - 00;52;53;10
And so
00;52;54;16 - 00;52;57;08
there may be solo folks
00;52;57;08 - 00;53;01;03
and, you know, individual contributors
that may find them very useful.
00;53;01;10 - 00;53;03;11
But once you get into a larger
organization
00;53;03;11 - 00;53;05;24
where you have to actually
verify information,
00;53;05;24 - 00;53;07;07
it falls apart pretty quickly.
00;53;07;07 - 00;53;08;27
I would just add briefly to that.
00;53;08;27 - 00;53;11;23
I would ask workers
who are using something
00;53;11;23 - 00;53;14;24
because they feel like it helps them
speed things up to think about
00;53;14;24 - 00;53;17;23
how long they actually get
to maintain the benefits of that
00;53;17;28 - 00;53;20;26
before they're just asked to do more
in the same amount of time.
00;53;20;26 - 00;53;22;14
I think that these kinds of
00;53;22;14 - 00;53;25;24
so-called efficiencies generally
are not going to accrue to the workers.
00;53;26;03 - 00;53;27;09
Right, right.
00;53;27;09 - 00;53;29;27
And there's
certainly a risk of that.
00;53;29;27 - 00;53;33;15
So, you know, just kind of tying
a bunch of this conversation together.
00;53;34;17 - 00;53;36;06
We're seeing all these trends
00;53;36;06 - 00;53;40;09
happening right now around the hype,
around adoption, around pushback.
00;53;40;17 - 00;53;43;17
When you look out, you know, over
00;53;43;28 - 00;53;46;19
the years to come,
00;53;46;19 - 00;53;50;09
to what degree are you kind of optimistic
or pessimistic about the trajectory
00;53;50;09 - 00;53;56;03
we're on right now and our ability to,
you know, bend the curve
00;53;56;03 - 00;54;00;04
in a way that's actually net positive for,
you know, us as people?
00;54;00;17 - 00;54;02;10
So I am an optimist at heart.
00;54;02;10 - 00;54;04;01
I also don't make predictions.
00;54;04;01 - 00;54;05;05
So I can tell you sort of
00;54;05;05 - 00;54;11;10
what gives me hope, though, is, you know,
watching people standing up and saying no
00;54;11;10 - 00;54;14;10
and watching people
adopting ridicule as praxis.
00;54;15;03 - 00;54;16;29
Also,
00;54;16;29 - 00;54;20;06
taking a page from Karen
Hao's amazing book Empire of AI, she
00;54;20;06 - 00;54;23;28
tells amazing
stories of people in Chile
00;54;23;28 - 00;54;27;23
and Uruguay who organized
to resist the imposition of data centers.
00;54;28;00 - 00;54;32;17
And she makes the point that,
you know, people who have had more power
00;54;32;18 - 00;54;36;18
taken from them nonetheless
maintain agency and nonetheless push back.
00;54;36;26 - 00;54;41;04
And I think that, it is really important
to resist narratives of inevitability.
00;54;41;24 - 00;54;44;28
Even the ones that say, oh, well,
you know, it's here to stay.
00;54;44;28 - 00;54;46;11
So we have to learn to live with it.
00;54;46;11 - 00;54;48;22
That is still a narrative of inevitability
00;54;48;22 - 00;54;51;11
and therefore
still a bid to steal our agency.
00;54;51;11 - 00;54;54;01
But we all have agency
and we can continue to claim it.
00;54;54;01 - 00;54;56;18
I'm less of an
optimist. I think
00;54;58;03 - 00;54;59;21
the pessimistic part of me is just
that there's more and more investment
00;54;59;21 - 00;55;03;01
that's being put into this.
00;55;03;01 - 00;55;03;28
I mean, we have one of,
I think, the largest tech investment rounds
00;55;03;28 - 00;55;07;26
we've seen with OpenAI,
an investment round led by SoftBank
00;55;08;04 - 00;55;12;26
that was to the tune of $40 billion.
00;55;13;10 - 00;55;15;26
And so more and more money is going in after that.
00;55;15;26 - 00;55;19;05
But I think, you know, an optimistic
reading of that
00;55;19;05 - 00;55;22;05
is that this is kind of the last gasp.
00;55;22;05 - 00;55;25;12
It is,
we are tossing this much money at it.
00;55;25;22 - 00;55;27;28
This is the big bet.
00;55;27;28 - 00;55;31;15
If you don't come out of this,
this is, you know,
00;55;31;15 - 00;55;33;02
this is going to be your last chance.
00;55;33;02 - 00;55;37;03
And it's not like
Masa is known for good investments.
00;55;38;16 - 00;55;40;18
And WeWork is indicative of that.
00;55;41;25 - 00;55;42;08
And so
00;55;42;08 - 00;55;46;01
it might be the case that we're seeing
a lot of that bubble
00;55;46;18 - 00;55;49;24
really reaching its peak in size.
00;55;50;13 - 00;55;53;21
The things that make me optimistic
are the kind of ways in which workers
00;55;53;29 - 00;55;55;19
in particular are pushing back.
00;55;55;19 - 00;55;59;14
We're seeing efforts from the Writers
Guild of America, of course,
00;55;59;14 - 00;56;03;05
that have strong protections
around generative AI in their workplace.
00;56;03;14 - 00;56;06;13
We've seen some of that work
for public workers in Pennsylvania
00;56;06;13 - 00;56;09;25
and from SEIU, the Writers Guild,
00;56;10;07 - 00;56;15;03
and the Authors Guild, or, sorry,
the Authors Guild and the Animation Guild
00;56;15;09 - 00;56;18;15
also being organizations
00;56;19;04 - 00;56;21;25
taking a strong
00;56;21;25 - 00;56;24;01
line against generative AI.
00;56;24;01 - 00;56;26;21
And that really does, give me hope.
00;56;26;21 - 00;56;29;21
And I think we're seeing
a really nice confluence here of people
00;56;29;24 - 00;56;32;24
trying to understand
what's behind all this.
00;56;32;29 - 00;56;36;03
Is this all hype
and what can we do about it?
00;56;36;03 - 00;56;39;21
And I think to that end,
I think our book is very helpful,
00;56;39;21 - 00;56;42;21
and I hope it's a tool for folks
seeking that out.
00;56;42;25 - 00;56;43;20
Amazing.
00;56;43;20 - 00;56;46;00
I appreciate the thorough answer,
and I wanted to say a big
00;56;46;00 - 00;56;48;12
thank you to each of you, Alex
and Emily for joining today.
00;56;48;12 - 00;56;50;08
I thought this was a really fascinating
conversation.
00;56;50;08 - 00;56;52;10
And I appreciate
your time. Thank you so much.
00;56;53;10 - 00;56;54;19
It was great to talk to you today, Geoff.

