
Ethics
Episode 2 | 26m 46s
AI is set to redefine education, healthcare, employment, and even faith and ethics. Join us as we discover the balance between technological progress and ethics regarding AI.
♪♪ >> Support for "AI: Unpacking the Black Box" comes from viewers like you and from Goodwill Keystone Area.
It's the last tea party for Crista with Miss Marshmallow and Sarah's first day of management training at Goodwill.
When you donate to Goodwill, you help provide skills, training, and career placement, and the things you loved start a new life, too.
>> Picture this -- you're lounging on a pristine beach, the warm sand cradling your body as you sip a refreshing mojito.
The sun's golden rays caress your skin, and a gentle breeze carries the salty scent of the ocean.
Now you watch your children nearby, their laughter mingling with the rhythmic sounds of waves lapping at the shore as they build elaborate sandcastles.
It's a perfect day in paradise, one of those moments you wish could last forever.
Suddenly, something strange happens.
The ocean begins to retreat, exposing stretches of sand never before seen.
Stranded fish flop helplessly.
And long-hidden treasures of the sea glint in the sunlight.
Now, it's an odd sight, but not immediately alarming.
You pull out your phone to capture these strange phenomena, thinking it'll make for an interesting post on social media later.
Now, your children are excited by the novelty and, obviously, want to explore the newly revealed seabed.
You hesitate, an uneasy feeling growing in the pit of your stomach.
Before you can decide, a low rumble catches your attention.
Now, at first, it's barely audible, but it quickly grows into a deafening roar that drowns out all of the other sounds.
Now, with your heart pounding, you look up, and you see a massive wave of water on the horizon, towering and ominous.
You scoop up your children, abandoning all of your belongings, and you run towards higher ground.
Now, as you race away from the shore, you hear the thunderous crash of the waves hitting the beach.
The roar of the water and the cracking of structures fill the air.
Now, you don't look back, focusing all of your energy on reaching safety with your children.
You spot a tall building ahead.
You make a beeline for it.
Now, other beachgoers are doing the same, their faces etched with the same fear.
As you reach the building, kind strangers help pull you and your children up inside.
You climb the stairs, heart hammering in your chest, the sound of destruction growing louder.
Finally reaching the roof, you turn to see the devastation below.
The once idyllic beach is now a churning mass of water and debris.
Buildings crumble, cars float like toys, and the landscape is unrecognizable.
You hold your children close, grateful for your quick action, but shaken by the near miss.
As the water begins to recede, leaving chaos in its wake, you realize the gravity of what just happened.
The signs were all there -- the retreating water, the unusual quiet before the roar, but you almost missed them.
You silently vow to always stay vigilant and educated about natural disasters, knowing that this knowledge saved your family today.
This harrowing experience has forever changed your perspective, instilling a deep respect for the ocean's might and a renewed appreciation for the preciousness of each moment with your loved ones.
Now, this harrowing scenario, based on the tragic 2004 Indian Ocean tsunami, mirrors our current situation with artificial intelligence.
The rapid advancements in AI technology are like the retreating waters before a tsunami.
We're witnessing unprecedented developments -- language models that can converse like humans, AI-generated art that's indistinguishable from human creations, and autonomous systems making complex decisions in just fractions of a second.
Many of us are still metaphorically on this beach, marveling at these novelties.
We're sharing our amazement on social media, playing with AI chatbots, or trying to quickly capitalize on the opportunities presented by this new technology.
But just like the tsunami, a wave of transformative change is building.
The full impact of AI on our society, economy, and our very way of life is approaching faster than we might realize.
The signs are all around us.
AI is already reshaping industries, from healthcare to finance, from education to entertainment.
It's influencing our decisions, curating our information, and even creating art and music.
But like the receding waters before a tsunami, these changes might seem benign or even beneficial at first glance.
However, just as the retreating water precedes the devastating waves, these AI advancements come with potential danger.
The erosion of privacy, the spread of misinformation, the displacement of jobs, the amplification of biases -- these are just a few of the ethical challenges posed by the AI revolution.
The question we face now is, will we recognize the signs in time?
Will we prepare and position ourselves to ride this wave of change?
Or will we be caught unprepared and swept away by its force?
Will we be like those who recognized the danger of the retreating waters and sought higher ground?
Or will we be like those who stayed on the beach, marveling at the unusual sight until it was too late?
Now, in this episode, we'll dive deep into the ethical challenges posed by AI.
We'll examine the intersection of AI and gene editing and the profound implications for the future of humanity.
We'll also look at the broader societal impacts, how AI is reshaping our notions of privacy, altering the job market, and potentially exacerbating existing inequalities.
We'll examine how AI might influence our democratic processes, our legal systems, and even our understanding of creativity and art.
But perhaps the most profound question we must grapple with is this -- what happens if artificial intelligence becomes sentient?
Imagine a world where AI systems not only match or surpass human intelligence, but develop consciousness and self-awareness.
How would we recognize such a sentience?
How would the emergence of a sentient AI impact our relationship and social structures?
Could we form meaningful friendships or even romantic relationships with artificial beings?
Would they be seen as equals?
Would they be treated as second-class citizens, leading to a new form of civil rights movement?
As AI continues to advance at a breathtaking pace, these are questions we must confront.
The decisions we make now could set the stage for the future of human-AI relations for generations to come.
But this isn't just about identifying problems.
Like those who survived the tsunami by recognizing the signs and taking action, we'll also explore how we can prepare for this incoming wave of AI.
We'll discuss the ongoing efforts to develop ethical guidelines for AI development and deployment.
We'll look at initiatives aimed at ensuring AI benefits all of humanity, not just a privileged few.
And we'll grapple with the monumental task of preparing our societies, economies, and belief systems as we embark on this exploration.
Remember, the AI tsunami is coming.
The waters are already receding.
The question is, will we be caught unaware on the beach, or will we be prepared to navigate the transformative wave headed our way?
Will we be ready for a world where AI doesn't just change how we live, but potentially how long we live?
The one thing we know for sure -- there's no stopping a tsunami.
So join us as we dive into these crucial questions and more as we explore the ethics of AI and continue our journey to unpack the black box.
♪♪ >> The ethics of AI is becoming important because now, finally, a lot of AI systems are becoming more powerful.
We saw this with ChatGPT -- suddenly, there's a lot of interest in AI, whereas, previously, AI would just be in narrow applications, where it could only do small, specific tasks.
Now we have AI systems that are more general purpose and can do a variety of things.
As well, they're on a very steep trajectory in accelerating capabilities.
So they continue to surprise us.
And that's why discussions are now becoming much more robust and including many stakeholders about what we are going to do about this technology and how we prepare.
So, I think this is a problem of sort of ideological bias and political bias.
It's up to there being, you know, sufficient pressure in the marketplace and consumers holding AI companies accountable if they are providing biased results -- if they're saying something positive about one political candidate but saying, "I'll refrain from weighing in on politics," for another political candidate.
That's an example I saw this morning.
There's not much that can be done regulation-wise on this, because that would start to violate free speech.
So I think there could be other things that would improve the situation.
For instance, if AIs were trained not just to imitate the text that people are uploading online, but instead to try to predict the future -- if they were just trained to be accurate and say what's going to happen and what's true -- that could possibly make them a lot more truthful overall.
♪♪ ♪♪ >> If it's something around an election, we could train our models -- instead of on banking policies, now we're using election information from trusted sources like the government on, you know, when you can vote, or, you know, "Here's how the voting process works," or trusted information that's factual around what's happening in the news.
Right?
And so we could train our models to then go and detect if a chatbot is veering off from that or is inaccurate.
And so we can go through those feeds and actually say, "Oh, hold on a second.
This might actually not be accurate."
And that's really important because a lot of consumers are now getting their information from chatbots from various feeds or sources.
They're also getting it from news sources or other sources that might also be generated from underlying AIs or chatbots themselves, right?
And so even if it might appear to be coming from a human, it's very important to fact-check that, as well.
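Here's a minimal sketch of what that kind of consistency check could look like.
The word-overlap score below is a toy stand-in for a trained model, and the "trusted" statements are invented examples rather than real government feeds.
```python
# A minimal sketch of the consistency check described above. The word-overlap
# score is a toy stand-in for a trained model, and the "trusted" statements
# are invented examples, not real government feeds.

def overlap_score(claim: str, reference: str) -> float:
    """Fraction of the claim's words that also appear in the reference."""
    claim_words = set(claim.lower().split())
    ref_words = set(reference.lower().split())
    return len(claim_words & ref_words) / len(claim_words) if claim_words else 0.0

def looks_consistent(claim: str, trusted: list[str], threshold: float = 0.5) -> bool:
    """True if the claim overlaps strongly with at least one trusted statement."""
    return any(overlap_score(claim, ref) >= threshold for ref in trusted)

# Hypothetical trusted statements about the voting process.
TRUSTED = [
    "polls are open from 7 a.m. to 8 p.m. on election day",
    "you can register to vote online or by mail before the deadline",
]

chatbot_claim = "voting is done entirely by text message"
if not looks_consistent(chatbot_claim, TRUSTED):
    print("Hold on a second -- this might not be accurate:", chatbot_claim)
```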
♪♪ >> A few years ago, when you would Google and the algorithms looked at depictions of black women, a lot of the images depicted prostitution or, you know, something less than wholesome, whereas, with white women, yes, we saw more wholesome images or more angelic images.
And so we knew that there was a problem with algorithms and how they were outputting different pieces of information about certain populations.
Looking at all of this extrapolated information still requires some human intervention.
Even though it might be more convenient for a computer to extrapolate all this information from around the internet and quickly produce an output, is it really accurate?
And we also have to remember that this is only the information that's out there.
There's information that is not out there -- intellectual property, material behind a paywall -- that isn't automatically accessible to the technology that can scrape information off the web.
How does that skew the results, and how disparate is that information when the system is only getting information that it has easy access to?
>> How do we make sure that a model doesn't say anything that's racist, sexist, or hurt someone's feelings on purpose?
I mean, that -- ideally, our bots don't do that to you, but if they're trained on large language models, which is this model that's trained on the internet's data, unfortunately, data has shown that it can become racist, sexist, and hurt people on purpose [laughs] with no governing, right?
So now we have to -- we know that that's going to be the case.
So we need to create systems that protect people that use these systems.
And so content filtering looks at patterns in human language that are hurtful.
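As a minimal illustration, assuming a simple regex deny-list, a filter like the sketch below could sit between the model and the user.
The patterns and fallback message are placeholders, not any real policy; production filters are trained classifiers covering many harm categories.
```python
# A toy pattern-based content filter. Production systems use trained
# classifiers over many harm categories; these regexes are placeholders
# that only illustrate the shape of the guardrail.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\byou (are|look) (stupid|worthless)\b", re.IGNORECASE),
    re.compile(r"\bnobody (likes|wants) you\b", re.IGNORECASE),
]

def filter_reply(reply: str) -> str:
    """Swap a hurtful model reply for a safe fallback before the user sees it."""
    if any(pattern.search(reply) for pattern in BLOCKED_PATTERNS):
        return "[filtered] Let's keep this conversation respectful."
    return reply

print(filter_reply("You are stupid for asking that."))     # gets filtered
print(filter_reply("Here's how to reset your password."))  # passes through
```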
So realizing that there is a way to build this technology to do whatever you want -- yay, good news -- but that there's a disciplined approach to do it safely and responsibly.
I think that's my biggest fear, is that we're going to build a bunch of stuff that isn't safe.
The business user has to be involved to educate the model on what's right and what's wrong, what hurting someone looks like.
The number-one question I ask people in the red teaming that I train is, how will this hurt somebody?
And sometimes, I will hear people say, "I don't even see how it could hurt someone."
And that means you don't have the right people in the room.
Will some jobs go away entirely?
I actually don't think AI is a job killer.
I think it's a task killer.
I think, when it does take away jobs, almost always we are cutting into the muscle of an organization in an attempt to get tactical short-term value -- we're actually hurting our business long-term.
♪♪ >> In 2019, our Global Ethics and Compliance Office commissioned an external holistic evaluation of Hewlett Packard Enterprise's human-rights positions.
And the answer came back, "You guys are leaders."
There -- great.
But the next three things that Hewlett Packard Enterprise has to worry about with human rights are AI, AI, and AI -- AI in your products, AI in your processes, whether they face customers, team members, or partners, and then AI when people want to partner with you and build AI outcomes on top of your goods and services.
And so the ethics compliance team says, "Okay, other than how to spell it, we don't know anything about AI."
And so that's when they called us at Hewlett Packard Labs.
They say, "How do we understand how we should be doing AI?"
The purpose of Hewlett Packard Enterprise is to advance the way people live and work.
So I started with a blank sheet.
I wrote that purpose at the top of the sheet, and I said, "Okay, everything has to derive from this."
We really debated, and we held each other accountable.
We said, "Look at that purpose at the top of the sheet.
Tell me, why is this one necessary?
Why do we put this one here?
You know, do we put human rights in front of 'we obey the law'?"
And, you know, all the -- all these things, and it's interesting -- we ended up with five principles.
AI should be privacy preserving and secure.
AI should be human-focused.
AI should be equitable.
AI should be responsible.
AI should be robust.
And that was the first step.
It's one thing to claim you have principles -- it's another thing to have them evidenced in how you run your business.
And that's what we've been doing from 2019 to now -- helping teams across our company, across the world, to understand our principles, and then, when they have an opportunity -- when they want to incorporate AI into our products, when they want to develop a new process that utilizes AI to deliver great outcomes -- we are evaluating these principles.
And by evaluating, really, what we're trying to do is teach everyone in those functions, in those businesses, whether it's in the factory or out in the field, to understand our principles, understand how to evaluate the opportunity, apply these principles, and tell us, you know, how confident are you?
Have you designed the systems so that you can uphold these principles?
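One way a team could turn principles like these into evidence is an explicit review checklist that every AI proposal answers against.
The sketch below is hypothetical -- the questions are invented for illustration and are not HPE's actual review process.
```python
# A hypothetical sketch of principles turned into a review checklist.
# The questions are invented for illustration; they are not HPE's
# actual evaluation process.
PRINCIPLES = {
    "privacy-preserving and secure": "Is personal data minimized and protected?",
    "human-focused": "Does a human stay in control of key decisions?",
    "equitable": "Have outcomes been tested for bias across groups?",
    "responsible": "Is there a named owner accountable for harms?",
    "robust": "Does the system fail safely on unexpected input?",
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the principles a proposal cannot yet evidence."""
    return [name for name in PRINCIPLES if not answers.get(name, False)]

proposal = {
    "privacy-preserving and secure": True,
    "human-focused": True,
    "equitable": False,   # bias testing not done yet
    "responsible": True,
    "robust": False,      # no failure-mode analysis yet
}
print("Needs work on:", review(proposal))
# Needs work on: ['equitable', 'robust']
```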
♪♪ >> When the average person, like myself, thinks about DNA, they think about a ladder that's twisted.
Well, imagine each one of those rungs of the ladder is an individual unit called a "nucleotide."
But what if you could, like, change those to a different type of color?
What if you could cut out pieces of that ladder?
What if you could cut out those pieces and put them somewhere else?
There are also tools where you can say, "I want to change half the ladder, and I don't want to cut it out -- I just want to change half the ladder."
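To make the ladder metaphor concrete, the toy sketch below treats a strand as a string of nucleotide letters and performs the three edits just described: substitute a rung, cut a piece out, and move a piece elsewhere.
The sequence and positions are invented, and real gene-editing tools operate on living cells, not strings.
```python
# A toy model of the ladder metaphor: a DNA strand as a string of
# nucleotides (A, C, G, T). The sequence and edit positions are invented;
# real gene-editing tools work on living cells, not strings.
sequence = "ACGTTGCAACGT"

# "Change a rung to a different color": substitute one nucleotide.
substituted = sequence[:4] + "A" + sequence[5:]   # T -> A at position 4

# "Cut out pieces of that ladder": delete a segment.
cut_out = sequence[:4] + sequence[8:]             # remove positions 4-7

# "Cut it out and put it somewhere else": relocate a segment.
piece = sequence[4:8]
moved = sequence[:4] + sequence[8:] + piece       # move the piece to the end

print(substituted)  # ACGTAGCAACGT
print(cut_out)      # ACGTACGT
print(moved)        # ACGTACGTTGCA
```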
The application of AI and software and gene editing could be so powerful.
There are lots of questions that come with it, both from an ethics perspective and a capabilities perspective.
And then we thought that we could go use it to bring back extinct species, while also building technologies that could help conservation and help give new tools in the fight against loss of biodiversity.
Humans, on an individual basis, are smart.
We have to give them the data, and we have to be transparent.
And so people are going to use AI.
Bad people are gonna use AI.
Bad people do use AI.
Bad people are going to use genetic engineering, right?
China was leveraging facial-recognition technology to segregate a population.
Ethically, that's wrong.
Maybe 90% of the people in the world would assume that that is wrong, right?
And there was an overcorrection, like in San Francisco, where there's a moratorium -- technology companies that built facial-recognition technologies were told, "This is evil, so we're not going to fund it."
But that's like the ostrich putting its, you know, proverbial head in the sand, right?
And it's like, just because other people are doing bad things with technology, that doesn't mean that we should turn our backs on it.
If we build technologies that can help save species, that the conservation communities don't have to spend money and capability trying to go build, well, then, that's also a win for the planet.
>> So, I work for the Markkula Center for Applied Ethics at Santa Clara University.
We're an organization which thinks a lot about not just the ethics of technology, but also ethics in government, bioethics, ethics in journalism -- so a very broad-spectrum approach to ethics.
But that also lets us think about AI from all those different perspectives.
AI is being used in medicine.
It's being used in journalism.
It's being used in business and technology, of course.
So it really gives us a broad perspective on the subject.
One of the things I've been thinking about recently is this issue of, how do we actually accelerate quality-level discussions about these issues?
One thing that happened back when the Human Genome Project was underway is that a certain percentage of every grant had a little, you know, bracket set off to the side.
I can't remember if it was 3% or 5% -- something like that -- where they had to study the ethical, legal, and social issues related to what they were doing in their grant.
They had to support that issue -- or at least the investigation of that issue, the conversation.
And what that did is it actually gave people more confidence about what was happening with the Human Genome Project.
First of all, they thought about their -- what they were doing with the technology, with the science from a slightly different perspective.
They said, "Oh, let's think about the social impact.
Let's think about the ethics.
And what does this mean for law?"
And that -- that gave them a broader perspective.
In the past, maybe you could talk about something once every six months, but now we're going to have to talk about it every month, or maybe every week, or maybe every day.
We really have to figure out how to pick up the pace.
The biggest question of them all, right -- it's like, how do we use AI for good?
Because it has obviously good uses.
And how do we prevent it from being used for bad?
How do we -- How do we take these bad uses and -- and either ban them or somehow contain them so that these things are not destroying society?
And the answer is that we have to recognize the situation that's around us.
We have to get all the facts.
We have to put that together.
We have to have a conversation with each other so that we really know what's going on and what we need to decide to do, and then we need to actually do those things.
So, backing up to the bad uses, AI should not start replacing people.
There are all these apps out there right now which are like boyfriend and girlfriend apps.
It's like, you know, you've gotten sick of dealing with human beings, so you're going to have an AI girlfriend or boyfriend.
We should probably say to ourselves, "You know what?
That's something that we should not do.
This is going to destroy society.
We need to stop these things right now."
If those things take hold, once again, your society is destroyed within a few decades.
And so we need to be thinking longer-term good uses.
There are so many good uses.
Medicine is the prime example of this.
There are so many options for using AI to sift through medical data to find, "Oh, all these people were -- had these same conditions."
And we can -- we can look back in time and see, "Oh, you know what?
Here's an early warning sign that they had cancer, you know, maybe a decade before it showed up."
And if you can get stuff like that, you can say, "We can prevent the cancer from ever happening."
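A minimal sketch of that kind of retrospective scan might look like the following, with invented records, an invented marker threshold, and a fixed lead time standing in for what would really be large datasets and statistical models.
```python
# A toy retrospective scan: look for a lab marker that was already abnormal
# years before diagnosis. Records, threshold, and lead time are all invented;
# real studies use large datasets and statistical models.
from datetime import date

records = [
    {"patient": 1, "diagnosed": date(2020, 6, 1),
     "labs": [(date(2010, 5, 1), 9.8), (date(2012, 3, 1), 14.2)]},
    {"patient": 2, "diagnosed": date(2021, 1, 15),
     "labs": [(date(2018, 7, 1), 15.0)]},
]

THRESHOLD = 13.0   # hypothetical "abnormal" marker level
LEAD_YEARS = 5     # only count readings well before diagnosis

for rec in records:
    for when, value in rec["labs"]:
        years_before = (rec["diagnosed"] - when).days / 365.25
        if value > THRESHOLD and years_before >= LEAD_YEARS:
            print(f"Patient {rec['patient']}: marker {value} was high "
                  f"{years_before:.0f} years before diagnosis")
```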
>> As we conclude our exploration of AI ethics, let's return to the beach where our story began.
The tsunami's passed, leaving devastation in its wake.
But unlike a natural disaster, the AI wave doesn't recede.
It continues to grow, to evolve, to become more powerful with each passing moment.
Imagine standing in the aftermath, surveying the changed landscape.
Some structures washed away entirely, while others stand transformed.
This is our world in the wake of AI's advancement: industries reshaped, jobs redefined, the very fabric of society rewoven.
But here's the crucial difference -- we're not helpless spectators.
We're active participants in this transformation.
The AI wave isn't a force of nature we can simply brace against or attempt to hold back -- it's a technological revolution of our own making, one that will only continue to gain momentum.
With each passing day, AI grows stronger, becomes more capable, and reaches into new areas of our lives.
It becomes cheaper, more accessible, and more integrated into the foundations of our society.
Trying to stop this progress would be like trying to hold back the ocean.
Remember, AI isn't inherently good or evil -- it's a tool, a reflection of our own ingenuity and our own flaws.
As it grows more powerful, it amplifies both our potential for progress and our potential for harm.
So, as you leave this episode, I urge you to stay vigilant.
Keep your eyes on the horizon.
Educate yourself about AI developments.
Engage in discussions about AI ethics.
Support initiatives that promote responsible AI development.
Because unlike the tsunami that strikes once and recedes, the AI wave will keep coming, keep growing, keep changing our world.
Now, we can't stop the wave, but we can learn to surf it.
We can harness its power to solve global challenges, to push the boundaries of human knowledge, to create a more equitable and sustainable world.
But to do so, we must act now, we must be prepared, we must be ethical, and we must be united.
The future of AI is the future of humanity, and that future is in our hands.
Thank you for joining us on this critical exploration.
Until next time, remember -- as we continue to unpack the black box of AI, we're not just decoding algorithms -- we're shaping the very essence of tomorrow.
>> Support for "AI: Unpacking the Black Box" comes from viewers like you and from Goodwill Keystone Area.
It's the last tea party for Crista with Miss Marshmallow and Sarah's first day of management training at Goodwill.
When you donate to Goodwill, you help provide skills, training, and career placement, and the things you loved start a new life, too.
♪♪
AI: Unpacking the Black Box is a local public television program presented by WITF