NOVEMBER/DECEMBER 2018

Artificial Intelligence: The Importance of Responsible Digital Citizenship
By Jason Ohler, PhD

 

Credit: DepositPhotos/the_lightwriter

Big Idea: All of our AI apps and intelligent machines will need ethical programming. Whose ethics shall we use?

Imagine that you are driving down the highway in the family SUV, your two children and the dog in the back seat. Suddenly, a deer jumps out in front of your car. You can: 1) jump the curb and hope you don’t hurt anyone in the car, or the two people on the sidewalk who are out walking their dog; 2) hit the deer, knowing that doing so would probably injure or even kill you and your passengers, and would certainly kill the deer; or 3) cross into oncoming traffic and take a chance that you can outmaneuver the cars headed straight for you. A decision needs to be made in a split second.

And, oh yes, you aren’t driving. You are in an autonomous, self-driving SUV. Your car will need to decide. Even if your car has some kind of override that allows you to take control of the vehicle, it is all happening too fast. You have no choice but to let your car make the decision and hope for the best.

This is not a contrived situation. Tech ethicists are already trying to unravel quandaries like this as AI permeates daily living. And the future is just getting started.

The Trolley Problem, updated with AI

This dilemma is not unlike the one described in the Trolley Problem, a foundational thought experiment in most college ethics classes that has been debated by a number of moral philosophers. In Dr. Judith Jarvis Thomson’s version, a trolley with failed brakes is hurtling down a hill toward five workmen who are repairing the tracks. There is the very real possibility that the workmen will not see the trolley in time to move. However, you can throw a switch and send the trolley onto another track where it will kill only one person. Which option is more ethically sound? Or, in more modern terms, how would we program an AI machine — like a self-driving car — to respond?
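To make the philosophers’ question concrete, here is one naive way a programmer might encode the purely utilitarian answer that ethics classes usually start with: pick whichever action minimizes expected casualties. This is a minimal sketch for illustration only; the option names and counts are hypothetical, and no real vehicle is programmed this way.

```python
# A naive utilitarian rule for the Trolley Problem: choose the action
# that minimizes the casualty count. Illustrative only.

def choose_action(options: dict[str, int]) -> str:
    """Return the option whose casualty count is lowest."""
    return min(options, key=options.get)

# Thomson's trolley: stay the course and five workmen die,
# or throw the switch and one person dies.
trolley = {"stay_on_track": 5, "throw_switch": 1}
print(choose_action(trolley))  # -> "throw_switch"
```

A few lines of arithmetic settle Thomson’s version, but only because the rule quietly assumes that every life counts the same and that nothing matters except the body count. The SUV dilemma shows how quickly those assumptions break down.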

In the SUV and deer dilemma, your car is being tasked with making the same kind of ethical decision that a human would need to make. So, if you had a few seconds, how might you think this through? Is it simple math? Option 1 risks hurting five people and two dogs. Option 2 guarantees some kind of damage, probably to you and your passengers and certainly to the deer, and perhaps risks a pileup as traffic behind you swerves to avoid the accident. Option 3 is filled with unknowns, putting everyone in your car at risk, as well as anyone you might collide with in the oncoming lane. The number of people who might be hurt is potentially quite high but impossible to calculate.

As an aside, do you value the life of your own dog in the back seat more than the dog on the sidewalk that you don’t know? Or how about the unknown dog vs. the deer; are they of equal value to you? Your car may need to know how you would answer those questions in order to calculate its response.
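If the car really did reduce the dilemma to arithmetic, those value weights would have to come from somewhere, perhaps from you, the owner. The sketch below extends the casualty count with per-life weights and rough probabilities of harm. Every name, weight, and probability in it is invented for illustration, including the premise that an owner could set such weights at all.

```python
# Extending the naive utilitarian rule with owner-supplied value weights.
# All weights, probabilities, and counts are invented for illustration.

# How much does this hypothetical owner value each life at risk?
weights = {"person": 1.0, "own_dog": 0.8, "stranger_dog": 0.5, "deer": 0.2}

# Each option lists (kind of life, probability of harm, how many at risk).
options = {
    "jump_curb": [("person", 0.5, 5), ("own_dog", 0.5, 1), ("stranger_dog", 0.5, 1)],
    "hit_deer": [("person", 0.7, 3), ("own_dog", 0.7, 1), ("deer", 1.0, 1)],
    "oncoming_lane": [("person", 0.6, 3), ("own_dog", 0.6, 1), ("person", 0.4, 4)],
}

def expected_harm(outcomes) -> float:
    """Sum of (value weight x probability of harm x number at risk)."""
    return sum(weights[kind] * prob * count for kind, prob, count in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected harm {expected_harm(outcomes):.2f}")
print("chosen:", min(options, key=lambda name: expected_harm(options[name])))
```

Even this toy version makes the point: change the weights and the chosen option changes with them, and deciding on the weights is precisely the ethical judgment that someone, the programmer or the owner, has to make in advance.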

We are making moral machines

Whenever artificial intelligence crosses the line from functional decision-making into weighing the fate of human beings, it joins the rest of society in the gray area of moral responsibility. More importantly, in a world of deep machine learning, AI entities will develop as moral beings by learning from their experience — just as we do. Whatever the car’s AI programming decides to do will feed its evolving moral sensibilities. We had better make sure that our initial SUV programming reflects what’s best in us.

Cars are just the beginning. Our robots and self-aware homes, even the bots we use to answer our email, will also be faced with similar moral dilemmas. Most of our new tech will be AI-infused in some way. We will shop for the smartest AI we can afford. The smarter it becomes, the more we will depend on programmers to craft AI that extends us, in McLuhanistic terms, in ways that reflect who we are as moral human beings. Given that we might all handle the deer and SUV situation differently, what kind of programmer will we turn to?

In a recent edition of Education Update, I made the case for needing Character Education Version 2.0 to help our students, as well as ourselves, make the complex ethical decisions required in living a digital lifestyle. We need to hurry, because the need for Character Education Version 3.0 is already here: training our AI creations to think ethically in ways that reflect our better selves. When shopping for AI that supplements and in many ways co-authors our lives, we will consider not only how smart it is but also how it frames its ethical decisions. After all, soon our robots will become our fellow digital citizens. We will want to make sure they are the kind of neighbors we want living in our communities. #

Jason Ohler is a professor emeritus of educational technology and virtual learning, as well as a distinguished President’s Professor at the University of Alaska. When he is not playing with his many grandchildren, he is a professor in Fielding Graduate University’s Media Psychology PhD program. At 65, he continues to write, conduct research, oversee student PhD activities, and deliver keynotes internationally about the future of humans and technology trying to make peace with each other.

