From YouTube: Artificial Intelligence: The Future Is Here
Description
As the science and technology of artificial intelligence continues to develop, more products and services are coming onto the market. For example, companies are developing AI to help consumers run their homes and allow the elderly to stay in their homes longer. Examine AI’s benefits and challenges, along with potential roles for policymakers.
A: I am Heather Morton, and I work on financial services for NCSL. I staff the Communications, Financial Services and Interstate Commerce Committee, and I will be introducing our speakers for this afternoon's session. As a reminder to everyone, any recording of this session is prohibited unless prior approval has been granted by the NCSL Communications Division. And we definitely want this session to be interactive, an interactive discussion between our distinguished panelists and our audience.
A: So we encourage you to ask questions. When you have a question, we do ask that you step to the microphone, give your name and state, and then state your question, so that everyone in the room, as well as online, can hear your questions. To start off, I'll just really quickly introduce all three speakers in a row, and then I will turn it over to Rich, who will start giving us an overview. Rich Ehisen is the managing editor and publisher of the State Net Capitol Journal, an online news publication that covers state-level public policy trends across the 50 states. The Journal is the editorial arm of State Net, a LexisNexis company, which has offices in Washington, D.C. and state capitals around the country, and Rich holds a Master of Arts degree in public and political communication from California State University, Sacramento. Michael Hayes, our second gentleman, is the senior manager of government affairs at the Consumer Technology Association.
A: He leads CTA's federal and state efforts on emerging technology issues, including artificial intelligence and the sharing economy. He is also responsible for leading federal policy initiatives related to patent litigation reform and high-skilled immigration. And then our third speaker is Osonde Osoba, and he is an engineer at the RAND Corporation and a professor at the Pardee RAND Graduate School. His recent research focus has been on accountability in artificial intelligence and data privacy issues.
A: Prior to joining RAND, he was a researcher at the University of Southern California, or USC, where he got his PhD in electrical engineering. If you want fuller biographical information about the speakers, it is available online or through the NCSL app. So before I turn it over to Rich, I just wanted to quickly pose a question to those of you in the audience, just by a quick show of hands, to determine who is looking forward to artificial intelligence as a new and exciting area.
B: Alright, well, that show of hands pretty much matches, I think, what most people think about artificial intelligence. Because I think if there's one thing that's true about it, it's that there's a seemingly endless supply of terms that we use to describe artificial intelligence, or the process of machines performing tasks that we normally associate with human intelligence. We used to just call that automation, which these days is kind of quaint and old-school.
B: Nowadays we use terms like machine learning, or we use deep learning. But of course, the one we're all the most familiar with is artificial intelligence. And while those terms are used interchangeably, they're actually all somewhat different things, which we're going to get into a little bit with our panel of experts. But suffice it to say that this technology is already all around us: Google searches, for one; autonomous vehicles.
B: There's a very good chance you're talking or texting with a chatbot. Even all of the smartphones that a lot of you are probably looking at right now, instead of looking at me: artificial intelligence, right? The list goes on and on and on, and you get it. So, that said, there may not be anything out there today that we collectively struggle more to understand than artificial intelligence. And because of that, there's tremendous optimism (we saw that) about what the positive potential of artificial intelligence is, and also a lot of concern and trepidation.
B: Much of that concern is over what the negative potential of this might be, particularly on the US workforce. I know about 12% of the US workforce is in transportation; I mentioned autonomous vehicles. So if you are somebody who drives a vehicle for a living, I'm sure that your level of trepidation is pretty high compared to somebody who doesn't do that. And the rise of this technology goes on, this march toward a semblance of AI ubiquity.
B: Some legislation aims maybe to slow this, what we think is probably an inevitable transformation. It's even spawned, quite honestly, a whole new field of law, artificial intelligence law, which deals, you know, primarily with rights and liabilities around the use of AI. There's also a lot of legislation that deals with trying to foster or encourage investment into AI technologies, or sometimes to override state laws that would have an impact on that, one way or the other.
B: There are also things like data security. And certainly here in California, we have been talking at great length about the role that artificial intelligence is playing in, as an example, the prison system: how it's used in terms of sentencing and probation and bail issues. So the social justice element is definitely a part of the AI discussion here. So with all of that in mind, I think we're very fortunate today to be joined by two people who really understand this technology, both its pros and its cons.
C: We'll be brief, so that we can have a good back-and-forth discussion, both with you all in the audience and with the panelists up here. I'm Michael Hayes with the Consumer Technology Association. We're the trade association that represents the consumer technology industry; that is a 351-billion-dollar industry here in the US. We represent over 2,000 companies, and that supports about 15 million US jobs. Some of that is in artificial intelligence.
C: We represent many of the leading companies in this space, and because of all of the attention in industry, in society and in government around artificial intelligence, we launched our artificial intelligence working group at the beginning of this year to start tackling some of the big questions around AI. How can we put forth prudent policy on AI, make sure we can take advantage of the benefits that it's poised to bring, but also answer some real questions that people have about how it will impact their lives and, potentially, as Rich mentioned, their jobs?
C: You know, it's no secret, the title of this panel: AI is here. I think, unfortunately, we often think about it as something that is hypothetical, that it's out there in the future, and that one day, around the corner, it will suddenly show up and revolutionize our lives. Hey, that is happening. We are living it right now. As Rich mentioned: health care. Patient data is being analyzed in real time and being fed to doctors to be able to make more informed, better and oftentimes lower-cost decisions that are then improving health outcomes.
C: Cybersecurity: very complex attacks on systems are being mitigated by deep learning and artificial intelligence systems that can detect patterns that would indicate that there's a potential threat and, in some cases, mitigate that threat immediately, or, in other cases, inform humans as to how to best mitigate that threat. We experience this with voice assistants, with driver-assist technologies. You experience it every day when you get warnings from your bank of potential fraud on credit cards: that's often an artificial intelligence system making that alert.
C: As AI becomes more and more ubiquitous in our daily lives, and able to take on more complex tasks, it is going to raise more complex questions, and that's okay. We need to confront those questions. We see them falling into three main buckets: jobs, bias and security. And hopefully today, on this panel, we'll be able to get into how, as industry and government, we can help address those questions and, in particular, what states' roles are in helping to address some of those questions.
C: But I don't want this all to be about addressing questions and alleviating concerns. AI represents an enormous potential boon for society and for our economy, right? We need to talk about how states can position themselves to be the ones to take advantage of this. We hear about how the US government is positioning itself. We hear about how China and how France are putting themselves out there as AI leaders, whether in the actual economy or in thought. It's not the exclusive domain of superpowers, right?
C: Your state can compete in that. You have universities, many of you world-class universities, that house many of the leaders in the foundational research that is driving artificial intelligence and other computer systems. You have community colleges that are partnering with industry right now to create dynamic curriculums that are focused on making sure that people have a pipeline to a good-paying job in the technology space. These are things that the federal government can't facilitate, and you can. So I want the takeaway from this to be:
C: There are some concerns about AI, and we collectively can help address them. And there are some enormous benefits from AI: billions of dollars in economic benefit, many societal benefits for health, wellness and quality of life. Your state can be at the forefront of bringing those to your communities, and we can get into how exactly we can help make that happen.
D: Hello. Okay, so: dual use of artificial intelligence, or of any technology. Artificial intelligence promises to revolutionize the way we live our day-to-day lives. It already is part of our lives, as we already mentioned. But I am an engineer at a policy think tank. I work mostly on institutional aspects of the use of technology; that's one of the portfolios we have, and we have a whole bunch of other portfolios.
D: I personally tend to focus on technology in institutions. But if you think about the beauty of artificial intelligence for a second, think about how you got here today: the maps you used when you said, okay, I was trying to get to the LACC, the convention center. You told a piece of technology, "this is what I want to do," and it gave you a path from where you were to where you are now. In that sense, that's the same thing with most uses of artificial intelligence.
D: It's basically a way to improve our impact on the world, a way to achieve goals in the world. And so we often have this question of: what exactly is artificial intelligence? I take a more expansive definition. I don't really care whether it's machine learning, deep learning, expert systems or anything like that. It's basically any piece of technology that enables us to achieve computational goals in the world.
D: This is McCarthy's definition, and for the most part it includes things like searching for solutions, recognizing patterns, learning new things in the world, or just planning how to get from A to B. And so, taking that point of view, I start thinking: okay, where is artificial intelligence being used? We start thinking about things like filter bubbles; that's how AI currently affects personal lives every day. We're thinking about: okay, I want to read news on the internet.
D: Well, that's going to be mediated and directed by artificial intelligence systems. But that's the personal, commercial part of the core of the story. I focus mostly on things like institutions: institutions like the criminal justice system, institutions like insurance, institutions like DHS. How do they use artificial intelligence? What are the problems that may rise up in the use of artificial intelligence? And so, here is one of the things we found there.
D: The types of questions that come up there tend to be of three types, usually the same three. It's usually either questions of equity (I use the word equity as a positive spin on this idea of bias, because most of the time when we talk about machine learning, we talk about bias in machine learning), or it comes up as the privacy concern that is behind many of the artificial intelligence deployments we have today. And the third thing, which we'll talk about quite a bit today, is the future of work.
D: How does automation by artificial intelligence lead to changes in labor markets? But first of all, the equity point. So recently the Harvard Business Review had this piece by a researcher saying, well, artificial intelligence: if you want better systems, if you want fairer societies, you just use more algorithms. And the point he was trying to make there was that algorithms are, at least usually, better than humans at making decisions. And that's almost always true, at least in the places where we already deploy them.
D: But that kind of, sort of, doesn't account for the different ways in which algorithms are being used. One of the things about algorithms is that we use them (I use "algorithms" and "AI" interchangeably), we use them without knowing; we're not really clear about all the ways these algorithms are causing errors. We have this thing called automation bias: when errors happen, because they are caused by artificial intelligence, and it's behind this black box, we don't interrogate them.
D: We have to be able to talk about what happens when, inevitably, they make errors. Because it's essentially a fundamental law of computation: if you're making decisions in finite time with finite data, you're going to be making errors, maybe at some smaller rate, but you're inevitably going to make errors. So how do we control for that? So that's part of the question. But there's also this issue of who bears the burden of artificial intelligence errors. When AI makes mistakes, does it make mistakes equally across society? And increasingly we find that that's not true.
D: The recent examples would be facial recognition technologies. We are seeing that, because of the biased training data on which they are trained, you have AI systems that are very good at identifying and recognizing certain types of faces, but have almost three to ten times the error rate on a different part of society. Imagine what happens when these types of technologies are deployed; imagine if it becomes a routine part of our surveillance apparatus. Who bears the burden of every mistake?
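One way to make the disparity he describes concrete is to measure a model's error rate separately for each group it is applied to. This is an illustrative sketch, not any panelist's code; the predictions and group labels below are invented to show how a 3x gap between groups would surface in such an audit.

```python
# Illustrative sketch: per-group misclassification rates for a deployed model.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    errors, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: the model is far less accurate for group "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(error_rates_by_group(y_true, y_pred, groups))  # → {'a': 0.25, 'b': 0.75}
```

An overall error rate of 0.5 would hide the fact that one group carries three times the mistakes of the other, which is exactly the equity point being made.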
D: The second part is the issue of privacy. The current generation of artificial intelligence is what we call machine learning. Basically, statistical machine learning is your basic statistics amped up on steroids. It relies on data. Who provides that data?
D: It's every single one of you, in your use of artificial intelligence, your use of Google, Facebook, Amazon, Netflix. Every single device, every single commercial entity you use is probably collecting data on you, and they're using that data to learn, to become more intelligent in making decisions. But then, what happens when those treasure troves of data are broken into? What happens when somebody breaks in and takes that data? Who bears the brunt, the burden, of those breaches? At the moment, for the most part, it's the user.
D: At some point, we're probably going to have to revisit that issue. The EU's GDPR, the General Data Protection Regulation, is trying to address this. Hopefully we can get some version of an answer in the States. And the last point is this issue of the future of work: how does artificial intelligence change the labor market? I guess I'm of two minds on this one. The implication of technology for the labor market is not a new concern; it's a really old concern.
D: Every time we have a new wave of technology, we worry about whether we're going to lose jobs, and historically that has not been true. But then you can argue that artificial intelligence is sufficiently different, that the disruption due to AI is going to be so disruptive that it is going to destroy society. I'm not going to take that extreme of a view; I'm going to cut the middle somewhere and say there's going to be job churn. What happens when automation renders a skill less valuable?
D: What happens to people who have spent their entire lives developing that skill? I'm thinking of people like radiologists: what happens to those types of people? But then there's the flip side of that: what happens when AI becomes so ubiquitous that it allows new types of work to appear? Think about the gig economy.
B: Great. Well, thank you both for those statements. A lot to chew on, good and bad. A lot to chew on, right? But I want to, of course, go back to the workforce thing. Let's start there, because I think that's the greatest area of concern, particularly for lawmakers, because it really gets into retraining and workforce development and education, and that touches a lot of areas that are very important to most lawmakers.
C: Look, we just hired our first vice president of US jobs to focus largely on these issues, right? We represent an industry that creates very transformative technologies, not just artificial intelligence. That means an extraordinary number of new opportunities for people in the United States and globally. It also means an extraordinary amount of change for people in the workforce globally. We as an industry, as the ones creating this technology, believe that we have an obligation to be part of the solution to that, and we think that falls into two categories. Right: you need to look future-focused.
C: How do we make sure that future generations are able to take advantage of the new opportunities that technology creates? How are we making sure that not just higher ed, like I mentioned earlier, but K through 12 has exposure to computer coding, to robotics? Partially to learn the technical skills, but partially so that we reach kids who are in areas traditionally thought of as disadvantaged, who might just not think that a career in technology is for them, right?
C: They might not be exposed to this, and might not think that that's something that's attainable. And so we as an industry need to work with you all to make sure local schools, local education institutions, and others that work within the education space are exposing people at a young age to the fact that there are going to be opportunities created by technology. That's beneficial for your communities, and it's beneficial for our companies, who have a labor force in many parts of the country, not just your traditional tech hubs as we think of them.
C: Then the second piece of that. That first piece is challenging, right, and I don't mean to just flippantly say we can do this, but it is, I think, somewhat less complex than what we do about the near-term disruption. That, I think, is what we often focus on in newspaper articles and other press: that we're going to have people who have developed a certain skill, who will then need to re-skill to some extent in order to have future opportunities. We have an obligation to help those people.
C: It's going to take a lot of different partnerships with you all to meet whatever needs are local, right? Every state has different industries. AI is not going to show up tomorrow and disrupt them all; it's going to happen on a sliding scale. There will be some people who will need assistance in a few years. There are going to be some people who might need assistance in 10 years. And it's going to be important to have one-off efforts, or replicable efforts if there are more countrywide scenarios, that can help land those people in a relevant new job.
C: But that's going to fall on a lot of you all. It's not the federal government that's going to step in and have a nationwide jobs program. It's going to be our industry, working with local and state legislators, to solve local problems and make sure that there is a fair solution and a fair path forward for these people whose jobs might be changed by AI or other technology.
D: I totally agree. There's the issue of, yes, education is one of the levers for addressing the future-of-work problem; that's always going to be the case. But I think back to the advent of the car in society, and think about: okay, did we want to teach people back then to become drivers? Did we want more drivers? That's not exactly the point. So when people tell me, well, just train people to use computers more, I notice the same thing.
D: It has to be a change in the way we equip people with skills to address the new, the future, job market. One of the things that comes up a lot in conversations about the future of work: when you ask for a number, that is like a bear trap right there; I'm not touching that. We've had people writing papers saying nine percent of jobs are at risk, and the same for seventy percent of jobs. I'm not touching that.
D: Most likely, those numbers are not accurate, and they don't really help us. But one of the things that comes up is people trying to think about what the job of the future is, what the characteristics of the job of the future are. And people have thought: well, in general, jobs that are robust to automation tend to be cognitively heavy and non-routine. Another way of thinking about it: you tend to need something that involves social manipulation, social interaction, some version of creativity. And it's weird.
D: This idea of fine perceptual manipulation seems to come up over and over again. So what that leads me to think is: when we think about the jobs of the future, re-skilling for the new labor market, it's more about being flexible, being creative, and less about saying, okay, go into this particular career path. Because, and we already see this now, people don't last more than four or five years in single careers anymore.
D: We need people to be more flexible at switching between jobs, at using different skills. And what that means also is, well, that's more of an individual approach. But from a regulatory, from a government approach, there's this question of: how do you support people whose careers are going to be more fractured over time? We have to worry about: what does a safety net look like that benefits and safeguards the welfare of workers?
C: One thing to add there: I certainly agree on the notion that we're not just going to make everyone coders and they're going to be fine for life, right? We're in a much more dynamic labor force, and I think that people, education institutions and legislators all need to think about: what does a lifelong learning model look like, right? We talk about lifelong learning right now and just tend to kind of throw it on top of the existing education model that we have. It's just sort of on you, the worker.
C: You're left to think about what you want lifelong learning to look like for your career. It may help, as legislators, to start thinking about how your local institutions meet and encourage those needs of lifelong learning, so that when we have the next technological revolution beyond artificial intelligence, whatever that might be, we don't sit up here and talk about how we are going to have to suddenly intervene to help people that are displaced by it. Hopefully, societally.
B: Well, okay, so this makes me rethink what I was going to ask you. Because, you know, one of the things I wanted to ask, maybe because you mentioned a couple of different times how this is really going to come down to working with lawmakers, and lawmakers needing to think about this and take action: my thought then becomes, you know, particular areas where maybe legislation should be focused. We've talked about education. We've talked about re-educating, maybe, older workers in particular.
B: That was a really poor choice of words: offering new skill sets to older workers, let me try that one here. But maybe another element in all of this is educating lawmakers, as much as we're educating, you know, students in school, about how these systems work and maybe where they're going. And I ask that not as a detriment, of course, to anybody in an elected office. But, you know, technology moves super fast, and if there's anything anybody that's elected can tell you, it's that things don't move super fast in state houses. So how? Where?
C: Mostly at the federal level, because our bread and butter is at the federal level on these issues. That's not to diminish everything that you guys are doing; it just speaks to where I've been spending the past ten years. But we have been making a concerted effort to develop new leaders in artificial intelligence at the federal level. That means we've been organizing small roundtable events, we've been organizing briefing events for both staff and, particularly, younger members of Congress who really have an interest in being future leaders on these issues.
C: That's an effort that I think there would certainly be merit in attempting to replicate in state houses. It would be a heavy lift, because of the number of state houses and the number of potential great leaders that we could have out there. I don't want us to shy away from it, though, and I think that it'd be a good thing to try to tackle.
D: This comes up quite a bit: how do you best, optimally, inform legislators to be able to regulate technology in a useful way? One thing I found is that there are two types of skills, especially working in artificial intelligence. Knowing the technology is one thing, but knowing how policy is affected by the technology is a different skill set entirely. It's rare to find that in one person.
D: And I don't think it's necessarily the best idea to try to push all of that into one head. I think it's important that legislators are advised by very well-trained people on the technology they are usually about to regulate. I wouldn't expect them to know all the nuances of deep learning or artificial intelligence, but having access to smart people, really smart people, is the way to go.
B: I think that's a really key thing here to take a look at. Because, basically, I mean, how do we mitigate against the human fallibility of, you know, what goes into creating algorithms? We're basically talking about teaching machines to do things that usually require human intelligence. But, you know, if we're talking about making the machine capable of doing that, we're still talking about human input. How is it even possible to mitigate against that? And what are maybe the ramifications of not getting a handle on that?
D: The ramifications are dire: Terminators! No, not quite that far. So this comes up a lot: how do you create an artificial agent that's able to control for its fallibility? And I always try to look back to how we deal with humans. Humans are very intelligent, and we figured out a way to have institutions that control for the mistakes we make individually. And so we do this by systems of accountability. We do this by oversight.
D: We do this by careful certification. The same thing happens with an algorithm: you train an algorithm, and if it's making decisions in a high-risk, high-stakes domain, it's somewhat irresponsible to have it just go do its job without oversight, without some version of verification, validation and safety checks. We do this with everybody, everybody who has a responsibility. Why would we expect any different with artificial intelligence systems?
D: The difference here with algorithms, with machine learning systems, is that getting an algorithm to explain to you why it made a particular choice has been tough. You hear about this thing we call the black box problem: basically, you give an algorithm an input, and it gives you an output, and you're trying to understand why it gave you that output. It's not quite clear how to do that, and much of the research in the past two years has been focused on trying to make these algorithms more explainable and more interpretable.
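One family of techniques in that research line treats the model as a pure black box and probes it from the outside: perturb one input feature at a time and watch how much the output moves. Below is a minimal sketch of permutation importance; the `model` is an invented stand-in for the opaque system, not anything from the panel, and real systems under study are of course far more complex.

```python
# Sketch of a black-box probe: permutation feature importance.
import random

def model(features):
    # Opaque stand-in model: we only get to call it, not inspect it.
    return 3.0 * features[0] + 0.1 * features[1]

def permutation_importance(model, rows, trials=100, seed=0):
    """Score each feature by how much shuffling it disturbs the outputs."""
    rng = random.Random(seed)
    n_features = len(rows[0])
    baseline = [model(r) for r in rows]
    scores = []
    for j in range(n_features):
        total = 0.0
        for _ in range(trials):
            # Shuffle feature j across rows, breaking its relationship
            # to the output, and measure the average output disturbance.
            shuffled = [r[j] for r in rows]
            rng.shuffle(shuffled)
            perturbed = [list(r) for r in rows]
            for i, v in enumerate(shuffled):
                perturbed[i][j] = v
            total += sum(abs(model(p) - b)
                         for p, b in zip(perturbed, baseline)) / len(rows)
        scores.append(total / trials)
    return scores

rows = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
imp = permutation_importance(model, rows)
# Feature 0 (weight 3.0) should matter far more than feature 1 (weight 0.1).
print(imp[0] > imp[1])
```

The probe never opens the box; it only observes input-output behavior, which is why approaches like this apply even when the model's internals are proprietary or uninterpretable.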
C: I think all of that is very important. I want to add a very non-technological piece to this that would be of assistance as well, and it's a very important thing to the Consumer Technology Association, which is the diversity of the technology workforce, right? There's still a lot of room to improve, and that's something that we're very upfront about and open about, and we want to be part of the solution for it.
C: If you don't have a diverse workforce, there is a much greater chance that you are going to have an algorithm that does not reflect the society that it is then being deployed on. And so it is not just a matter of taking advantage of all of the talent that we have in the United States, which is a good case for diversity on its own. It's also about making more diverse technology, which is less likely to be biased in some of the ways that we talk about creating biased outcomes.
D: So, I guess it's useful to point to concrete examples of ways to try to address bias in these types of technologies. My friend and colleague, researcher Timnit Gebru at Microsoft Research, has this paper on creating datasheets for the data that goes into training algorithms. Because, as you say, a lot of the biases, the inequities, that we see in algorithmic systems come from inequities in the original training dataset.
D: If you are going to deploy an algorithm, probably in a public space, it's up to the designer and deployer of the algorithm to check how that algorithm affects different parts of society, to check for all sorts of safety problems and also for bias problems, and to have that in a concrete, consistent document. Okay.
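As a rough illustration of what such a "concrete, consistent document" might contain, here is a hedged sketch of a datasheet record. The field names and example values are assumptions made for illustration, not the exact schema of the Gebru et al. datasheets paper.

```python
# Illustrative sketch: a structured datasheet that travels with a dataset.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str            # why the dataset was created
    collection_process: str    # how and from whom the data was gathered
    population_coverage: str   # which groups are represented, and how well
    known_biases: list = field(default_factory=list)
    recommended_uses: list = field(default_factory=list)
    prohibited_uses: list = field(default_factory=list)

# Hypothetical example entry, invented for illustration.
sheet = Datasheet(
    name="face-images-v1",
    motivation="Benchmark for face detection research.",
    collection_process="Scraped from public photo sites, 2015-2017.",
    population_coverage="Skews heavily toward lighter-skinned subjects.",
    known_biases=["Higher error rates reported on underrepresented groups"],
    recommended_uses=["research benchmarking"],
    prohibited_uses=["law-enforcement identification"],
)
print(sheet.name)
```

The value of such a record is less in the code than in the discipline: anyone inheriting the dataset sees its coverage gaps and intended uses before training on it.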
B: [inaudible]

D: It's a difficult, it's a difficult suggestion, to tell people either in the commercial space or in the public space not to deploy. Asking somebody not to deploy a technology is a harder sell than asking them to be more careful, be more cognizant, be more understanding of all the ways that technology can go wrong.
D: As for AI systems that are beyond bias: I don't think that's going to happen anytime soon. And there's also this added question. Think about the COMPAS example from two years ago. A couple of years back, ProPublica publicly released a report showing that the COMPAS algorithm for recidivism estimation was biased in interesting ways, and the company that made the algorithm shot back, saying that, actually, no, we're not biased. And if you look at the conversation between them, they were talking about bias in two different ways, and those were different ways of talking about bias, or equity.
B: So those are some negatives. Let me ask about a positive; I hope it's a positive: cybersecurity. I think every state is concerned, every state lawmaker is certainly concerned, with, you know, improving data security and preventing hacks and all kinds of things. Tell me a little bit about how AI is being used in the cybersecurity space.
D: I don't want to name-check a company, but I am aware of at least one company that's deploying deep learning to try to identify signatures for viruses. By signatures I mean, very broadly, understanding what a piece of malware looks like, and trying to use that type of signature in the wild to identify either old viruses or new viruses.
D
Showing
really
really
good,
really
really
good
performance
in
general,
when
we
think
about
artificial
intelligence,
at
least
deep
learning.
It's
about
pattern,
recognition
as
its
main
strength
and
if
you
think
about
virus
detection
cyber
security.
It's
really
most
of
it
is
about
pattern,
recognition
recognizing
what
malware
looks
like
either
in
terms
of
its
code
base
or
in
terms
of
its
behavior
and
once
you're
able
to
recognize
those
patterns
that
helps
you
improve
your
virus.
You
antivirus
outcomes.
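As a rough illustration of what signature-style pattern recognition means, here is a toy sketch using byte n-gram profiles over invented byte strings. Real products use learned deep-network features over far richer inputs (full binaries, API call traces, runtime behavior), not hand-built counts like these.

```python
# Toy sketch of signature-style matching over raw bytes (hypothetical
# data; illustrates the idea of "recognizing what malware looks like",
# not any vendor's actual method).
from collections import Counter

def ngram_profile(data: bytes, n: int = 2) -> Counter:
    """Count byte n-grams: a crude stand-in for learned features."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two n-gram count profiles."""
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b) or 1.0)

# Hypothetical "known malware" profile vs. two new samples.
known_bad = ngram_profile(b"\x90\x90\x90\xeb\xfeMZPAYLOAD" * 20)
sample_1 = ngram_profile(b"\x90\x90\xeb\xfeMZPAYLOADXX" * 20)  # shares bytes
sample_2 = ngram_profile(b"hello world, ordinary text " * 20)  # unrelated

for name, s in [("sample_1", sample_1), ("sample_2", sample_2)]:
    label = "suspicious" if similarity(known_bad, s) > 0.5 else "clean"
    print(name, label)
```

The variant that reuses most of the known payload's byte patterns scores high; the unrelated text scores near zero, which is the whole point of generalizing a signature beyond an exact hash match.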
C
No, I mean, you've highlighted the important stuff, which is that we're at a point right now where this technology can often perform better than a human in this instance, right? This is a sweet spot for AI technology: it's a very data-heavy, computationally heavy and pattern-heavy area. Those aren't things that the human mind tends to be excellent at, and they are things that our current iteration of artificial intelligence technology happens to be really good at.
D
I will point out (I know you wanted to go positive, but okay) that intelligence is relative to background context, to background intelligence levels. So far, antivirus cybersecurity has been trying to thwart human adversaries. Artificial intelligence is dual use: we can use it to defend, but you can also use it to attack. So we can imagine a future in which the use of artificial intelligence enables better, more sophisticated cyber attacks.
D
We already had one example, almost five years ago, where there was a very targeted piece of malware that was trying to achieve a specific aim, and it used basically rudimentary intelligence to achieve that specific aim. Imagine that multiplied by ten: that gets much, much worse. So you can imagine nation-state actors vying against each other with artificial-intelligence-enhanced malware to achieve strategic aims.
B
So, I mentioned autonomous vehicles, which, you know, in recent years we've seen states really competing, and cities really competing, to get autonomous vehicles legalized to be tested on their roads, right? Certainly here in California; we've seen it in Pennsylvania and Arizona, of course, also. We also saw a very unfortunate, actually a few very unfortunate, accidents related to autonomous vehicles in recent years, and I think it scares people to some extent that, whether it's a vehicle or a drone, these things could be hacked and weaponized, being used in, you know, an act of terrorism or just what have you.
C
This cuts across not just autonomous vehicle and drone technology. As an industry, if you don't have robust security protocols and your device, whatever it is, can be hacked to the detriment of the user or society, that is a terrible business model, right? It's in our best interest to make sure that these things have robust security built in, so that's something we're committed to across the board. When you're talking about drones and autonomous vehicles, certainly the stakes are quite high, given the nature of those entities, which makes it all the more important to have the right security protocols in place, because, if done right, these technologies could have extraordinary benefits. Think about how bad people are at driving, right? For your average person out there on the road, if you're going to get into an accident, it's typically caused by human error; it's not going to be a natural fluke. If self-driving technology is deployed, and deployed right, we could theoretically mitigate most human-caused accidents with this technology.
C
So that's something really to look forward to; that's an enormous benefit that we could have. It means that we need leaders, like in California, like in Arizona and in other places that are allowing for the testing of this technology, to make sure that we get it right, so that when it's finally deployed, it is done right, it's done safely and, of course, it builds in those security protocols that we mentioned.
D
Definitely, security is always going to be a key problem. One of the things my colleagues found is that even when we are able to provably show, or at least convince people, that the tech is safe enough, safer than humans, there is this perception problem that still plays into regulation, and it's really path-dependent. Think about GMOs in the EU versus GMOs here; think about nuclear power in Germany versus nuclear power here; and it's the same thing with autonomous vehicles.
D
So even if you're able to safeguard a secure system for the current setup, eventually an adversary will learn to overcome that. So I think legislation and regulation should not be focused on the current level of technology; it should be focused on how to blunt that adversarial game. I don't know how to do it, but that seems to be the way to go.
B
So much of what we're talking about, and you guys know these terms better than I do, but the artificial intelligence that is out there now is what we often refer to as narrow or weak, and, of course, the goal is moving toward a more general AI, which I understand to be a much broader, deeper application of this technology. So how close are we to having a general AI environment, and maybe something where these systems actually can out-think or outperform us poor, pathetic humans?
C
The short answer is we don't know, right? We don't know when we will get there, and, you mentioned that that's the end goal: I don't know that it is the end goal. Right now we have narrow systems that are narrowly quite good at the tasks that they are narrowly doing. There isn't a reason for the artificial intelligence systems that we are using now to be more generally intelligent. It just means that you could theoretically apply artificial intelligence to a broader swath of use cases, which could potentially be very useful.
C
I don't think that anyone can reliably say when we will get there. There are a lot of people that have fun predicting whether or not that will be positive, and when and how we will get there. I think the important thing for you all in the policy space is to make sure that whatever is being considered for legislation or regulation is focused on mitigating the actual issues that your constituencies are facing, or could theoretically be facing, based on truly scientific evidence of where this technology is going.
D
We had somebody talk about this just before the panel started, this idea of AGI versus narrow AI. I don't think anybody can claim to have a direct path from here toward artificial general intelligence. I think it's what lots of us researchers are looking forward to, maybe out of perverse glee; it's something we think is interesting, but we don't know how it will come about, from a policy and regulatory perspective.
D
It's interesting to note that, historically, at least in the recent history of AI policy, which is a new space, much of the discourse has been driven by existential-risk conversations that deal with artificial general intelligence. Even if I don't necessarily believe that is a thing now, or in the near future, it has done a lot to raise the profile of AI concerns, and I'm grateful for that. It means that legislators are thinking more carefully about AI policy, and the population is thinking more carefully about AI policy. That's a good thing.
D
Trying to base legislation and regulation on an AGI concern, though, I think is less principled; I don't think there is a good argument to be made for it. And if you want my perspective so far, based on a couple of years of thinking about this: if you want to mitigate long-term AGI safety concerns, working on near-term AI safety concerns is at least as good at mitigating those far-term concerns. Because, let's think about it: many of the AGI-type concerns are about some inscrutable Terminator AI destroying the world because it single-mindedly wants to achieve some goal. Things like making sure that AI systems are validated and verified: that's a near-term concern. Things like making sure that AI systems are aligned with human values: that's not just a future concern; that's a near-term concern; that's a bias concern. Trying to make AI more equitable: that's a value-alignment concern.
D
And this issue of forcing them to explain their decisions: both near-term AI and far-term AI will benefit from being able to explain to users why they're making a particular decision. So the dichotomy is not as useful for regulation or legislation, but it does help get more traction with the population.
B
I have to say, as someone who covered Governor Arnold Schwarzenegger closely for seven years, with the mentions of the Terminator it's hard for me not to hear the Governator in my head when we talk about that. I don't want to hog the microphone all day; if anyone has questions, there's a microphone here in the middle of the room. If you want to position yourself there, I will see you and I will call on you. I'm gonna keep going unless somebody steps up there.
D
Some people argue that the path between here and a future AGI is the issue of transfer learning. Transfer learning is this idea that you take a narrow AI system and it's able to perform well on other tasks; improving the capacity to transfer expertise from one task to other tasks would basically provide a path toward artificial general intelligence. There's been a lot of work in the past two or three years on transfer learning, and it's looking more and more viable as the months go by.
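The intuition can be sketched with a toy perceptron on two invented, related tasks. This is an illustration of the transfer idea only, not how modern transfer learning is implemented: weights learned on the first task give the second task a head start, so it needs fewer corrections than training from scratch.

```python
# Minimal transfer-learning sketch: reuse weights trained on task A
# as the starting point for related task B (hypothetical toy tasks,
# a linear perceptron rather than a deep network).

def train(points, labels, w, b, epochs=50):
    """Perceptron training; returns (w, b, number of weight updates)."""
    updates = 0
    for _ in range(epochs):
        changed = False
        for (x0, x1), y in zip(points, labels):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else -1
            if pred != y:  # mistake-driven update
                w = [w[0] + y * x0, w[1] + y * x1]
                b += y
                updates += 1
                changed = True
        if not changed:  # converged: a full clean pass
            break
    return w, b, updates

pts = [(-2, -1), (-1, -2), (1, 2), (2, 1), (0.5, 1), (-0.5, -1), (1, -0.6)]
task_a = [1 if x0 + x1 > 0 else -1 for x0, x1 in pts]        # source task
task_b = [1 if x0 + 2 * x1 > 0 else -1 for x0, x1 in pts]    # related task

w_a, b_a, _ = train(pts, task_a, [0.0, 0.0], 0.0)            # "pre-train"
_, _, from_scratch = train(pts, task_b, [0.0, 0.0], 0.0)
_, _, fine_tuned = train(pts, task_b, list(w_a), b_a)        # transfer
print(f"updates from scratch: {from_scratch}, after transfer: {fine_tuned}")
```

Because the two tasks mostly agree, the transferred weights already classify most points correctly and only need a correction on the one point where the tasks disagree.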
B
See, that's brilliant; you read my mind, because it leads into the next question. As I said, I mentioned the Terminator, and I think for so many people, when you think of artificial intelligence, you do get this image of, you know, Arnold in any of the Terminator movies, and it's probably a big driver of a lot of our myths and maybe misconceptions about artificial intelligence.
C
That's important stuff to talk about. It was very central to the whole way that Emmanuel Macron portrayed France's approach to AI: making sure that it fit within their view of society. And I think it's important for other societies to think about how AI fits within their viewpoint of the future of society. I don't necessarily think that it's super productive to think about how a Hollywood movie that portrays sentient machines fits into your viewpoint of future society, but think about it if you want to.
D
I like the point about the focus on artificial intelligence research causing a resurgence in the influence of philosophical thought. It's actually one of the things that comes up over and over in my work: I find myself reading philosophers, because they help inform what I'm thinking about for a particular problem. Thinking about having AI explain itself is a question of how you know what you know, which is essentially epistemology. Thinking about, okay, what is fair, what is equitable, what is a biased algorithmic system: that's a question of ethics, and trying to answer it with just an engineering mindset is probably not very useful. In terms of misconceptions, of what the popular misconceptions are: this may not be the most salient misconception, but I'll harp on it, because it's important and it drives a lot of the pathologies we see in the use of algorithms for decision-making. It is this issue of automation bias, this assumption of infallibility in algorithmic systems.
D
Algorithms are consistent; that's what they're designed to do. But consistency is not the same thing as objectivity, and that is something that recurs over and over again: the context of a decision determines what is allowable and what is acceptable. Just having a consistent decision-maker doesn't mean it's best for that context.
D
It might be. So, one of my students is working on the implications of algorithms for the future of work, and this question recurs. It's not quite a philosophical question; it maybe goes back to Max Weber's discussion of the work ethic. What is the value of work? Why is it important? If we ended up in a situation where artificial intelligence, because it improves efficiencies, leads to a post-scarcity society, would we still enforce this work ethic?
C
And the flip side of that is that many people may view jobs as a stable unit of welfare. Whether or not that is the, you know, philosophically ideal way to approach it, it is the societally practical way in which many people think. So that argument has a complete opposite side, and I don't know which one wins. But certainly, discussions of the fact that we might need universal basic income in the next five years are, I think, just a colossal leap from where, technologically, and probably societally, from a total welfare standpoint, people actually would want to be.
D
This is interesting, because it also brings up this issue of pluralism. What is acceptable in China is different from what's acceptable in the EU, which is different from what's acceptable in Africa and from what's acceptable in the U.S. If you're living in a world where we have artificial intelligence deployed across the globe, what set of values gets to dominate, gets to control the artificial intelligence deployment? It's not quite clear we've thought that far ahead, and I don't know.
B
Right now, I mean, one of our real issues at all levels of lawmaking, legislation and governance is that, you know, we seem to have sometimes very different viewpoints on things than the EU does, or China does, or what have you. What are some of the approaches that we're seeing in other nations to this technology, how it's implemented, how it's being used, how it's being regulated, that are different from what we're doing, and maybe impacting in some way what we're doing with it?
C
I think that, as you brought up, each society does, and should, approach the use of artificial intelligence in a societally appropriate manner. So we're not going to be China in our deployment of AI: we're not going to create the level of government-based surveillance using artificial intelligence the way that they are, because our society wouldn't tolerate it. At the same time, we're not going to be Europe in taking our level of personal data ownership to an extreme. I think that the U.S. sits at a relative sweet spot in our approach to technology, security, data and so on.
C
It's played out in the fact that we have held the dominant position in technology creation and technology commercialization globally, truly, for the foreseeable past. I don't anticipate (I hope we don't see) a radical departure from that approach. I think that we can continue to see a reasonable balance between leadership in technology and an understanding of the fact that we're moving toward more data-heavy technologies that are going to require conversations about data transparency, data ownership and data control.
D
Nobody denies that the United States context is different. In fact, many of the 50 states have different contexts in how they think of technology; California and many states in the South don't agree very much. I think one of the key aspects, besides the commercial aspects of the use of technologies, is the defense aspect, and the differences between the different regimes have long-term consequences. I think it's hard to argue that the US will keep such a gap between itself and the second most technologically advanced country. We are probably going to live in a more multipolar world, and so the choices all the countries, all the domains, make will affect our choices in interesting ways. One of the interesting discussions over the last year has been the use of lethal autonomous weapons.
D
What the United States chooses to do in that space will affect what China chooses to do in its own space, and will affect what Russia chooses to do. So there's going to have to be this multipolar view in terms of technology and technology policy, and I don't know what mechanisms exist for that quite yet.
B
Yeah, you know, we're in such a polarized political environment in this day and age, and I don't want to make this a political question, because it's not necessarily a political panel, but clearly we have situations where, you know, some lawmakers in some states can't travel to other states because of, you know, political decisions that have been made, policies that have been adopted, etc., etc. Technology crosses every border, everywhere, all the time, notwithstanding China blocking Google or what have you; I mean, we still see it.
B
It doesn't matter where you're at: technology is going to get accessed somehow, someway. So it does make me, you know, wonder, with all of this polarization, what role this kind of technology plays, or how it will advance or not advance based on these political things. Can politicians screw it up, I guess, is the short version of this question; or is it going to march on regardless?
C
If we're going to adopt a policy of sending those people home to work for competing companies in foreign nations, we are shooting ourselves in the foot, all right? If you want to be a leader in technology, you want an immigration program that helps facilitate getting the minds here that you need. Look at the way that foreign countries are approaching this, whether it's EU countries, whether it's Australia, whether it's Canada: they are making active and concerted efforts to recruit people that fill holes in their economy.
D
Back in grad school, anecdotally, all the engineering classes were 90% immigrant, and many of them would leave after they got their degree and go back home to apply their expertise to other countries' economies. It's gonna be a problem; it's gonna be a huge problem. Although, I want to poke a hole in one thing: technology is marching forward, and it finds a way to get into every nook and cranny of the world, but data does not, at least by regulatory standards.
D
Increasingly, we are seeing what we call data silos, or data blind spots. Many companies in the United States have limited access to data in China; many companies in the United States have limited access to data in Russia. And this has implications for what types of AI deployment you can create: if you're trying to create something that caters to a Chinese audience, that's hard to do. I also pointed out earlier this issue of filter bubbles.
D
So even if data is everywhere, let's say we're in a state where data supposedly flows freely, people can still find their way, through the use of algorithms, into data silos, where the only accessible information is what reaches a certain sub-network. So in a sense it's a brave new world, but there are still fences between people, and between countries.
B
So, we only have a few minutes left, so I want to ask kind of a final question here. You've answered this to various degrees already, but I'm gonna ask you maybe to put a little bow on it. What do you see as the most important role that state legislatures, governors and state lawmakers can play in regard to artificial intelligence, in the advancement of artificial intelligence (easy for me to say), to the greatest benefit of their constituents? What is the role that state legislatures and governors and lawmakers should be playing?
C
Education and testing, right? Those are where I view your niche: areas that the federal government cannot, even if they desire to, fully address to the greatest extent. You have much more local control over education, partnerships with industry, collaboration on co-developing curriculums, whatever it might be; you have control over that. On the testing side of things, again, this is a way to differentiate yourself from other states and from other nations.
D
I would say, along the same lines: be proactive in regulation. I don't mean negative regulation; I don't mean hammering down on anybody who is innovative. Many, many tech companies are looking for more clarity in what's allowable within their state, and having some form of dialogue in which you can actually foster innovation in a clear way is a good thing. You can do that without necessarily being anti-innovation, without necessarily stabbing them every time they try something new. I would also say you need to be able to attract talent: talent attracts talent. If you create an ecosystem of people who have expertise within your state, you'll just keep growing once you create critical mass, and that increases the economic capacity of the state. That's a huge thing, I think.
F
I don't know that regulations are going to deal with every one of these issues, because it seems like they pop up in different sectors constantly, but maybe what we need is for the industry to sit down with lawmakers and come up with some kind of an ethics code to operate under, because, from my perspective, you have strayed as an industry beyond what I would consider to be ethical behavior. So I just wanted to put that out there and see if you have any comments.
C
I think it speaks to the fact, as I mentioned, that we do have a resurgence in society of discussions around the ethical and moral application of technologies and how they fit within our idea of what we want our society to be. You brought up a number of different specific instances, and I would certainly be happy to chat with you after about the details of each of those.
C
But let's just take data security: when you talk about hackable devices, that's something that, as I mentioned, our companies consider bad business practice, right? If you're selling something that can be hacked and cause the consumer distress or harm, that is not the way that you are ever going to sell them a product again in the future.
C
So as a consumer, you know: look, if I'm buying this device, they're gonna make sure that they have good protocols in place. And then maybe you would raise some questions about a company that you'd never heard of that showed up at a really cheap price on an online retailer; maybe you rightfully would have some questions as to what kind of security protocols were involved in that device.
D
Excellent point; I love everything you brought up. The issue of privacy seems to be a foundational point for you, and it's a hard problem. In the United States, we haven't had a strong tradition of clear privacy laws in a way that is enforceable across the board. That is something we can work on through regulation, though I don't know if state regulation is the way to go; I'm not sure. And this issue of ethics in algorithms is a huge sticking point.
D
Recently, we've been working very carefully on essentially trying to do a version of applied ethics for AI systems, and I can say that a lot of people, even the major digital natives up in Silicon Valley, are contributing significantly to this effort, trying to satisfy what the population thinks is ethical. It's not easy, and if it seems like they've been evading it, that's because it's sometimes easier to evade than to do, but there is a lot of effort there.
E
...between the policy side and the tech side. And what we've seen in the U.S. is that, historically, we are incredibly reactive to any sort of science and technology regulation or understanding, and I think that there is this gap between what to regulate and the nuance of what stifles innovation and what does not, as well as what lends to safety and what is just kind of lip service. So I'm looking into this area of where we can not only have these education pushes (because I am very familiar with not only the average age of the legislature, but also this fear that you mention, that oh, it's the Terminator and that's what they're scared of), but really, we're pushing all of this automation and trying to get people into CTE and STEM fields. At the same time, the things that can't be automated are going to be what's important in the next, you know, industrial revolution, if you want to use that term; things like critical thinking, problem-solving and those sorts of things are also educational priorities, because those are things we can't automate yet. But, as you said, the speed of the legislature is very slow and linear, and now we're seeing that the rate of change is not linear.
C
East Portland? Yeah, actually, you've got a great resource there in Suzanne Bonamici. She's really thinking through, frankly more than most other elected officials at the federal level, what future education looks like: preparing people for not just the careers of the future but the creative thinking of the future. So you happen to have one of the best resources right there.
C
So I would certainly engage her and talk to her about the ways that she's been working with local education institutions and companies; she's got some pretty interesting ideas. But I'm certainly happy to discuss further, and, as we mentioned, there are a lot of opportunities to educate people at a state level and to have a deeper connection between local companies and the people that have the ability to legislate and regulate in these spaces.
C
So it's something that, taking away from your suggestion and this discussion, I'll be thinking about: how, practically, we can help facilitate that. As I mentioned, it's a heavy lift having 50 states, but it's something that's important to do, and I want to think about creative ways that we can help make sure that people like you can tap into the resources that you want to tap into.
D
But anyway, one of the things I've been trying to do is create projects and public reports that inform not just the layperson but also decision-makers at high levels about the problems with algorithms, essentially informing them how better to think about algorithms and AI going forward. And I'm happy to say that we've been getting some traction: people are understanding more and more about what algorithms are and using that to inform their decision-making.
D
Also, one of the things we're trying to do recently is go beyond just educating: we're also trying to get to the stage where we're giving decision-makers, policymakers and legislators frameworks for thinking about this. Okay, you have this institution in which you have a pipeline of decisions, some of which will be replaced by algorithms: what types of systemic problems arise from those types of deployments? Increasingly, we are getting better at informing and creating frameworks for that. In terms of the issue you brought up about this differential timescale for development and regulation: it's as old as science and technology policy, and one of the lines of thinking that has been resonating with me recently is this idea of adaptive regulation, where, instead of just creating one-time laws based on limited information, you are adjusting in response to data, to new information from the field about that particular piece of technology. Another thing that's been interesting, and probably less high-tech, less theoretical, is this idea of sunset clauses on laws that are related to science and technology policy. Eventually, most regulations in science and technology policy are going to get things wrong in one way, shape or the other, and forcing a continuous reassessment, every two years, five years, that type of thing, is extremely important. I'm thinking right now about things like HIPAA, the health privacy law (I forget the full details of the acronym). It refers to these things called personally identifiable information.
D
The idea is that these types of data are so sensitive, they must not be disclosed in health applications. And increasingly we are finding that there is no such thing as PII: every piece of information is personally identifiable if you have enough of it. So there's an example of a law that was created at a time when AI was still nascent, and that is increasingly becoming more and more obsolete. Having some type of revision or review of these types of laws, sunset clauses, might help; I'm still thinking through it.
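The "no such thing as PII" point can be illustrated with a toy lookup over an invented dataset: fields that look harmless on their own act as quasi-identifiers in combination.

```python
# Hypothetical sketch: none of these fields is sensitive alone, yet
# combining a few quasi-identifiers can single out one individual
# (the classic ZIP + birth date + sex re-identification pattern).

records = [  # a toy "anonymized" health dataset (made-up values)
    {"zip": "97202", "birth_year": 1964, "sex": "F", "diagnosis": "flu"},
    {"zip": "97202", "birth_year": 1964, "sex": "M", "diagnosis": "asthma"},
    {"zip": "97202", "birth_year": 1980, "sex": "F", "diagnosis": "none"},
    {"zip": "97209", "birth_year": 1964, "sex": "F", "diagnosis": "diabetes"},
]

def matches(dataset, **quasi_ids):
    """Return all records consistent with the given quasi-identifiers."""
    return [r for r in dataset
            if all(r[k] == v for k, v in quasi_ids.items())]

print(len(matches(records, zip="97202")))                   # still ambiguous
print(len(matches(records, zip="97202", birth_year=1964)))  # narrower
hit = matches(records, zip="97202", birth_year=1964, sex="F")
print(len(hit), hit[0]["diagnosis"])                        # unique match
```

Three coarse fields narrow four records down to exactly one, exposing the supposedly protected diagnosis; with enough auxiliary information, almost any attribute combination becomes identifying.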
A
Based on the discussion that we've had and heard, I wanted to ask the poll question again, to see who is still looking forward to artificial intelligence and who is looking at it with trepidation, just by a show of hands. Has your position changed after hearing this discussion, or is it maybe still the same amount of trepidation?
A
Good point. Well, please join me in thanking our distinguished panelists today. And I saw that there may have been a couple of other questions, so if our speakers are still available and want to stay after, they can help answer some of those questions for you. Thank you so much, and thank you for coming today. Yes.