From YouTube: Interim Joint Committee on Tourism, Small Business, and Information Technology (7-31-23)
A
Thank you guys. Also, before we get started, I believe Representative Pollock has some guests he'd like to introduce. And also, does anyone else have guests they'd like to introduce today? Okay then, jumping right into the meeting with our first presenters joining today: we're going to talk about AI and government. Come up to the table, introduce yourself for the record; looking forward to your testimony.
F
Thank you, Mr. Chairman, and to your co-chairs and all of you members: it's a terrific pleasure to be with you today and to think about a subject of consequence. Perhaps the hardest part of your job is keeping up with everything, the changes, updates and advancements, and knowing how best to set policy and law and spending. One of the most compelling, complex, creative and concerning subjects before us now is artificial intelligence, or AI. It's algorithms finding patterns, but there's a lot more to it than that. 79 percent of America's CEOs just told Fortune magazine that it will make them more efficient. Another study at the same time (this was all last week) said that we'll see a drop in the service workforce, comprised by a majority of female employees, by 2030, all because they will be replaced by AI. 51 percent of K-12 teachers are already using AI in routine matters, Forbes magazine says, and they add examples in health care, farming, basic writing and other things.
F
The commentary is everywhere; you see it in all the news briefs. The White House said that they'd voluntarily gotten seven AI companies to agree to cooperate on rules, followed quickly by a global expert who said there's no way to do that, you can't enforce those rules. So some of those questions will fall to you. This opens up a number of questions about advertising, political advertising, copyrights and so on. Our team is privileged to work with Gartner, which is the world's largest consultancy.
F
Gartner
is
way
way
ahead
of
who
does
AI?
Who
should
the
Myriad
pluses,
along
with
the
major
concerns
that
are
expressed
by
leaders
like
you,
leaders
of
all
descriptions
around
the
planet
are
relying
on
Gartner
regularly
because
they
have
several
thousand
subject
matter.
Experts
around
the
planet,
including
one
on
the
screen
from
England,
so
please
be
thinking
of
the
best
and
toughest
questions
to
ask
these
two
outstanding
guests.
F
They are here to help and can give us counsel and direction on any case or company or concern. It's my pleasure to first introduce Alicia Scholar from North Carolina. Her role is client executive for the Americas public sector. She has spent a lot of time here already; the Kentucky Lottery and the Education Cabinet, among others, draw on Gartner for help. So, Alicia.
C
Gartner is the world's largest research and advisory firm, and we're quoted in reputable news sources over 180 times per week. When describing what we do here at Gartner, some folks will say that we're like a Google search: if you don't know something, you can just Google it, right? But I'm not sure that that's the best way to describe what we do, because not everything that you look up on Google is correct. I mean, have you ever tried to diagnose your headache or your stomach ache?
C
The home inspector doesn't really care if you buy the house or not; their job is simply to use their knowledge and skills to review the house that you like and help inform you of exactly what you're getting into. Is there a new roof in your future? Does it have weeping insulation? They look in all the nooks and crannies to help give you a broader understanding of the true investment, or risk, you're about to take, and then you can use that advice to make decisions that fit your family and your needs.
C
To do this, research analysts meet with vendors to understand their products and services, features and benefits, and how they operate, and we do this twenty-three thousand times a year. We also talk with clients about what's going on in their workspace, and we do that 490,000 times every year. These interactions help our analysts, our subject matter experts, to understand what's working, what's not, and why. They have a broad and unparalleled purview of the technology landscape.
C
Why do I tell you these things? Because it's important to know where your news is coming from. We have a very rigorous research methodology to ensure that we remain objective and unbiased; we believe that being objective and accurate are two of the most important things that we can be. In advance of this meeting, I met with technology leaders from some agencies here in Kentucky to get their perspective on generative AI.
C
Everyone that I met with was really intrigued, and they recognized that the new use of this existing technology could have profound impacts. As an example of how generative AI could impact students in Kentucky, I want to share with you an example from Khan Academy, an online tutoring organization. And Ben, if you could do the other room. Thank you.
C
So back in 1984, Benjamin S. Bloom did a study to research whether group instruction was as effective as one-to-one tutoring. What he found was that, when given one-to-one tutoring, even a very average student could perform at exceptional levels. So why didn't schools just start giving all the kids one-to-one tutoring? I mean, obviously that's not very scalable, right? Who can afford to give every student a one-to-one human being to walk with them and talk with them and teach them everything that they need to know, every day?
C
It's just not a realistic goal. But with generative AI, maybe it is. So at Khan Academy, they actually used generative AI and wrote code that allows that one-to-one tutor situation: the computer can act as a tutor, with natural language, prompting the students with learning cues and questions that feel like a real live tutor, and the results are stunning. Students in Kentucky already have one-to-one devices; you're ahead of the rest of the country here. So what if you turned those one-to-one devices into tutors?
C
If you did that, it's very possible that Kentucky students could leapfrog not only other states but other countries. How would that impact Kentucky? How would that make you a destination state? How would that impact your workforce and fill in those gaps? The average agency right now experiences between a 25 and 30 percent vacancy rate, and that's not just in Kentucky; that's across the country.
C
In today's presentation, Ben is going to take a moment to define what generative AI is and what that technology looks like. He's going to share some potential examples, or as we call them, use cases, of how generative AI could help governments and agencies provide better services to citizens, as well as some potential risks. Then he's going to share how other states and governments from around the globe are standing up some guardrails, to help ensure that as we explore this new use of technology we're doing so as safely as possible.
G
Hello, everybody. My name is Ben Kane; I'm a senior director analyst at Gartner. As you can probably tell from my accent and my background, I'm in the UK, and the subject we have today is a point of view. And yes, inevitably, I used a generative AI mechanism to generate this rather creepy picture, because, well, why wouldn't you in the circumstances?
G
So the range of capabilities, in classical academic terms, runs from reactive AIs. If an organization is selling this to you, they will call it an expert system; it's easier to sell an expert system than a reactive system, but they mean the same thing. It's a narrow, single task: it has captured some human knowledge, but it can't learn.
G
Then we run into limited-memory AI; that's where we are now. It predicts a short distance into the future, so autonomous and self-driving vehicles, for those people who have various different forms of cars and are seeing the things about self-driving Teslas or whatever it happens to be, are an example. Generative AI does the same thing: it generates the next word or phrase or component of a picture that looks like it came from an original data source.
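A loose illustration of that "generate the next word" idea (this toy bigram model is my own sketch, not anything presented at the hearing; real systems use neural networks with billions of parameters, but the core move, producing the next word that looks like it came from the training data, is the same):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count, for every word, which word followed it in the training text.
    words = text.split()
    nxt = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1
    return nxt

def generate(model, start, length=5):
    # Repeatedly emit the most frequent next word seen during training.
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat the cat ran")
print(generate(model, "the", 3))  # prints "the cat sat on"
```

The output is fluent-looking but carries no notion of truth; it is just the statistically likely continuation of its training data.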
G
If you want to go a little bit more philosophical, you can talk about theory of mind, where you actually attempt to simulate the way that a human mind works. That is not what exists yet, and of course self-aware AI is still in the realms of science fiction. Another way of looking at it is narrow versus general versus superintelligent. Narrow is very specific to a task, like voice recognition from your Siri or Alexa or whatever. General AI is the ability to learn and understand, or simulate that, as well as a human, and we're not actually there yet, despite all the hype. And superintelligence is difficult to conceive of, really, because we are only human, and it's in the realms of sci-fi, hopefully for quite a while.
G
Soft matching, risk assessments, voice recognition (remember your use of Siri or Alexa or whatever, or Hey Google), anomalies, fraud, diagnoses: if anybody's been for cancer scans, they're starting to use AI for determining whether a cancer is present, in a way that has actually improved on the human ability to spot it. Very powerful, very useful in particular areas.
G
Protein folding is an interesting example. Google DeepMind managed, using machine learning on known proteins, to work out how proteins fold from amino acid sequences for everything else within a few months. Now, that is immensely powerful compared with what anybody could do beforehand, and is also an extremely powerful thing in areas such as drug development and medical treatments.
G
But that is not generative AI. Generative AI very specifically creates an output that looks like it came from an original data set: a picture that looks like a human, a text that looks like it's written by Shakespeare (not very much like Shakespeare), music that sounds like it was by either Bach or the Beatles, depending on which version you use. It creates outputs that look or sound like they came from an original data set; that doesn't mean to say they did, and it does not mean to say they're true.
G
It's like having a very, very convincing person tell you something that may be complete rubbish. Now, they're getting closer, but there's still a gap there, because it actually works by generating information and then having a discriminator behind it to say: does that look like the original, yes or no? And it evolves the generator until it's continuously producing information that looks like the original. But it is generating it; it is not real information.
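The generate-and-discriminate loop he describes can be sketched very roughly as follows (a hypothetical toy of my own, not any real system: here the "original data" is simply strings of even length, so the discriminator is a one-line check):

```python
import random

def discriminator(sample):
    # Yes/no: does the sample look like it came from the original data set?
    # In this toy, "original" samples are the ones with even length.
    return len(sample) % 2 == 0

def generator(rng):
    # Proposes a candidate output of random length.
    return "x" * rng.randint(1, 10)

def evolve(rng, max_rounds=100):
    # Keep generating until the discriminator accepts. The accepted output
    # merely LOOKS like the original data; it is generated, not real.
    for _ in range(max_rounds):
        sample = generator(rng)
        if discriminator(sample):
            return sample
    return None

sample = evolve(random.Random(0))
```

Real systems evolve the generator's parameters rather than just retrying, but the shape is the same: the loop ends when the discriminator can no longer tell the generated output from the original, which is exactly why passing the check says nothing about truth.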
G
The issue with this is that, if the AI comes up with an answer, the human who is doing the assurance, who has the final say, tends to anchor on the information they're first given. So there are still some levels of bias that come in; but then, that bias exists with a human anyway.
G
Medium levels of power is where, instead of having the human unequivocally responsible for every single part of the answer, you basically have the human, with a light touch, saying: yes, that's okay; yes, that's okay. If you can imagine a human overseeing a chatbot, they see 15 or 20 streams and they're doing a little bit of moderation.
G
So, the example of autonomous vehicles: you're only killing one person at a time, which is quite bad, but at least it's not at scale. High levels of power and high scale is if you were to allow the system to make decisions on your behalf affecting large numbers of people. Now that means you're starting to embed any bias or any inaccuracies in your model into the decisions made affecting an entire population.
G
2019: that's quite a big impact, folks, and that can lead to major scandal and a significant loss of trust in government. That is not a good thing.
G
Now, the EU has actually got a set of regulations out, which have been passed by the European Parliament, and that has, at the top level, unacceptable use: life-changing decisions made automatically with insufficient human governance.
G
So, moving on, let's talk about generative AI specifically. Now, people think that there's only a small number of players in the marketplace, because they've seen the ones that keep on coming up in the press, but there are large numbers. This is just a small set of the types that are available, so don't regard it as exclusive or complete, and it's changing every day. It is a very, very busy field at the moment, as you can possibly imagine. So, some of them generate text; you can have ones that generate images.
G
Has anybody here ever been asked, or demanded of, by somebody over the phone that they provide them some information on an urgent basis? Because if they have: are you sure that's a human on the other end of the line? Or is the voice of the person you think it is generated by a voice-synthesizing system? You don't need 10 seconds of voice sample to do this, and it can be pretty convincing to a human.
G
There are already examples of extortion happening, by people generating voice from a social media clip, of which there are an awful lot around on the internet: for you, obviously, as politicians; for your children; whatever. And that is a threat surface which doesn't have a technical defense; it is a behavioral maturity that we're going to have to learn. But the core thing in government use that people are thinking about is chatbots: chat with data, with documents, all these kinds of things, which is actually in the text arena.
G
So let's look at what a generative AI model is. At its simplest level, it's a neural network, which means originally it was developed on the way that human neurons function, in terms of increasing or decreasing links depending on how many times you've experienced an event, creating a strong memory, that kind of thing. Many parameters: the largest models carry up to about a trillion, that's a thousand billion, parameters in the model.
G
That's a lot, but when you consider that a standard human brain, which runs on the whole on the power of 20 watts, has over a hundred trillion neuron connections in it, we're not quite there yet, not by a long chalk. And they don't quite parallel, but it's quite useful to consider. It's trained on large quantities of text, or other data if you're talking about images. Most of the data is from public data sets, and I'm sure you're all aware that the internet is completely impartial and totally unbiased about every single subject it covers.
G
But the interesting point is it's got emergent behaviors, which means it gets interesting results, because they're not what you would expect, and some of those patterns are getting quite sophisticated now: answering questions, improving your mathematics formulas, all these kinds of things. These emergent behaviors are not what we programmed into it. Yes, it's a learning pattern system, but over a large amount of data you cannot completely predict the results.
G
It can speak multiple languages, it can draft rapidly, and it is naive. It's not a human; it doesn't have human experiences; it doesn't understand right from wrong. It's naive. It can produce interesting ideas; it can learn from what it's given; you can add retraining; you can add fine-tuning; you can add a prompt to it which includes the data on which you want it to act. It might not understand sensitivity or privacy.
G
It is: what would you trust a graduate intern with? Giving advice on policy? Maybe. Direct responses to residents about what the best action is to take in a legal case? Probably not a graduate intern; I think you might want a human supervisor there. And that's exactly the same principle, and that's quite a useful thing.
G
He
says
what
would
you
trust
a
graduate
intern
with
now,
if
you
gave
every
member
of
your
agencies
a
graduate
intern
to
help
them
do
their
work
and
they
would
supervise
them
and
they'd
interact
with
them
and
they'd
ask
them
to
do
the
grunt
work
and
all
these
kinds
of
things
or
go
off
and
find
me
documents
on
this.
You
can
see
how
that
would
be
very,
very
great
boost
to
productivity,
but
you'd
still
want
that
human
in
the
loop
at
this.
At
this
stage,.
G
Now, let's look at these. If you actually look at these, they're quite common things; I mean, asking policy questions, Freedom of Information requests, ramping up staff. Like, you tend to ask the person in the seat next to you how to claim your expenses or get a new stapler; well, you can ask the system instead. Process guides, creating process guides: they're all pretty good use cases.
G
The problem is, that's where the gap between what's generated by the AI system and when a decision affects a resident of Kentucky is the smallest, and it's therefore also the point at which it's quite high risk, because you haven't necessarily got enough of a human in the loop. Obviously, if you're talking about the early pilots, people experimenting with it, they're very aware of the limitations; but when you try and scale it to the entire population, you start to get...
C
I think it's important to note that, you know, if agencies are experiencing vacancy rates of 25 to 30 percent and they're trying to modernize a lot of these legacy systems, where the code has been built on year over year (every time there's a mandate or a legislative change, they code over it and they code over it and they code over it), you end up with these huge monolithic systems that take years and years to break down. Think about some of the modernization efforts that have been going on here in Kentucky with large technology systems; there's a high failure rate. So if you can take a tool like generative AI to take some of the pressure off, to do it right, to do it well, and to take the pressure off of the teams, that's a huge use case. So there's a lot of downstream impact, I think, in these.
G
But the area around training is also very powerful in the early phases: generating materials, obviously checked by a human, but it can enrich those kinds of processes quite fast. Those are the early use cases.
G
As you go through those use cases, you need to say: well, just because it's a shiny hammer doesn't mean to say that everything's a nail. So you have to think: what do I actually use it on? Which parts of our functions do I use it on? Why do I use it for that and not something else? Is it feasible? All these kinds of things.
G
So let's start by looking at, well, in this particular case I'm going to use Protective Services. We've done a capability reference model for Human Services, and this particular area is things like child protection or elderly protection. But when you actually look at what happens, it's quite common: referral queues, triage, establishing case information. So, okay: those are the activities you do; those are the technical capabilities you need. Which of those will benefit from this new technology? Not all of them. Let's not just scatter it around and assume it's like magic dust.
G
Then you have to say: okay, that's a whole set of initiatives, and this is just one area, because you've got eligibility, enrollment and case management; you've got payment systems; you've got service providers; there are lots of different capabilities. And this is just in Human Services, never mind Protective Services or taxation or democratic services and all the rest.
G
So one thing Gartner does is we actually go through these kinds of things, multiple points of view from multiple organizations all across the world, and all I can say is, since this came up, my diary rather exploded with calls from every single continent, including, I'm pleased to say, Kentucky, but everywhere else as well.
G
What is the risk to service? Now, the risk may be complete service failure; that's quite embarrassing. It could be an adverse event for a child, which would be very bad for that child and bad for the reputation, if that's what happens with your current systems. Or it could be that you decrease the risk by addressing your technical debts; it's less likely your system falls over.
G
The sixth of the challenges: if the target organization cannot adapt to this new technology. That's actually the hardest point, and it is actually the most common point of failure in major government programs across the world. If the organization cannot adapt, because they don't have the skills or the governance or the maturity, or even the operating model (the incentives are wrong), they simply will not use that technology effectively. Then you've sort of wasted your money and, of course, being AI...
G
How disruptive is it going to be, and are they going to be able to adapt? These are the ones where I would say: take care; it's not obvious that it's going to be the right thing. You should be starting on the ones where you have high feasibility, even if it's only low to medium business value, because you know it'll deliver the outcomes, and from that you can learn; from that you can gain the maturity to actually start dragging these other ones back into the green space, in which case you can do it in an ordered fashion.
G
But that isn't to say that it's just about what seems most important in terms of business value; the feasibility is utterly critical. So the recommendation is simple: appraise which business capabilities can benefit from which generative AI capabilities, and you need to aggregate the value and ensure feasibility, remembering three things.
G
Large language models learn from existing data, which means they will incorporate the biases in that data. The large language model is not an up-to-date source of information in and of itself. You could not use this to answer a chat after an environmental event, such as, I don't know, a tornado or storm, and assume that it will provide up-to-date information. It will not. That is not how it works: it bases its answers on that network of a trillion parameters, which was built when it was last updated, six months ago or whatever. There is no conceptual model of right or wrong in this; they try and put the filters on, but there is no direct theory of mind here. The data provided may therefore be seriously misleading; that's what's called a hallucination. At the moment there are questions about copyright, whilst the products cannot be copyrighted.
G
Very important, this: just because government is using it does not mean to say that government is the only user of this system. There are malign users, but remember the external users too: if you want to find something out about Kentucky, you tend not to go to the government websites; you tend to go to Google first, or Bing. But there are also malign uses: you can now generate large numbers of FOI requests and complaints just by generating them from AI.
G
So, very simply put, you mitigate the risk. You should have policies in place for your own use, thinking about the state's use, that generative AI is not an infallible oracle. Within the government, you should have a safe space for experimentation, because people are already experimenting; harvest the good ideas, and scale them and apply them once it's safe and sure.
G
You want the engine to run precisely every time. You want that pickup to happen exactly on time, every time. You want that software to run perfectly every time. Precision, repeatability: that equals quality. Generative AI is not like that. It is a generative system with emergent behaviors, where variation is a feature, which means the whole mindset around perfect responses has to go; continuous assurance instead.
G
If we really want to go off into the future: this is not yet, so don't worry about it; it's just to be aware of it. Yes, they can invent code; and if they can invent code, logically they should be able to invent a new AI, and you could get a form of evolution happening. And what do you mean by sentience in this context? At some point that has to be addressed. Just not yet; don't fret, not yet.
G
The
hype
should
be
driving
policy,
because
many
people
are
excited
that
policy
has
to
be
in
place
to
make
sure
this
is
delivered
at
scaling
safely.
There
are
many
uses
of
AI
that
policy.
This
therefore
cover
all
uses,
not
just
the
new
one
on
the
Block.
It
is
an
opportunity
to
set
up
Kentucky
to
advance
significantly
in
a
way
it
provides
services
and
the
service
it
provides
to
its
citizens,
such
as
the
education
Alicia
mentioned
earlier.
G
You
should
be
prioritizing
initiatives
on
a
dispassionate
basis,
not
just
who
shot's
loudest,
but
in
terms
of
value
versus
feasibility,
you
must
mitigate
the
risk.
Do
it
in
stages?
Do
not
set
up
something
where
you
get
into
it,
where
you
cannot
back
out
in
the
early
stages
scale
it
only
when
the
residual
risk
is
acceptable
to
you,
and
you
must
keep
abreast
of
threats
build
mitigation
plans.
G
You
cannot
prevent
all
these
things
happening
in
the
outside
world,
but
you
need
to
be
aware
of
them
and
keep
up
to
date
with
them
and
understand
how
you'll
respond,
particularly
when
you're
talking
about
deep
fakes
in
use
in
terms
of
Corruption
of
process
and
or
defects.
In
the
terms
of
a
political
landscape,
so
that's
everything.
A
Thank you for the presentation; that was a lot. I'd say this is something that's coming, without a doubt. We do have a few questions. First is Representative King.
H
Thank you, Mr. Chair. Thank you all so much for being here today. This has been weighing heavy on my mind for several months, maybe a year now, so I appreciate you being here to explain it a little bit more. Mr. Chairman, my observation is more of a comment and a challenge for all of us as a General Assembly, more than being specific to today's presentation. And that is: this is new for all of us, and the decisions we're going to be making now as a General Assembly will set the precedent, and those laws and requirements will have to be tweaked as all of this progresses. But this is a very heavy weight of responsibility that we are taking on here. So one thing that keeps coming to mind is the fiduciary responsibility we have given to professions. The ones that come to mind are lawyers, financial institutions, insurance; probably even real estate comes to mind. They have to have the best interest of the client, or the people they're representing, in mind, and that's how they make decisions. I know there are always going to be bad actors, but I just hope we keep that in mind, and, if and when we can, put some of those requirements in place to have this type of profession going forward have the best interest of their client in mind. I think it'll serve the people we all love and serve very well.
C
I loved your intent; I believe in that. This is a scary thing, and it is a heavy responsibility, but it's so amazing and it's so exciting, and the impact could be so positive if we approach it with a little bit of caution but also a little bit of hope, right? So: the Navy is also looking at generative AI, and recently they have started piloting a chatbot called Amelia. That's her name, and you can look her up.
C
You can Google Navy Amelia and you'll see this picture of this beautiful lady, and she looks real. But what they've been able to do is consolidate multiple IT help desks that solve issues for over 1 million users. So now users, whether you're a sailor, you're a marine or you're a citizen, instead of calling this number for this service or that number for that service, and then pressing one for this or two for that or three or seven, whatever it may be, followed by the pound key...
C
All of these people are having questions, and they're all different questions, but they may have some common thread. If I'm a citizen, I might have a question that has to do with Medicare; I might have someone who's in the corrections system; I might have somebody who's in the court system; I might have questions about that. What if I could go to one place?
I
Thank you, Mr. Chair, and thank you for the presentation today, because I see all the benefits of AI. It's a very powerful tool. It's going to replace industries; it's going to enhance industries. And even though it may be a powerful tool, it could also be a powerful force for us to contend with in the future. And what I mean by that: just last week my daughter, she was on ChatGPT.
I
She said, Dad, you can write a song; just tell it what you want it to write. And it wrote a song, just like that. And my concern is this: if it can do these things that we're seeing it do today, as technology evolves, my trepidation and fear is that, as it does evolve, it will become self-learning, that it would develop a personality. And please don't laugh on the committee, but I watched Mission Impossible last week, and the AI took over the nuclear codes. I know that's a movie, but you know, art and life, they kind of correspond to some degree.
I
And we just can't turn it loose, you know; we have to put really good guardrails in place. And there's a lot of fear with the general public when it comes to AI; there's so much that is unknown, and it's an evolving technology. So what can you give us on the committee that would maybe put some fears to rest? We do want to evolve; we want technology to be better.
I
We want the help centers to be able to accommodate. But at the same time, as this thing evolves, I mean, what could keep it from taking over the power grid, or somebody using AI to take over the power grid? So I'm just trying to be practical. I'm computer illiterate; I'm just a country boy, so I'm just figuring all this stuff out, piecing it together myself. But there is a lot of fear when it comes to AI and the progression that's made.
G
Well, I can answer part of that. AI itself does not have any desires. It is not a human; it doesn't want to take over the power grid. It really doesn't; it's not interested. That's one thing you assume if you anthropomorphize it, put a human's sensibility into the AI, but that's actually not how it works. It is very, very far from that kind of capability.
G
At the moment, what you probably need to be more careful of is how humans are using AI to empower themselves, to give themselves a form of acceleration or, in fact, I can't call it a superpower, but that kind of thing. It's therefore the human actors who are misusing AI which is much more important, and you have to be very careful to make sure there's a very clear path of liability, through whatever the AI has been used for, to someone who is legally actionable and liable for that.
G
It's quite difficult to do well. The EU has actually put that sort of grade of responsibilities and structure in place already, and there are guidelines coming up in quite a few other organizations to do the same. So the point is: it is possible to set up that gradation and have those kinds of levels of graded risks.
G
That said, yes, there are malicious players, bad actors, out there, and this does give them some more power, and you're going to have to consider those in your risk preparation; you can't quite get away from that. But the AI itself is simply not interested in trying to take over the power grid. It is much more likely that the power companies will be able to use AI to significantly improve the resilience of the power grid to attempted intrusions by other people.
G
A
Thanks, sir. I think what we'll struggle with as we look at AI is that, without a doubt, that sword has two edges to it. We use it right now: whether you call in with a complaint or to find something out, those call centers, if you notice, say, "Tell us your problem." That's AI; they're trying to figure out where to route you, which is good and bad. They used to have humans do that; now we have a computer system doing it, and that concerns me. Will it take our jobs? Actually, I'll tell a story. Earlier, we came back from a trip, and my wife and I stopped at two fast-food restaurants. At one of them you had to either order at a kiosk or go through the drive-through, and I noticed the little lady didn't have to punch anything in; when you give that order, AI is putting that food order together for you. Those are jobs that people used to have that no longer exist.
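The call-center triage described above ("tell us your problem," then route the caller) can be sketched as a toy intent router. This is purely illustrative: real systems use trained speech and intent models, and every queue name and keyword below is invented for the example.

```python
# Toy keyword-based call router, a minimal sketch of the
# "tell us your problem" menus described above.
ROUTES = {
    "billing": ("bill", "charge", "payment"),
    "outage": ("outage", "down", "no service"),
}

def route_call(utterance):
    """Return the queue whose keywords appear in the caller's words,
    falling back to a general support queue."""
    words = utterance.lower()
    for queue, keywords in ROUTES.items():
        if any(k in words for k in keywords):
            return queue
    return "support"

print(route_call("I have a question about a charge on my bill"))  # billing
print(route_call("my internet is down"))                          # outage
print(route_call("hello there"))                                  # support
```

The two-edged-sword point shows up even in this toy: the same matching that speeds callers along is also what replaces the human operator who used to listen.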
A
You know, actually, about a month ago, thanks to our comms team here, we put together the first paragraph, and then we did an op-ed that was printed in two local newspapers. AI wrote the paper for us. Trust me, there were no misspellings and the grammar was correct, so that's not me. It does have its good uses, but the one thing you kept saying that I am concerned about is the biases that are going to be put into the system, good or bad, as we do this. All of us have biases, and that's going to infiltrate into it, which is something that extremely concerns me. Remember what we are: humans. We're going to interject our biases into these programs, and it may come home that we wish we'd never done that. So again, as we go forward, this is something we're going to take a hard look at, and start setting parameters on what we do and don't allow. But again, it's coming, so we'd better get prepared for it. It's coming one way or the other.
J
Yes, thank you. Your presentation is incredibly timely, because we're grappling with so many education issues. Kentucky continues to have some challenges; we have fantastic teachers, but perhaps not enough teachers. And that's another area: practical tools for teaching math, practical tools for teaching reading. Is there a function for this? You started to mention that, sir, in your presentation, I think, Ben.
C
Yes, I love that question, and I'll let Ben respond as well. You know, 51 percent of teachers are saying that they are already using AI, and there are applications for just that. How does a teacher make more time to interact with the students? With lesson plans, some generative AI can help them come up with lesson plans. It can search out over all these different resources, come together, and give them ideas and resources at their fingertips, so it cuts down on a lot of the administrative time that teachers don't like. Teachers want to be in front of their students; they want to be teaching. So there's some use there. And then there's Sal Khan; it's Khan Academy, K-h-a-n.
C
So
if
you
go
out
and
you
search
for
Sal
Khan,
Ted
Talk,
there's
an
amazing
Ted
Talk
that
talks
about
some
of
the
some
of
the
use
cases
there.
Creating
that
one-to-one,
tutor
I
think
that's
an
amazing
thing.
One
of
the
things
that
Kentucky
does
have
is
they
already
your
rate
of
one-to-one
devices
is
fantastic.
It's
a
I
mean
you're
ahead.
C
So if your students already have one-to-one devices, what if you turned that device into a private tutor that can interact? My daughter, my precious daughter, went to bed one night just thinking she was a regular college kid. She woke up the next morning and said she'd had a dream that she wanted to go to law school. I was like, honey, I think that sounds great, but you've got to have a job. She decided she was going to take the LSAT, so she took a practice test, and she scored well enough to give her a little hope. So I said, well, if you're going to do this, you need to get a tutor. She went on to Khan Academy and has been using their private tutor, and that child, who has never been a good test taker, I mean, she's nailing it.
C
It's
amazing,
so
I
believe
that
the
scalability,
which
was
missing
back
in
1984
when
we
originally
figured
out
that
private
tutoring
was
better
than
group.
The
scalability
is
there
now
with
devices
Ben.
Did
you
want
to
add
anything.
G
I completely agree. The ability to actually consider a transformation of a service through appropriate use, with the appropriate guardrails, is fantastic. It is very powerful. Exactly as the chairman said, a sharp sword cuts both ways, so you make sure it's used properly. You teach people how to use it properly, with the appropriate guardrails, maturity, and understanding; with that, great.
G
Just
make
sure
that
you
are
aware
of
the
fact
that
you
can
also
industrialize
the
dark
side
as
well.
If
you
are
not
careful
so
make
sure
you,
you
industrialize
the
good
side
and
bring
maturity
to
the
people.
You
are
teaching
so
that
they
understand
the
limits
of
what
they're
getting
into
and
they
understand
how
to
absorb
information
properly
and
judge
its
veracity
and
therefore
you'll
have
a
much
more
better
educated,
a
much
more
mature
population
as
we
go
forward,
which
would
be
fantastic.
C
Yeah, and that's where Gartner can help, with all state agency technology leaders. You know, we work with agency leaders to help them understand what their peers are doing and what's happening around the world, giving them that global perspective, and giving them the tools and the resources they need to further develop the use cases that make sense to them.
E
Yes, when we're talking about our children, many parents in Kentucky actually have issues with the data mining of our children. With the current devices we have, how much more data mining would be possible with AI added to it?
G
The way you would set up such a system would mean that the conversations you had in that particular setting, assuming it was set up correctly, would have no memory outside that system and that interaction with that child. Therefore, there would be no data mining. You can set it up so that there is data mining; that would be poor practice. So, done correctly, you can set it up so that that problem is simply not there.
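The "no memory outside the system" design can be sketched in a few lines: keep the conversation only in process memory and discard it when the session ends, so nothing persists to be mined later. A minimal sketch only; the class and method names are invented for illustration, and a real tutoring product would enforce this at the service and policy level, not just in one class.

```python
class StatelessTutorSession:
    """Toy sketch of a tutoring session that keeps conversation
    context only in process memory and discards it on close,
    so no data survives the session to be mined later."""

    def __init__(self):
        self._history = []  # exists only for this session, never written to disk

    def ask(self, question):
        self._history.append(question)
        # A real system would call a model here; we just report
        # how much context the session is currently holding.
        return f"(answer using {len(self._history)} turns of context)"

    def close(self):
        # Discard everything: nothing is logged or persisted.
        self._history.clear()

session = StatelessTutorSession()
session.ask("What is 2 + 2?")
reply = session.ask("Why?")
session.close()
print(reply)             # prints "(answer using 2 turns of context)"
print(session._history)  # prints [] after close
```

The poor practice the speaker warns about would be the opposite choice: writing `_history` to a database keyed to the child, which is a deliberate design decision, not an inevitability of the technology.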
G
If
you
think
about
what
the
the
equivalent
would
be
for
military
use
the
same
systems,
they
are
very
concerned
about
that
possibility.
Therefore,
a
lot
of
the
engineering
has
gone
in
to
make
sure
that
you
can
find
ways
through
that
it
requires
the
correct
architectural
patterns
and
design,
but
that's
fine
that
can
be
done.
K
Thank you, Mr. Chairman. First, let me begin by congratulating Bob Babbage on becoming a grandfather. I can't imagine what that's like, and I know he doesn't look a day older than I am, but trust me, he's just a couple of years older. I really appreciate today's conversation, because AI is going to revolutionize the world the way the invention of the electric generator and the electric motor did, and if you think back 120 years ago, no one could conceive what electricity would do for humankind.
K
We're
just
scratching
the
surface.
In
this
conversation,
and
even
we
say
things
are
the
realm
of
sci-fi
sci-fi
is
not
even
close
to
being
able
to
predict
what
AI
is
going
to
be
like
in
20
or
30
years,
and
by
no
means
should
we
try
and
ban
it
or
dismiss
it,
but
it's
absolutely
essential
that
we
do
develop
these
protocols
and
these
guard
rails,
just
as
when
our
grandparents
first
heard
of
the
time
that
they
were
able
to
first
split
the
atom
in
the
Manhattan
Project.
K
None of us would be sitting here today. So all I'm asking is that we keep an open mind, and we can make appropriate jokes about sci-fi movies. Speaking of which, I would say that an earlier Tom Cruise sci-fi movie is more relevant: Minority Report. An area of AI that was not discussed today, but that is possible, is that someday some computer programmers and some legislators might say that humans are too fallible, that humans shouldn't be solving crimes with our biases, that we're going to have computers do it, and that humans shouldn't sit on juries or be judges, because they're fallible; computers should do it. I say the exact opposite. I think there's something innately unique about humankind, and we must maintain control over developing our laws, enforcing our laws, and judging each other, and use this just as a support tool. I would be delighted to hear that detectives were able to solve a cold case using AI.
K
I would be delighted to hear that we are able to improve the efficiency, accuracy, and clarity of our laws by having AI support the LRC staff. What I don't want to hear about is a complete replacement, and not just because the step after replacing the support staff is replacing us; at that point we're taking away our basic humanity, and we're just going to be cogs in the wheel of AI. As it stands right now, there's no specific cause for concern.
K
But, as our speaker said, AI, which is just a tool (and eventually will be a self-aware tool), has no interests, wants, or needs; it's not sentient. It doesn't desire anything, but it can be programmed for great evil, and to me the best analog would be biological warfare. We have learned so much about germs and diseases in the last couple hundred years, how to solve them, how to cure them, how to treat them. But at the same time, if we didn't have the systems in place, think about how easily a bioengineer could wipe out the entire human race by deliberately weaponizing a superbug. That's where we need to be focused: making sure we have oversight on who has access to the technologies, what the trainings are, and that there are serious repercussions if someone goes off the rails with it. We do that with nuclear. We do that with biological.
K
We are just starting to scratch the surface on AI, but we, those in the legislature today, are going to be the tip of the spear, and two generations from now people are either going to look back and say, thank God they did the right thing to keep it under control for the betterment of humanity, or, shame on them for allowing the destruction of what took us thousands of years to build. Thank you, Mr. Chairman.
F
I would submit that legislators and lobbyists are irreplaceable. But lately I saw the figure of 97,000 legislative proposals or laws in the 50 states this year; about a thousand of those were products of you and your colleagues. A few of those did concern AI, and if you're going to NCSL shortly, it would be worth paying attention to the AI presentations they have planned, because I'm sure those will include what some other states have already looked at, and that will be an item to report back as well. But as Chair King put it so well.
A
Thank you. It's something you did talk about. My wife's a retired teacher, and I know in Scott County, where they use Chromebooks, I think they tell me it's five Chromebooks for every one textbook they can replace, so they just went 100 percent Chromebook-wide in the district. That's something kids can start using as a one-on-one tutor, which is a great asset, without a doubt. My only thing is I wish AI had been around when I was in college; my term papers would have looked a lot better. But thank you, very interesting.
A
Any
last
question
guys
interesting
presentation
day.
There's
a
lot
of
like
I
said
we're
just
to
tip
the
iceberg
this
little.
This
is
something
we're
going
to
deal
with
very
shortly
in
the
future.
I
look
forward
to
having
discussion
with
you
guys
going
forward.
I
know,
there's
also
some
other
groups
that
want
to
talk
to
us
about
Ai
and
I,
look
forward
to
hearing
their
presentation,
also,
but
very
interesting
presentation
day.
Thank
you
guys.
Any
last
question
before
we
adjourn.