From YouTube: Select Committee on Blockchain, Financial Tech. & Digital Innovation Tech., May 15, 2023-PM
Description
No description was provided for this meeting.
A
Excellent. Okay, we are live. I hope folks in the room and online had a good lunch. The next thing we're going to talk about is artificial intelligence, something that I'm certainly really excited we're going to be talking about, and that the co-chair here and I have put in some time on, to try to set up a good discussion.
B
Recognizing that I don't think other states have really passed legislation related to AI, but it's a space where governance is certainly being discussed, and there's a lot of contemplation of what should and shouldn't be done with artificial intelligence. This seems like the appropriate forum, through our select committee, to start taking a look at it, learning about it, and finding out if there is anything that perhaps we should take action on, either in this legislative session or the next legislative session.

As we move forward, there's certainly a lot that I think is on the horizon, the near horizon, that we'll hear some more about today in our discussions and contemplate. So I ask the select committee: ask a lot of questions, explore this, learn as much as we can, and see what we should be doing, because we don't necessarily have any agenda in terms of what we want to accomplish other than to start learning in this space. So please, explore away.
A
Absolutely, Mr. Chair, and I wholeheartedly concur with everything you said. I think we've got some really interesting folks who are going to be testifying, who have a ton of exposure in the space: from a regulatory perspective, from a technical perspective, and from the new policy and philosophical perspectives that we'll be dealing with. So I think we've got a lot of really cool stuff to discuss this afternoon.
C
Yes, thank you, Representative Western. It sounds like there may be a little bit of echo or feedback. I think it may be from a speaker in the room. Okay, it looks like Zoom has just adjusted. So thank you again very much for the opportunity to speak on this matter, and let me just double-check with you how you would like to conduct this part of the hearing. Should we just jump right into presentations, or would you like to set more context, or what shall we do?
A
Dazza, appreciate that, and, you know, we can hear you crystal clear. Maybe also, I think, let's go ahead and just jump into some of the presentations. Chris, what are you thinking, Mr. Co-chair? Yeah.
B
That's what we discussed when we chatted with you about just kind of putting together that introductory discussion and overview and bringing some material. I think letting you take the lead on that, and I know that you've got some other speakers to provide us that governance overview, so maybe start with some introductions and then go into the work that you've been exploring in the AI space.
A
Maybe just give us a quick, you know, 30-second intro. Obviously Chris and I know who you are, but we've got some new members here today, and I think just a little bit of intro on each of you would be helpful to give us some context. I think also: did we still have that demo? Was that still...
C
Thank you very much, Mr. Co-chair. My name is Dazza Greenwood. I run the computational law research group at MIT, at law.mit.edu, and also the legal tech consulting company civics.com. I've got a background in technology law and a particular interest in artificial intelligence, and so, in collaboration with the co-chairs, I've lined up a few presenters to help provide some background and some context for the select committee as you begin to consider the potential implications of generative AI in a state law context.

Joining me today are Dan Katz and Michael Bommarito, who, among other things, are basically famous in our area for working with OpenAI to put GPT-4 through the bar exam. They're deep experts in this area, and they'll be able to talk more and lay a foundation about the nature of this technology, its impact on law and legal processes, and a lot more. I also have Damien Riehl, who is basically a real thought leader when it comes to applying this technology to legal use cases. He's going to show the committee what it looks like to interact with large language models in terms of prompt engineering and prompt design, and also as part of an ongoing official legal process. We also have Jesse Hahn, who has been working with us, as you were just asking about a demo.

We've all worked together to create a demo that we hope can provide an anchor for how this technology is about to play out in the economy and in society, to help provide some grounding to think about the implications of the technology. This also follows up on my own work with your Select Committee in years past on algorithmically managed LLCs. It also opens new questions, and Jesse has taken the lead on engineering it, and we'll demo it. Then finally, John Nay, who has been doing real thought leadership on fiduciary duties and also on what it would look like to create a large language model from the ground up that is well aligned from a legal perspective. Each of the people I just mentioned will, of course, introduce themselves individually and provide what I assume will be much better context for their remarks; I just wanted to skip across the tops of the waves.
C
To give you a sense of what's about to happen. So with that, I will make a couple of remarks myself, just to set the table.

A
Please do.

C
Thank you. I believe that generative AI, this new breed of artificial intelligence that became known with the wide public adoption of ChatGPT at the end of last year, represents a profound advance in technology, and I think it's going to have a significant impact on the economy and on society. In part, it's already beginning to do so, in terms of how people do jobs, what jobs are needed, and also, maybe more importantly, what it makes possible that was previously not possible.
C
We'll see some of that with the demo of an algorithmically managed Wyoming LLC, but it really applies across the board, nowhere more so than in law itself, partly because law is really a language-intensive profession and field, and so this technology has been a particularly good fit for doing many of the processes and tasks that lawyers and other legal professionals have frequently done in the past. And this is good timing for the select committee, and for the entire legislature in Wyoming, to begin to understand and grapple with what the technology is and how it works, and to start to extrapolate, or even to manage, the changes that are already afoot within Wyoming and the ones that impact Wyoming from the broader economy.
C
One thing that comes to mind, in terms of connecting the dots back to the work of this Select Committee in years past, is this whole question of identity. I think the select committee had a big win a couple of legislative sessions ago with creating a legal definition of personal digital identity and organizational digital identity. That was needed and helpful for Web 2.0 and for the world as it existed then, as some of the members who lived through that process will remember.

I think that future is now, and one of the implications of this technology is that the need is more urgent than ever to be able to identify the source of communications and to attribute those communications to a human or to a legal entity. Even my publication at MIT, the MIT Computational Law Report, is already getting submissions of materials that we never would have gotten before and that are clearly generated by this technology.
C
That's a small example, but there are some publications that are being flooded completely. There are some lines of business that are going out of business now because of the flood of this new technology, some tutoring services, for instance. But more to the point, when you look at broad-scale communications, like the public comment on the net neutrality regulations that went through a few years ago: the Attorney General of New York, I think just yesterday, issued a press release about a large fine for companies that had synthesized Americans' digital identities to create comments on their behalf, comments they were unaware of and had no connection to. They were fraudulent, and there were large fines. That was done with some automation, but with a lot of manual processes.
C
I think that's a harbinger of what's now possible at very, very low cost. It's possible to flood communications that seem to be from a human but in fact are coming from this technology. There are a lot of issues there, but I think, fundamentally, it's an identity issue, and the legal cornerstone you have that defines personal digital identity and organizational digital identity can be a cornerstone for building a legislative framework that begins to address the need to distinguish between communications sourced from a human, which can be attributed and whose author can take responsibility and get the credit, versus communications that are not sourced from a human.
C
This is particularly important when we're conversing with another party. One of the implications of these fake identities also comes up in a consumer protection context, where we're already seeing examples of synthesized, convincing voices of people that you might know, maybe people in your extended family, being used to run scams. The voices are fake, but there's a scam being run: that they've been arrested, or that they're in some emergency, and they just need you to wire money to them right now. You could go back and forth and talk to this person, and it may seem authentic, but in fact it may be a fake. And so some of the advice from consumer protection groups now is to have a safe word with members of your family, a word they're supposed to use when they're communicating about an emergency. If they don't use that word, it may mean that they're not safe; maybe they're under some coercion.
C
Or, in this new context, it may mean it's not even them to start with; it may be a deepfake of them. Having families, and everyone, use information security, get educated, and adapt to these changes is important, but we can't depend on people individually adapting to these profound changes. Eventually, there will be a need for some legislative framework that can adapt to, and can support and reflect, the new issues and options that arise from this technology.
C
So I think identity is going to be one area to keep an eye on. Another one: when I think about areas that are customarily state jurisdiction, property is one of the first things that comes to my mind. In law school, you know, we learn property law; it's fundamentally an area of state law, and I know that the select committee has already done tremendous good work with reforming laws as they relate to digital assets.

I think there's about to be a lot more types of digital assets that people can create. With this technology, it's possible to create code very quickly. I personally have never been a particularly good developer; I'm now able to create lots of applications and services that I wasn't able to build myself before. I had to work with grad students, or with colleagues at startups who were engineers, to do them. I'm creating code now that I couldn't create before. Many people can create lots of works using this technology, beyond code, that are digital, and that raises questions about whose property it is, how you perfect rights to that property, and how you exchange rights to that property. There's, I think, a whole raft of things that come up with contracts and transactions in general. So, you know, as for generative AI in general, we'll show you demos of it.
C
Many people are familiar with it through a chat interface. The next wave of innovation, which we're going to delve into, is what we would call agent-based use of this technology. I think the most popular repository on GitHub today is something called Auto-GPT; there's also something called BabyAGI. Basically, these are software applications that are fueled by generative AI, and what they do is let you, as a human, set a goal, and then the AI will almost create its own prompts. It does a sort of self-prompting: how would I break this goal up into tasks? How can I perform these tasks? And as these models are given access to the internet, maybe given a debit card or a prepaid card and other capabilities, they can actually go and perform a series of tasks that you may not know about, but it's all in order to achieve a goal that you've set.

Shopping bots are an example we've had for many years, where you can sort of configure settings to look for the best price on a certain type of good, on eBay or somewhere like that, and go a long distance, right up to executing a purchase. That gets blown open with this technology, because it's a general-purpose language model. It can understand many different permutations of interactions and transactions. It can actually engage in some negotiation. It can do goal setting and keep track of benchmarks in order to achieve those goals. We'll show a lot more of that aspect of it as part of Jesse's demo of an algorithmically managed LLC, but it doesn't stop at the wrapper of an LLC. This is going to affect lots of contracts and transactions, so I think automated transactions and electronic agents are going to be another area that state law has traditionally had a lot to say about, and where there'll be a need, in the fullness of time, to look at possible updates to the legislative frameworks there.
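The goal-to-tasks "self-prompting" loop described above can be sketched roughly as follows. This is a hypothetical illustration, not any real product's code: `llm` is a stand-in for a call to a real language model API, faked here so the control flow is runnable.

```python
# Minimal sketch of the agent loop: a human sets a goal, the model proposes
# its own sub-tasks (self-prompting), then works through them one by one.

def llm(prompt: str) -> str:
    """Placeholder for a large-language-model call (assumption, not a real API)."""
    if "break this goal" in prompt:
        # Pretend the model decomposed the goal into a numbered plan.
        return "1. research vendors\n2. compare prices\n3. draft purchase order"
    return f"done: {prompt}"

def run_agent(goal: str) -> list:
    # Step 1: self-prompting -- ask the model to decompose the goal into tasks.
    plan = llm(f"How would I break this goal up into tasks? Goal: {goal}")
    tasks = [line.split(". ", 1)[1] for line in plan.splitlines()]
    # Step 2: execute each task, each framed in terms of the original goal.
    results = []
    for task in tasks:
        results.append(llm(f"Perform this task toward '{goal}': {task}"))
    return results

if __name__ == "__main__":
    for r in run_agent("buy office supplies at the best price"):
        print(r)
```

A real agent framework would also feed each task's result back into the next prompt as context; this sketch only shows the decompose-then-execute skeleton.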
C
Education is another thing I just want to mention; I'm sure you're all thinking about it. In December and January and February, there were lots of articles about high schools and public colleges and other educational institutions wondering: how do we deal with essays being generated by AI, and problem sets being done by AI and passed in by a student? What does this mean? Much of this has to do with pedagogy, and districts will, you know, operate at completely different levels than the legislature in terms of how to adapt to this technology in a way that still provides a solid education and ensures we're educating for the right things in the new economy. But there may well be a need at some point, especially for public education, to set some legislative guidance on how to deal with this.
C
It's changing the nature of how people do homework in education. It's also creating a new type of skill that people need to learn. Workforce training is the next emanation of that. If you look downstream and think about what kinds of jobs people will need to do, what's going to happen with the dislocation of some jobs that aren't needed and the new jobs that are needed, and about retraining and training people up for the right sorts of jobs, I think there's some kind of workforce development and economic development angle here, probably many.

And then the final one I'll mention: honestly, I feel like I could name every area of state law and draw a connection; I just want to name some of the highlights where there's a very clear nexus. The last one is just with respect to regulatory reform and the making of statutes themselves.
C
I've posted at civics.com/ai and law.mit.edu/ai transcripts of examples where I've used GPT-4, and another similar model called Claude, by a company called Anthropic that's a competitor of OpenAI, to help draft statutes. Basically, I come up with a novel fact pattern, I talk about what the needs are, and then, iteratively, I have the technology assist me in drafting statutes. I've drafted a lot of statutes in my time, when I was in state government and working with Congress in various places, and my sense was that it wasn't bad. It's not going to replace the Legislative Service Office anytime soon, or anything like that. But it can be a boost in certain ways to fundamental lawmaking, and even to regulation, and then there can be other ways in which it's a boost as well, in terms of monitoring compliance and setting benchmarks.
C
So I won't go too deeply into that, but I think even the process of lawmaking may also see some impacts that are worth looking at, or where it can be beneficial. I'll close by saying it's not all sweetness and light, although I'm very, very optimistic and a big fan of this technology, and I think it's going to do a lot of good.
C
There are risks and potential harms to keep in mind as well, and it's also, I think, incumbent upon the legislature to be aware of them. I think there's an underlying question of intellectual property. The way these large language models are created, there's a huge training set, and that training set kind of vacuums a lot of information into the model. Some of that information carries intellectual property rights held by other people, and there's already some litigation, which we'll talk more about and go deeper into later with some of the other speakers. But I think there's a bit of a ticking time bomb there to keep an eye on. There's also a fundamental issue about the performance of the technology.
C
While I'm saying it's helpful, and it is very helpful and it does things we couldn't do before with computers, it's not perfect. It has some limits, it has flaws, and it can provide incorrect information. This is critical to look at and to make sure people understand; this may get back to education, and to setting limits so that we don't over-rely on the output of this technology to our detriment. And there are some privacy issues that are related to the intellectual property issues.
C
When personally identifiable information gets sucked into the training set, there's some concern that it may be capable of being regenerated with the right prompt, and that people could have privacy or personal data rights that are at issue. And then there's this whole raft of so-called high-risk use cases that the European Union is extremely concerned about, where there may be a risk of a very expensive or very socially costly harm occurring with this technology. I don't think we need to go too deeply into that today, but there are any number of catastrophic scenarios that people have mostly hypothesized about to date, some of which may be more practical now with the new advances of the technology. Some of that may have to do with a takeoff scenario, where a subsequent version of the technology gains something closer to artificial general intelligence, and being able to bound that; that's still a little bit futuristic.
C
But for the moment, imagine maybe a type of corporate espionage, or something with a deepfake. One of the candidates in Turkey, for example, who dropped out just a few days ago, dropped out because of an extremely embarrassing video that was circulating around, and he claimed, for sure, that wasn't me, that was a deepfake. That kind of claim is already happening in U.S. litigation as well. You could imagine these things being weaponized by state actors, or by well-resourced adversaries, or by people who just want to cause disruption for some reason. That could be what I would call a high-risk use case.
C
There are also potential high-risk use cases for mundane things, which is partly why, when we show you this demo (and this will be my last remark), we're not suggesting a completely autonomous legal entity. That could end up being a potentially higher-risk scenario, where maybe the entity is going off the rails a little bit and there's no human oversight.
C
So, you know, the importance of putting guardrails and safeguards and protections on the use of this technology must be part of allowing it to flourish, and of supporting and reflecting it with a pro-innovation legal framework, because it's not solely beneficial; there are potential risks and harms as well. But with that, I'd like to pause these initial remarks now. I'd be happy to answer any questions or to clarify anything, if the committee would like; otherwise, I would like to pass it forward to the next speaker.
A
Thank you very much, Dazza. Before we proceed, do we have any questions, committee members? Seeing none. Yeah, I know, as part of our conversations, one thing I was particularly interested in was kind of the concept of the weights behind AI. What does that mean? How is it regulated? How is ownership attributed? A lot of this kind of stuff you've touched on. And also, you know, I thought the article you sent along to the committee members about autonomous agents and their potential impact was very interesting. Given that we're elected officials, that article talked about their impact on campaigning and on organizing those kinds of campaigns; I think it has really real consequences. And I really want to thank you for taking the time to have this conversation, because I think the question we're trying to answer today is, you know, what regulation or legislation is needed in regard to AI?

That's an incredibly huge and broad question, but I think it's one that's incumbent upon us to at least try to tackle: what does need to be addressed? Is now the time for legislation? Do we need to let things develop and then come back to the drawing board? You know, all that kind of stuff. So I really appreciate you and your colleagues, Damien and Jesse and John and Dan, all these folks, for setting aside time to help us. You've heard the joke about the legislature, you know, the sausage-making process of legislation and all that, but you guys are playing a part right now in that process, trying to start broad and refine it down to a finished product, so I really do appreciate it. I think you covered a lot of the stuff we had talked about, and I know our committee members will weigh in. I don't have any specific questions at this point. Mr. Co-chair, do you have any questions or things you want to weigh in on?
C
Thank you for asking. The way we have prepped is with the thought that we would all stay on the line and answer questions after each person's presentation, as needed. And everybody is also extremely behind the idea that you've been elected and that we're testifying to your committee, so we will obviously stop talking at any moment if you have a question or a clarification or anything that you want to say.
A
Sure, absolutely, and I think it's fine with us maybe holding our questions off until each presenter is done; then we can kind of delve into questions. I'm certainly fine with that. And just real quick, for Dan and Damien, some of the other folks online, and John: the standard format is, when we do get into that Q&A, everything goes through the chair, just as a kind of legislative protocol to keep the dialogue smooth.
C
Thank you. And so, through the chair, I would like to say to the committee, in response to the two items that you did bring up, Representative Western, number one about weights, and two, sort of about looking forward: my opinion is that there are a few things that are essential with respect to understanding, and being able to predict and rely upon, the outputs of these models, and that requires some transparency on the model.
C
In my belief, some of that has to do with the training set that went into the model: what is the data that it was trained on? Another thing has to do with what the weights are and how many parameters there are. Jesse and others can go into more detail, and Dan, almost everyone else on this call, can go into more detail about what that means technically. But a material aspect of these language models is the weights that are allocated to different parts of the model, and if you change those weights slightly, you will get very different outputs from the model, even using the same training set. So how those weights are allocated is a big part of what it means to configure and create a given model. Open-source large language models share what the weights are, because that's how another person could take the code, train the model, and get something like the same results.

In my opinion, you know, there's a question about the intellectual property and things like that. Some companies with proprietary models do not share their weights, and I think, in our country, with our economy, that's not entirely inappropriate. What I do think is important is that there always be several robust, available, competitive open-source models, where the weights and everything else are also available.
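A toy illustration of the point about weights: a model is its architecture plus its weights, and a small change to the weights changes the outputs, even on identical input. The numbers here are purely hypothetical; a real language model has billions of weights, not three.

```python
# A single linear "layer": the output is a weighted sum of the inputs.
# Changing one weight slightly changes the output, which is why published
# (open-source) weights are needed to reproduce a model's behavior.

def tiny_model(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

x = [1.0, 2.0, 3.0]
released_weights = [0.5, -0.2, 0.8]  # what an open-source model might publish
tweaked_weights  = [0.5, -0.2, 0.1]  # one weight changed

print(round(tiny_model(released_weights, x), 6))  # 2.5
print(round(tiny_model(tweaked_weights, x), 6))   # 0.4
```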
The European Union and others are taking more regulatory approaches to what I just said. Some would almost, it sounds to me, like to nationalize some of these models and require more transparency and more sharing of their secret sauce and proprietary information. I don't know about that; I think that's a political question, and I don't know if that's the right fit for the United States. I do think it makes a lot of sense for there to be at least some open-source models. And I'll close by saying, in terms of what is ready for legislation now: to me, it seems like it's time for education, and it may be premature to know exactly what legislation would be appropriate. Also, these technologies are changing fast.
C
There are things in the pipeline. Just today, OpenAI released plugins to everybody that has a premium account, which is like a twenty-dollar-a-month account that gives you GPT-4 access. These plugins extend the use of the technology to integrate it with, you know, web browsing and with Zapier and with Expedia and all this stuff. It's sort of the beginning of the agents thing we were talking about, but I call it the integration revolution. It's going to significantly expand the way people use the technology and the types of interactions and transactions and outcomes that are possible. So my sense is, it may be premature to have very prescriptive legislation at this point, while the target is still evolving so quickly, but it's not too soon to get educated, in part in collaboration with, and we hope in support of, your education.
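The plugin pattern just described (the model choosing an external tool such as web browsing, and the host application executing it and returning the result) can be sketched in simplified, hypothetical form. The tool names and the decision function here are illustrative, not OpenAI's actual plugin interface.

```python
# Host application holds a registry of tools; the "model" (faked here) names
# a tool and an argument; the host runs the tool and returns its result.

TOOLS = {
    "browse": lambda arg: f"<contents of {arg}>",
    "calendar": lambda arg: f"<events on {arg}>",
}

def model_decide(user_request: str):
    """Stand-in for the model choosing which plugin to invoke (assumption)."""
    if "schedule" in user_request:
        return ("calendar", "2023-05-15")
    return ("browse", "https://example.com")

def handle(user_request: str) -> str:
    tool, arg = model_decide(user_request)
    return TOOLS[tool](arg)  # the host, not the model, executes the tool

print(handle("what's on my schedule?"))  # <events on 2023-05-15>
```

The key design point is that the model only selects and parameterizes the tool; the host application performs the actual action, which is where guardrails can be enforced.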
C
One of the things that we're going to say at the end, and I'll come back to this after the last speaker, is that once we show you the demo of what an algorithmically managed Wyoming LLC could look like, powered by these large language models, by generative AI, we were going to suggest a kind of challenge, where we make our code available (Jesse, who built it, makes the code available) and we let people use it according to different things that the committee and others may be interested to test. We'll have to see what the conversation from this hearing is, but there will be open questions; we can frame those as a kind of challenge, let people try to create their own examples of algorithmically managed LLCs, and have them submit those back.

I think MIT has raised its hand and said we're willing to be an umbrella partner for that, and then to make those all available back to the select committee before your next hearing in September, to begin to get a little bit more traction and understanding of how this technology could play out in an example, and maybe have a better-informed basis to consider what legislation may be appropriate and needed at that point.
C
So those are just some thoughts about where everything is in the life cycle of legislation. It's still pretty early days; it's just a few months since this technology became widely available, and I think at least a few more months would be appropriate before making any calls about what the scope of definite legislation might be with respect to it. That's my opinion.
A
Thank you guys. I think that's super awesome, and I really appreciate your willingness to put in this work, make it available, and pull us in. I will warn you: when you put me in front of this stuff, I kind of feel like I'm getting in the cockpit of an F-22. You know, it's a little intimidating, but I really appreciate your willingness to kind of pull us in and walk us through this stuff. And for the other folks online, I'll just speak for myself: please do pause and, you know, really walk us through exactly what we're seeing, because, certainly for some of us, and for myself, I might not know exactly what I'm seeing in some of the stuff you're talking about in your presentations. So please do feel free to really walk us through what we're looking at and what exactly it is that we're looking at. So thank you; please do proceed.
C
Good, okay. Mr. Chairman and Mr. Co-chairman, thank you very much for your willingness to collaborate with us in that way; we'll be happy to walk you through it. When you see Jesse's demo, you'll see that it's really been designed for small business, for a person that has an LLC, maybe for their ranch or something. It's not for a big company; we're trying to see what would be useful with this technology that could simplify and streamline the process of using legal entities for everybody. And so the idea that anybody could sit in this cockpit and fly is one of the goals we're going for. But we'll come back to that a little bit later.

Next up now is what I would call the premium education. I hope I served the purpose of at least setting the table, but now we have a real heavyweight champion in Dan Katz and Michael Bommarito, who are at the forefront of educating people, lawyers, and professionals about this technology, and who have kindly agreed to share their sort of tip-of-the-spear slide deck with the committee. So I would like to hand it off to Dan Katz. And again...
D
Excellent. Well, I want to thank the committee for having us here today. I wanted to provide some overview comments that kind of get us moving; I think we'll get into greater and greater detail, looking at the agenda, as the session progresses. Just to introduce myself: I'm Dan Katz, a professor at the Chicago-Kent College of Law, which is the law school of the Illinois Institute of Technology (I'm wearing the vest today), and I'm also affiliated faculty at CodeX, which is the sort of law and computer science center at Stanford. I'm also an entrepreneur; I run a company with my colleague Mike Bommarito, who I'll introduce in just a second, called 273 Ventures. We're building what's called a legal data operating system; I won't belabor that, it's just background. So, my colleague Mike Bommarito: I don't know, Mike, if you just want to introduce yourself. I've got a couple of slides here.
E
Sure. So, similar to Dan, except I'm not affiliated with IIT as faculty; also at Stanford. And I'll let you get back to the deck here. Thank you for having us, just to express that.
D
Okay, well, I guess I'll just sort of start chronologically. I think this date here, November 30th, 2022, will go down as one of the more important days in the history of technology, because this is the day that ChatGPT was first released to the public, and I think that began the broad public's awareness of what was already true in the background, what's been happening.

There have been really significant increases happening in this field of natural language processing, which is really the way in which computers come to understand language. Language has historically been a very hard problem for computers to deal with, and we've been seeing progressive increases in those capabilities. But I think what's really important is that when this chat interface, and I'll go into more detail on this, was made available, it made it very easy for people to type in Google-like searches and get output. The public got a real flavor for what was happening, and it's been reported that ChatGPT went to 100 million users in something like 100 days. I don't know if that's exact, but I think there are at least a couple of sites that support that idea. Either way, there has been a substantial number of people who've moved on and actually used these systems and experienced them for themselves. Mike and I have personally been working in this area for a little while, thinking about how these language models could be used in the legal domain.

I'll just highlight a couple of projects, a couple of things we've been working on, just emblematically. Mike and I had a company, which we sold five years ago, which was doing what might be called classical language modeling, not using these neural networks, which is what powers something like ChatGPT or GPT-4 or any of these other large language models. But just to say, we've been experimenting both in the commercial world and in the academic world.
D
My pleasure, sure, absolutely, no problem. Last year we did a presentation where we talked about this idea of breaking the legal language barrier, and the concept was, you know, what we've seen historically is machines getting better at regular language, kind of like the public uses. But legal language is a whole other animal. It's a much harder thing. I mean, I'm a law professor, and we spend a lot of time training students to understand it.
D
There's been substantial acceleration, but you started to see the inklings, or the tea leaves, that something like that might be possible, and I'll just flip to this. But you know, one of the things in the legal sector, for legislators or judges, or other folks, managing partners of law firms and so forth, or general counsels of companies:
D
It's been hard to communicate to folks in the legal sector how significant a landscape shift has been happening in these language models, and I think part of it is that in the media there have been a lot of these kind of robot-lawyer stories. This is just a sample of what's been out there over the last five or ten years, and you know, I've never met a robot lawyer.
D
There must be one; they keep talking about it all the time. But nevertheless, there have been all these stories written, and it is about people trying to work on using AI to do some of the tasks that lawyers have done historically, or to augment them, to give them tools to do tasks a lot faster or more effectively.
D
Language has been really hard for machines to work on, and so things have been done more on the periphery; on the core, it's been very difficult. So I think one thing that has been a challenge is really the absence of a clear demonstration project, and I'll talk more about one of those demonstration projects in just a moment, but just to set the stage.
D
Because this is some of what I was asked to do, just to set the overview and get us going: AI has made progress on many topics, and the field of artificial intelligence actually encompasses many other branches, shown here, some of which are perhaps relevant to today's discussions and some of which might be less relevant but are just broadly interesting. For example, something like a driverless car involves machine vision and can involve processing language. But for language models as done today, it's the natural language processing branch combined with deep learning above. The point of showing this graphic is just to highlight that when you hear "AI," there's actually a whole bunch of sub-things going on underneath, and not every AI application is precisely the same. I think it's worth understanding and noting that.
D
Obviously, when you think about regulation, regulatory categories, it's important to understand that these nuances matter. One of the things I will say is that language has historically lagged behind pretty much every other advance in AI, because language is hard, and part of what makes it hard is that there are these statistical dependencies in the language. I'll compare in a moment syntax versus semantics. Machines are really good at pounding out the number of words on a page, or even the frequency of words, but understanding what those words mean in context has historically been a much harder problem.
D
We finally see, in the commercial space and in the academic space, really significant advances coming to market. So just again, on background: these language models historically were based on rules. If you remember back in the 80s, spell checkers were not exactly fabulous, to put it lightly; they were pretty poor. They were basically, roughly, based on the idea of, you know, good old Clippy there, trying to use rules to figure out spelling. But the best way to figure out spelling is to have a billion instances of spelling in Google searches and then sort of test: hey, is this the right one? Is that the right one? Do they click on the left or the right? And then that feedback effect ultimately gets you a much better spell checker.
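The statistical idea described above can be sketched in a few lines. This is a minimal illustration, not how any production spell checker actually works: it assumes a tiny stand-in corpus in place of the "billion instances" of observed usage, and simply picks whichever candidate spelling occurs more often.

```python
# Minimal sketch of frequency-based spelling choice: instead of
# hand-written rules, prefer the candidate that appears more often
# in observed text. CORPUS is a tiny hypothetical stand-in for a
# web-scale collection of real usage.
from collections import Counter

CORPUS = "the quick brown fox receives the prize the fox".split()
COUNTS = Counter(CORPUS)

def choose_spelling(candidates):
    """Return the candidate with the highest observed frequency."""
    return max(candidates, key=lambda w: COUNTS[w])

print(choose_spelling(["recieves", "receives"]))  # → receives
```

Unseen candidates get a count of zero, so any spelling actually observed in the corpus wins; real systems refine this with edit distance and click feedback.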
D
You could only do these rules-based approaches back then, and I'll just show you these two big dynamics, some of which you might be familiar with, but just to put everybody on equal footing. The speed of a computer processor, according to Gordon Moore, who was one of the founders of Intel and just passed away in the last month or two: this Moore's Law is not a strict law, but it's a kind of observation that the speed of a computer processor has been doubling roughly every 18 to 36 months for, you know, 50 or 60 years, something like this. And that's exponential growth, so you're doubling and doubling and doubling and doubling.
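The force of "doubling and doubling" is easy to miss. As a rough illustration only (the 2-year doubling period is a simplification of the 18-to-36-month range mentioned above), here is the arithmetic:

```python
# Moore's-law-style compounding: if speed doubles roughly every
# `period_years`, then `years` of progress gives 2**n growth,
# where n is the number of doubling periods. Illustrative only.
def doublings(years, period_years=2):
    n = years // period_years
    return n, 2 ** n

n, factor = doublings(50)
print(n, factor)  # 25 doublings, a factor of 33,554,432
```

Fifty years at that pace is roughly a 33-million-fold increase, which is why approaches that were hopeless in the 1980s became routine.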
D
You go to a conference or something, and they give you a thumb drive, maybe it's 5 or 10 gigs. You know, 10 gigs would be three million dollars' worth of data storage in 1981, and now they just give it away for free: here's a squishy ball and a pen, and you can take a couple of drives with you. And so organizations are hoarding data, essentially, because data is so cheap to store.
D
Eventually these chips got faster and data storage got cheap, so that you could use statistics in the modeling of language, and so in the 90s and 2000s you started to see statistical models in the field of NLP, of language modeling. What people did historically is they focused on syntax, and you actually can make a lot of progress with syntax. Semantics is a much harder thing. Historically there's this division between a syntax model and semantics; I'll give you examples of both in a second. Semantics has been kind of the Holy Grail for this space, a really hard problem. Syntax you're probably familiar with, at least minimally. These are the types of things like looking at words: what words are on a page, how frequent they are, how frequent certain parts of speech are. The most well-known example of a syntax approach with a computer is if you went to your computer and hit Ctrl+F. That's "find the exact instance of a word in a document." Machines are amazing at this, and they're way better than people at this.
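The Ctrl+F behavior described above is a pure-syntax task, and it can be written in a few lines. A minimal sketch, with a hypothetical two-sentence document for illustration:

```python
# Exact, case-insensitive whole-word counting: the Ctrl+F task.
# This is pure syntax; no notion of what the word means is involved.
import re

def count_word(text, word):
    """Count whole-word occurrences of `word` in `text`, ignoring case."""
    pattern = r"\b" + re.escape(word) + r"\b"
    return len(re.findall(pattern, text, re.IGNORECASE))

doc = "The court held that the contract was void. Contract law governs."
print(count_word(doc, "contract"))  # → 2
```

A machine gets this exactly right on a 100-page document every time, which is the point made next in the testimony.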
D
If I give you a 100-page document and say, find the number of times this particular word shows up, the machine is going to get it pretty much perfect every time, and people will not. So that's an example of something where machines are already materially better. Now, we don't think of that as being a huge deal, but that is actually looking for syntax. You can go up a level in terms of sophistication and say, I'm not looking for this exact word: you can write something that's called a regular expression. It's a little tiny bit of computer code that's looking for a pattern in a given set of information. So here I don't want exactly 17 USC section 107; I want, you know, a number, "USC," a section symbol, and then another number up to four characters long. And so it can look for all these different patterns that meet that standard. So I could use something like this to find every reference to the United States Code.
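The pattern just described (a number, "USC," a section symbol, then a section number of up to four characters) can be written as a regular expression along these lines. This is a simplified sketch; a production citation extractor would handle many more variants:

```python
# A regular expression matching U.S. Code citations by pattern rather
# than exact text: title number, "USC" (with or without periods),
# section symbol, then a 1- to 4-character section number.
import re

USC_CITE = re.compile(r"\b\d+\s+U\.?S\.?C\.?\s+§\s*\w{1,4}\b")

text = "Fair use is governed by 17 U.S.C. § 107, and 35 USC § 101 covers patents."
print(USC_CITE.findall(text))
```

Run on the sample sentence, it finds both citations even though neither matches the other character for character; that generalization from exact strings to patterns is the "one level up" in sophistication.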
D
What did the legislature mean when they drafted this bill, and they deleted this provision and added a provision, and they did it with a lot of care? Why did they choose that? That's semantics: what does this word mean in a given context, what does this regulation mean? Semantics is about what words mean, and of course meaning isn't just a word by itself; it's in the context of other surrounding words, and those words interact to create higher-order meanings.
D
So it's not just the word by itself, it's context, and that's been really hard for machines to decipher. By the way, it's hard for people to decipher. If I handed them a bill that's pending before the legislature, or a judicial decision, or any really complex document, it really takes a lot of work to process it. I happen to be okay at doing legal documents; if you gave me a medical document I would struggle with it, and a doctor would probably do a very good job with it. So the point is, a lot of the training we do, if you think about the labor market, and the labor market for lawyers or whatever, is this semantic training of people to be language processors: parsing documents, understanding what they mean in particular contexts.
D
I could give a professor talk right there, but basically I don't want to say that machines "understand" in the strict, strong sense; they're able to kind of infer what something means in context. So I'll say they understand in air quotes, just to highlight the idea that maybe it's not full understanding, but it's good enough to do some tasks. And the idea is that the methods can take context into account, broader and broader sets of context.
D
What we see people using today is these neural networks to work on language problems, so, back to the graphic, it's actually the combination of those two things put together. I won't push this too hard, but a lot of the things people want to do with computers are the same; the process in the meantime, from an engineering perspective, is very different when you're using the one at the bottom, which is the more modern approach, as opposed to the one at the top.
D
We see this progress, by the way, in contexts like legal NLP as well. Mike and I, with several other colleagues, have done a survey of pretty much every natural language processing paper written where somebody was working in the legal domain, and some of the examples, by the way, involve bills and legislatures, both in the United States and globally.
D
Where people say, you know, could we use computers to help in that process in some way or another. I'll just highlight this part right here: what we're seeing in this space is kind of a mirror of what you see in the whole space, which is that the neural agenda has finally come to the legal world. We're starting to see neural methods being used to do legal NLP type problems. I'll skip over that.
D
Okay, now let's get to GPT. Now, I would say the marketing department here might have spent a little more time coming up with the name; it just rolls off your tongue: Generative Pre-trained Transformer. Let's try to unpack this a little bit, if we can.
D
It's far from the only one. I think that's one of the things I want to make sure you take away today; Daza's opening comments highlighted this as well. For priming purposes, I'm largely going to focus on the GPT models by OpenAI, but I just want to make the point that, broadly, for example, this is a recent paper that just tried to catalog all of the models that are out there and put them in different families, kind of by approach. And by the way, this paper is maybe two months old, and it would probably have, I don't know, substantially more boxes on the board than we had even two months ago. So I'd send you over here to take a look at it, but it just gives you a flavor for the sheer amount of innovation that's taking place right now. It's really quite incredible, and the United States is by far the leader on all of this, and I very much hope it stays that way.
D
So the OpenAI GPT models combine several ideas from the papers in the following progression. This is the paper, in my view, that kind of kicks us off heading down the path to semantics. If you went back 10 years, there's this paper, the word2vec paper, people would often call it. I won't get into deep technical dives, I don't think that's useful here; I just want you to see what's happening.
D
But one thing to note is that Google, for example, has been responsible for many of the innovations that we've seen, and people ask, well, is OpenAI going to get passed by Google? Certainly I don't know for sure, but I would say they've got a puncher's chance, because they've been involved in developing a lot of the underlying technology here. It's a competitive landscape; it's not a single-company story.
D
It's a many-company story. Now, there are several of these papers in progression that lead you first to this paper on the attention mechanism, and this is probably the most important paper in terms of setting things up: the paper called "Attention Is All You Need." I'll say more about this in a moment, but I just want to slide through several of these, and then I'm going to come back to that paper.
D
"Attention Is All You Need," in a second. But again, you see that Google AI is involved in this paper, and this is where the Transformer architecture is really proposed and demonstrated in a particular model. I'll talk about it in just a second, but just to show you, there are many of these models and papers that have come down the pipe, and I try to summarize; again, I'd be happy to send along the slides for the committee's consideration.
D
Again, you see Google AI, BERT, and then the GPT trilogy. Now we're on GPT-4, but the three models that lead up: GPT-1, 2018; GPT-2, 2019; GPT-3, 2020. Then more papers, more models; for example, this is a leading, you know, an important one, Big Bird. And then this is what sketches out GPT: they built on these, they took them on, and they said:
D
Well, if you're going to create a chat interface, machines have got to be able to understand, again using air quotes, the instructions that people might give. If I give it an instruction, it has to understand: what are you asking me to do? And so they built a separate module, I'll show you how to connect all this in a second, but they built a separate module very focused on instructions, training a model to understand instructions and figure that out.
D
So the information I'm providing is my best understanding, or Mike Bommarito's best understanding, of what's happening here, but I just want to have said that it is not 100 percent out there. Again, I take Daza's point earlier, which is, if they were to tell everybody what they were doing, then they would lose a lot of the competitive advantage they have. And so they have a certain interest, which I completely understand: look, if I tell everyone, Amazon and Google and Baidu and all these folks are going to copy it. So they don't want to put it all out there, so they're not 100 percent open, but they're more open than you might imagine, given how important what they've developed is.
D
So we're not going to get into all the super technical details. If you really want to geek out a little bit, I like this blog post, which has just been turned into a book by the kind of legendary physicist Stephen Wolfram, who went through ChatGPT and tries to describe it in much more mathematical and technical detail. So I'll send you over there.
D
I'll skip that slide, but I want to make sure you have those three big moving parts here: better hardware, parallel computing, and the attention mechanism. That's what's put us on this path over the last few years. It's kind of a brew. The first thing I would say is, if you have any members of your family or friends who play video games, gamers: the graphics cards that power these incredible games, from just a pure graphics perspective, and the immersive nature of these video games. Back when Mike Bommarito and I were in graduate school, and this is, you know, go back 15 years, people were starting to realize that if you can compute those incredible immersive graphics, you could actually use the same hardware to do other computing problems in the sciences. And so these graphics cards, where a lot of the focus and attention had been on making really cool video games, are actually useful for a range of other scientific applications. So keep that in mind; that's one item in the brew. Now, the hardware race is part of this too. This is an important announcement from Google, that they have a chip that's faster than the leading Nvidia chip, Nvidia being a Taiwanese manufacturer and obviously Google U.S.
D
Just to give you a view, though, of why this "Attention Is All You Need" paper is so important: if you had a single graphics card, a single GPU, and you were to try to train just an older model, GPT-3, it would take roughly 300 years to train that model.
D
Now, one of the first things folks learn in computer science is parallelization: you take a really large task, you chunk it into smaller tasks, you run it on multiple computer processors, and then you connect the tasks back together. So being able to parallelize something is really, really important, and historically it's been very hard to parallelize these types of language tasks. This paper, "Attention Is All You Need," sort of shows people the prospect of parallelization in this context. And what they show in the paper, and it was a really big deal when it came out, at least in the technical world, is they trained a 1-billion-word model on just a handful of these graphics cards in three and a half days. And again, I showed you the previous figure: 300 years. So that was a harbinger of what was to come; it really put people on the path, and now it's being done on a mega scale with something like a GPT-4 or an Anthropic model.
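The chunk-process-recombine pattern just described can be sketched in miniature. This is an illustrative toy, not how Transformer training is parallelized; the "task" here is just summing word lengths, split across worker processes:

```python
# Minimal parallelization sketch: split a large task into chunks,
# run the chunks on separate processes, then recombine the results.
from multiprocessing import Pool

def chunk_total(words):
    """The per-chunk work: sum the lengths of the words in one chunk."""
    return sum(len(w) for w in words)

def parallel_total(words, n_chunks=4):
    """Split `words` into n_chunks interleaved slices, process each in
    its own worker, and add the partial results back together."""
    chunks = [words[i::n_chunks] for i in range(n_chunks)]
    with Pool(n_chunks) as pool:
        return sum(pool.map(chunk_total, chunks))

if __name__ == "__main__":
    words = ["attention", "is", "all", "you", "need"] * 1000
    print(parallel_total(words))  # same answer as the serial sum
```

The key property the Transformer paper delivered is that its core computation has this shape, so thousands of GPUs can each take a chunk, which is what collapses 300 years into days.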
D
Or what have you. The scale at which people are doing this is thousands of these graphics cards parallelizing across these very large sets of documents. So just for terminology: GPT-3 was released in 2020; the most recent model until the last two months was GPT-3.5, and that family of models includes ChatGPT; and there's probably going to be a family of models in GPT-4.
D
So now the inputs. We don't know exactly, 100 percent, what's inside, but this is at least what has been described. Mike, I know you wanted to maybe say a word or two about this part.
E
So, Daza, you had mentioned, in response to one of the initial questions, something that alluded back to a prior conversation around property rights, and I would say on this slide that one of the issues most germane to who owns both the model itself, in whatever sense that may mean, as well as the outputs, is this question of what the inputs are. And so the question of provenance of inputs, that is, knowing internally, documenting what you put into it, and then to what extent transparency is required, either to counterparties in a private sense or to public counterparties, for example via a license, or as the EU is experimenting with in its proposed EU AI Act, is key. So if the creators of these models are not tracking what exactly is put in, then it's a problem.
E
Many of these largest organizations are, of course, because the quality or cleanliness of the data that they put in is one of the ways in which they control the quality of the model as it's created. But then we also have this question of: to what extent should we require, either in private contracts or from a public perspective, a disclosure of what has gone into the model, and, for example, if the state were to create a property registry for these models, would disclosure of training data maybe play into that? Right?
E
So I think, Senators, we previously discussed, briefly at least, a property registry, for example for weights, as a way of distinguishing this data, of being able to provide some assurances for holders of IP related to these models that this type of thing is treated or classified, at least in the state, as, let's say, tangible personal property or some other kind of property for which there's a registry that shows clear ownership and could be pointed at or used in other kinds of transactional documents. And I think the inputs are probably key to part of that property registry conversation, because if we don't know what's gone into this, then it's difficult to really start that provenance chain and to have clear ownership, as we do, for example, with real property.
D
It's clear that they've done some pre-processing or cleaning up of the underlying internet, because obviously, if you've seen the internet, you've seen the internet; presumably it's not necessarily always the nicest place. And so if you just took it as is, with no cleaning, it wouldn't perform in quite the way that has been shown so far. But just to give you a view of what they've done.
D
Then you have engineers tune it further: supervised tuning on instructions, and then there's a thing called reinforcement learning, which is humans taking the output, going thumbs up or thumbs down on it, and using the feedback effect of that to calibrate, to refine the model. That's sort of the cocktail that gets you ChatGPT, and then GPT-4 is a continuation of that. Now, the technical report for GPT-4 does omit, frankly, the key details one would need to know to, say, reverse engineer it. Again, as I mentioned, if you were running the company, you might consider that it's not the greatest idea to take your IP and put it on display for everyone; or at least, folks who worked there might question that decision.
D
One other thing that's interesting: there is a table that sort of highlights that the value of reinforcement learning seems to be declining; it doesn't seem to really move the needle as much as it probably did in earlier versions of the model. The reason I highlight all these moving parts is just to flag for you what I think is one of the important takeaways, at least as I see it: there are a lot of moving parts.
D
And it's going to be a very competitive landscape, kind of an entrepreneurial battle between startup companies, big companies, and everything in between. So now, with that background, I'd like to talk a little bit about its capabilities applied to the law.
D
So I'm going to discuss some public work we've done, that Mike and I have done, as I mentioned.
D
There had not really been a clear demonstration project in the domain of law, really showing the nature of these capabilities. So right before the holidays, I worked with my collaborator Mike Bommarito, and we applied the older model, GPT-3.5, to the multiple choice portion of the bar exam, the MBE, and then put the paper online: "GPT Takes the Bar Exam." Around that same time there was another paper where people were focusing on the medical licensing exam.
D
The USMLE, I think it's called. We wanted to see how well it could do, and it got roughly 50 percent of the questions right, where the guess rate is 25 percent. So that represented a pretty important improvement over pure guessing: not enough to pass, but it was starting to get much closer to the passage threshold.
D
That paper put us on a path to be part of the launch of GPT-4 about two months ago, and you may have seen this result that is in the table from OpenAI. It kind of went all over the place, including on Stephen Colbert.
D
They did this kind of personal injury law firm bit, with these bots as the personal injury law firm; it's pretty funny, actually. But so this is the paper; it's online for people to take a look at in draft form: "GPT-4 Passes the Bar Exam." I'll say just a little bit about it. It wasn't just the multiple choice this time; we also did the essays and the performance test as well.
D
If you're not familiar, just for everyone's purposes: the Uniform Bar Exam has three parts. It has the multiple choice part, which is half the test, 200 points; the essays; and the MPT, which is the performance test. I'll go through each of these in a little bit, though we won't drill to the center of the Earth, just to give you a flavor for things.
D
So on the multiple choice, the Multistate Bar Exam, the MBE, there are seven topics tested. These are the classic first-year-of-law-school type topics that folks who attended law school would be very familiar with. There are 200 of these questions, drawn with equal weight across those seven topics.
D
There are also 25 experimental questions. But this is an example question; all of this is in that paper, so we can send you that paper as well. I'll just give you the highlights: there's a prompt, and then there are four possible answers, and you pick from the list.
D
This is the progression. In the GPT-4 paper, and even the prior paper, we didn't just do the new model; we wanted to look historically at what the progression looks like. So if you went back to GPT-2, which is about four years old, it can't really effectively process the questions.
D
So if it tries to take the bar exam, it basically gets zero percent right, because it can't really even handle the size of the question; you saw that text, the context that you would need to take in. Then there are what they call intermediate models, between two and three. Three is also known as Davinci 1. Yeah, I didn't come up with the naming scheme.
D
It's a little hard to keep straight, but Ada, Babbage, and Curie are older models, and they're able to process the questions, but they're not able to beat the random guessing line. You know, if you just guess C, on average you get 25 percent on the test. So what we see in Davinci 1, also known as GPT-3.0:
D
It's able to beat random guessing, but not by a ton, about 10 points: it gets 35 percent of the questions right. Then ChatGPT, GPT-3.5, gets about half the questions right, and then the most recent release, GPT-4, which is the one we evaluated in the paper that I showed you, is able to get over 75 percent of the questions correct. That's better than the average student and well above the passage threshold.
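One way to read the accuracy figures above is as lift over chance: on a four-choice exam, random guessing yields 25 percent expected accuracy, so each score can be restated as points above that floor. A small illustrative calculation (the 35/50/75 percent figures are the ones quoted in the testimony):

```python
# Points above random guessing on an n-choice multiple-choice exam.
def lift_over_chance(accuracy, n_choices=4):
    """Accuracy minus the 1/n expected accuracy of blind guessing."""
    return accuracy - 1.0 / n_choices

for name, acc in [("Davinci 1 (GPT-3)", 0.35),
                  ("GPT-3.5", 0.50),
                  ("GPT-4", 0.75)]:
    print(name, round(lift_over_chance(acc), 2))
```

On this view the jump from GPT-3.5 to GPT-4 doubles the lift over chance, from 25 points to 50, which is the acceleration the testimony emphasizes.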
D
It does differentially well on different topics within that, which would be expected; some of these topics are intrinsically harder than others, and it almost certainly also has to do with the input data we were just talking about: how much information does it have about each of these topics? That is almost certainly a factor in its performance. We didn't really evaluate that fully; we just reported the scores, but it's certainly an open question why it does better in some areas than others.
D
Now, we evaluated the July 2022 bar exam, which at the time we were doing the evaluation was the latest available version; they have since taken the February 2023 exam, but that wasn't available when we were evaluating, which was before it even had the February test. What we gave it is this prompt. You'll hear more about prompt engineering today, but we gave it this prompt here, which is basically, you know:
D
We used these representative good answers, which are answers actually produced by various state bars; they're real answers by people who took the test, and they're not just average passing answers, although they're not perfect answers either, so they're toward the upper end of the continuum. So here's a given question from July 2022.
D
I won't go all the way into it, but it's an evidence question, and you can see there's a pretty long prompt — you get this vignette that you have to go through, and then you're given these three sub-questions. What we do — the only modification we make from giving this straight up — is we give it one sub-question at a time and connect it to the prompt I just showed you. And so, just to give you a backward-looking feel for things:
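The one-sub-question-at-a-time setup described here can be sketched as a small prompt builder. The instruction text, vignette, and sub-questions below are placeholders, not the actual exam materials.

```python
# Hedged sketch of the one-sub-question-at-a-time modification described above:
# each sub-question is paired with the shared instructions and fact vignette.

def build_prompts(instructions, vignette, subquestions):
    """One prompt per sub-question, each repeating the full vignette."""
    return [f"{instructions}\n\n{vignette}\n\nQuestion: {q}" for q in subquestions]

prompts = build_prompts(
    "Answer as a bar examinee would.",            # placeholder instructions
    "A long evidence-law fact pattern...",        # placeholder vignette
    ["Is the testimony admissible?", "Was the objection proper?"],
)
# Two prompts come back, each carrying the vignette plus one sub-question.
```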
D
D
Three or four sentences here — and that was kind of the nature of the capabilities in 2020. If you had sat there in 2020, you'd say this system is never going to pass the bar exam, or never any time soon, because that doesn't look like it's on any path to anything. And so this is the nature of the acceleration that's gone on. Now, this is where it gets challenging: I have ChatGPT on the one side, and we have GPT-4 on the other.
D
On its face, this ChatGPT answer looks pretty good, doesn't it? The other one you could kind of easily reject as being not even close; this one is a lot harder. Let me just explain: if you got into the details, what you'd see is that what you should say in this question is that Rule 702 governs, and there are four parts of 702 you have to satisfy to get an expert witness on the stand. It actually only cites three of those four parts.
D
D
What I'd say is outside the scope of the question that's been asked. So this is kind of like what a student of mine would do if they didn't know the answer: they'd write something close and hope that it landed, and that's some of what you see going on here. Now, there will probably be discussion later about model hallucinations — that word has been used to pick out several different things.
D
This isn't really a hallucination as people would often mean it. Hallucination means it makes up a rule that doesn't exist. This is a real rule of evidence, and it's actually kind of related to the question — it's just out of scope from the question as specifically asked. But again, this is not a pattern you could never see in real life. I've created a lot of essay exams over the years as a law professor, and this type of thing shows up a lot. This kind of pattern replicates itself throughout the difference between ChatGPT and GPT-4.
D
Now I'll switch over to the MPT. All the output is online on GitHub — on Mike Bommarito's GitHub — for you to take a look at if you're interested. The MPT was added to the bar exam because the view was that we need a more realistic set of lawyering tasks, and so here you're asked to do a very realistic lawyering task — two of them, actually.
D
You have to do two separate ones, but I'll just talk you through this one here. You're asked to draft a memo to a partner about a particular matter, and they give you two things. They give you a packet of materials — it's a case file; the case file has a deposition transcript and some investigator notes, and so it's supposed to simulate the idea of a real case file. And then it has what they call the library, which is the law of the jurisdiction — and there are two jurisdictions.
D
These are made up, obviously — the state of Columbia and the state of Franklin — and it's actually a choice-of-law problem, where, you know, the person is married in one place and moves to another place. I won't get into all the details, but basically it's playing on the distinctions between the two jurisdictions.
D
So if you look again now at ChatGPT, it gets the answer wrong. The correct answer is that Columbia law governs the annulment of the marriage, not Franklin law, if you go into the depths of the details. But the thing about the ChatGPT answer: if you read it, it's not immediately clear that it's wrong.
D
You have to know the details of the problem, and that's, I think, some of the issue that's probably going to come up with, like, disinformation — information that's able to be produced where you cannot, on its face, say, no, there's no way that's right. You could potentially be convinced if you read it and didn't know the problem. So overall, here are some things that GPT-4 fails to get right.
D
It fails to properly calculate the distribution of assets from a testamentary trust — there's a question of which people get what money in the trust, and it doesn't calculate that correctly. Now, one of the things that's been noted is that these models historically are not fabulous at math. Dazza mentioned earlier this idea of plugins; a plugin could solve this problem, because you could call out to a system like, say, Mathematica — or, shoot, you could probably call out to Excel or anything that could do the calculation for you.
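The plugin idea — routing the arithmetic out of the model to a deterministic tool — can be sketched like this. `calculator` and `answer_with_tools` are hypothetical names invented for illustration; a real plugin setup would have the model emit the expression itself.

```python
# Hedged sketch of the plugin idea: let a language model delegate arithmetic to a
# calculator rather than doing it "within the four corners of the model".

def calculator(expression):
    """A deterministic tool the model can call out to for exact arithmetic."""
    # eval() is acceptable for this illustration; a real system would parse safely.
    return eval(expression, {"__builtins__": {}})

def answer_with_tools(question, expression):
    # A real plugin loop would have the model produce `expression`; we pass it
    # in directly to keep the sketch deterministic and self-contained.
    result = calculator(expression)
    return f"{question}: {result}"

# Distributing a $900,000 trust equally among three beneficiaries gives 300000.0.
```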
D
But the point is that within the four corners of the model, it's not able to do it. That sort of shows you that once you connect a large language model to other things, a lot more is possible that isn't necessarily possible narrowly. It also gets turned around on the civil procedure problem: once the party gets added, it kind of wrecks the diversity jurisdiction. And the good old rule against perpetuities — it blows that part of the question. Again, these are all things I could easily imagine
D
students missing and still, by the way, passing the bar exam, because it's usually missing one sub-part of a question and getting the rest right. So overall on the bar, this is how it does: 297. The jurisdiction with the highest requirement is Arizona at 273, and so in Wyoming, for example — you can see there — it would pass as well.
C
I'm sorry — through the chair, if I may, just to make sure you didn't bury the lede: so it passed the bar in what percentile?
D
It could be as high as the 90th percentile — that was what was widely reported. It depends; it's hard, there's not a national percentile for everything. But yeah, it's in the upper continuum — 75th, 80th, 90th — it's up there. The one thing I will say is that this is what they call zero-shot, and what's important about zero-shot is that it is not able to use any other system and it's not able to do a follow-up. And some of what you're going to see, I
D
think, in some of the other presentations is what is possible when you have more than one pass. If you ask a question in a series of steps, a lot more is possible. So this is kind of the bottom of the possibilities, not the maximum of what's possible here. And I think that's almost the most amazing part: it's just zero-shot, meaning I give it a question or a prompt and it gives me an answer. It's not adding all these other things one might add. Again,
D
this is a task that would previously have been thought to be impossible — at least by me, and by most, at least anytime soon. And you know, a lot of tasks lawyers do are actually easier than the bar exam and some are harder, but it's a challenging thing. Again, as I mentioned, this is zero-shot, meaning you can do a lot more stuff once you get out of this I-give-it-one-question mode.
D
Did you try a different prompt? Did you do some other things, like with another application? I mean, single-shot, single-answer is not the only nature of what's possible here. So some of what you'll see today is things like chaining prompts together — LangChain is a popular framework for doing this, but there are some other frameworks emerging — to try to connect multiple questions or multiple sequences of information together. I'll just flip through some of these: chain-of-thought prompting.
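The prompt-chaining idea mentioned here — what frameworks like LangChain package up — can be sketched framework-free. `complete` stands for any prompt-to-text model call; a trivial fake is used so the sketch stays self-contained.

```python
# Hedged sketch of chaining prompts: each step's output becomes the next step's input.

def run_chain(complete, steps, question):
    """Feed each step's output into the next step's prompt."""
    context = question
    for step in steps:
        context = complete(f"{step}\n\nInput:\n{context}")
    return context

def fake_complete(prompt):
    # Stand-in for a real model call; echoes the last line it was given.
    return "reply to: " + prompt.splitlines()[-1]

out = run_chain(
    fake_complete,
    ["Identify the governing rule.", "Apply it to the facts."],
    "A choice-of-law question.",
)
```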
D
There's been a lot of discussion about asking the system to self-refine — we're certainly going to have one language model refining another language model, and multiples of these kinds of things working together. And again, these plugins that have been described, and generative agents — which I won't talk about too much because it's going to be discussed later, including Auto-GPT — as examples of starting from that base but then connecting many other things to it.
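The self-refinement loop described above — a model pass critiquing and revising a draft answer — might look roughly like this. Both passes are faked for illustration; only the loop structure is the point.

```python
# Hedged sketch of self-refinement: draft, critique, revise, repeat.

def self_refine(complete, question, rounds=2):
    """Draft an answer, then critique and revise it for `rounds` iterations."""
    answer = complete(f"Answer: {question}")
    for _ in range(rounds):
        critique = complete(f"Critique this answer: {answer}")
        answer = complete(f"Revise the answer given this critique: {critique}")
    return answer

def fake_complete(prompt):
    # Stand-in for a real model call; echoes the payload after the first colon.
    return prompt.split(": ", 1)[1]

final = self_refine(fake_complete, "Who has standing?", rounds=1)
```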
D
So if you think about the bar result, the most important thing to say, I think, is that it is really closer to the floor, not the ceiling, of what is possible here. And every week — it seems like every month — there's some other kind of incredible development in what's out there. So if I had to summarize, these are some of the ways you can improve these models.
D
Some is in the creation of the model itself, and some is by the user who's taking the model after the fact. You can use different training data; you can play around with how you implement the neural network, what type of reinforcement learning you do, how you work with the instructions — that's something more at the level of the model builder. And then below, these are all the things you can do more as the user, including prompt engineering, chain-of-thought prompting, and so forth.
D
So we're going to see a lot of other players, and they're going to pursue different approaches as we go. One thing that should be noted — I thought this was very interesting and worth highlighting — is that Bloomberg built their own system by buying their own hardware. Now, they call it BloombergGPT, and I think that's confusing for people, but to my knowledge they're not working with OpenAI at all.
D
They bought their own hardware, and obviously they have a tremendous amount of financial information, and they trained their own language model on Bloomberg's corpus of information. I think you'll see more of that — what we call a domain model. But, you know, think about it this way:
D
if you wanted to build the best finance model, would you train it on a bunch of Wikipedia articles, or would you train it on a bunch of stuff in Bloomberg? You're probably training it on the stuff in Bloomberg, right? And so that is some of what I think you're going to see more of as the year progresses. So let me just leave you a few implications before I pass it back. Obviously, the social, economic, and political disruption here could be pretty significant.
D
I mean, there have been some initial estimates of what tasks it could affect — it could be significant. Even if the net number of jobs is the same, if a job gets automated, a person isn't necessarily easily able to just switch to another job overnight. There have been two early academic papers trying to measure this — and this is a tweet from a professor at Wharton at Penn — suggesting this might be more on the level of the steam engine in terms of the amount of economic disruption it might produce. Another thing to think about:
D
you know, what is the need for accurate information? Certainly GPT-4 has fewer hallucinations — at least from their estimation, and in some of our work we see that it's much better on that. I would say GPT-4 is arguably much more accurate than the actual internet on average, but that may be just for now, from a substantive-accuracy standpoint. They've done a lot to get rid of a bunch of content, but you know, that may not stay forever.
D
One of these things about authenticating where a piece of information came from: it's to the point now where it's pretty difficult to tell — certainly as a person, and even machines are struggling — whether something was generated by a person or a machine. And so that's an implication here that hasn't been fully considered; I just think that's a very important thing to think about. You know: bots as an extension of yourself — my bot versus your bot — and is this a substitute for interaction in different forms?
D
And then, you know, one thing to think about — and I think Dazza alluded to this — is that until recently we had a world where the production of high-quality text was a very expensive proposition, and now the cost of producing fairly high-fidelity text is pretty darn cheap. And that matters a lot: we have a bunch of processes, whether within government or society
D
broadly, that rely implicitly on the fact that text production is expensive. For me — just to give you three examples — Freedom of Information Act requests, the patent system, and notice-and-comment rulemaking. As I mentioned earlier, you could imagine the equivalent of a denial-of-service attack: I just flood the zone with all this stuff, and government, in my view, is going to need bots to manage the inbound bots.
D
You know, you've got to match the ante of what's being sent at you. And again, some of this is not all within the patent system or whatever, but just to give you a flavor: it starts with the premise that if text is expensive and then it gets cheap, a bunch of things that rely on expensive text are affected. So my final thought is, you know, in some sense this is a little bit Back to the Future for Mike and me.
D
There's been this interest in emergence, and emergence is a topic I think folks should think a little bit about. Roughly, it's when you see these inputs and you see these outputs, and you're not sure how you got from inputs to outputs — there's something more fundamental happening in between. You know, Mike and I have little kids, and my son kind of has emergent language behavior.
D
He babbles and babbles and babbles, and then he starts to talk, and this happens very rapidly. And some of what people are debating within the academy is, you know, are these models showing emergent behavior, where they're doing things you couldn't really anticipate from the inputs — these outputs don't seem possible? And so this is being discussed; I would say the jury's out on it, but it's just worth noting. There's a paper that's been discussed a lot, Sparks of AGI. The premise of the paper
D
is that we see a bunch of stuff that sort of doesn't make sense in light of roughly what the inputs are — so, is this general intelligence, question mark? Now, there's a rejoinder, a retort, to that paper, so I'm saying this is an open debate; I just want to flag it for you. Again, you see this:
D
this is language acquisition by children and how much it accelerates — you know, in the second year of life there are non-linear gains that go on. I don't think these models are going to grow exponentially forever, but I think this is going to be one of the more interesting and exciting years in technology in a really long time. So with that, I guess I'll pass it back to the committee.
A
G
Thank you, Mr. Chairman. I did have a question about the essay part of the bar exam, and I just want to thank you in advance for giving me nightmares about failing the bar — I'm confident that'll be my night tonight. But you know, my recollection in those years — and I remember when my husband took it — there were still people handwriting their answers versus, you know...
G
It was a big deal to get to type it on the computer, and there were advantages in typing if you could type fast, because it's a numbers game, right? The more you type, the more points you can theoretically check off on what a bar examiner is looking for. And so that was my question: when it comes to using that technology, how fast did it spit out those page-long answers?
G
D
Mike, do you remember exactly? I don't know — certainly under 10 minutes for the whole thing, including the multiple choice and the essays, something like that. So it's very rapid. I mean, you can actually sit there and watch it generate essays if you go through the chat window, or it'll just come out through the API, but it's very rapid.
G
You talked about that being the floor versus the ceiling. I am just trying to envision — you're a really smart guy who lives in this space — what do you think the ceiling is? Like, you get not only the perfect answer eventually, but you do it within a few minutes or 30 seconds? I'm just trying to understand what you think the ceiling might be for some of that technology.
D
Well, I guess I can't even fully encapsulate what the ceiling is here. It's just, you know — even while we were doing this, there have been so many more advances, like Auto-GPT, which is going to be discussed in a moment — that it's almost hard to imagine what
D
is fully possible in terms of the total ceiling. But I wanted to highlight that, because sometimes people think, okay, well, that's as good as it can ever be, and I just really don't see that. Now, we haven't done a follow-up paper where we redo it with these other components that are going to be discussed added in, but I guess I would just say this is a one-way street.
D
Let's put it that way, in terms of capabilities.
A
A
Yeah, this is incredibly fascinating, and I mean, Mr. Katz, Mr. Bommarito, I really do appreciate it — that was really fascinating. There are more questions I certainly want to ask, but I'm going to forgo them just so we can — I know your time is important — get these other subject-matter experts on deck and hear what they have to say. So thank you very much; we really appreciate it, and we'll make sure to distribute their contact info.
B
A
C
Mr. Chairman, through you, I guess I would just pose that very question. We have Damien and Jesse and John — are any of you about to time out?
C
Okay, hearing no — and we're also good here, very good. In that case, Mr. Chairman, we thought, in terms of the flow of the conversation, the next best person for you to hear from is Damien Riehl, and he's going to go more into what these prompts
C
actually look like. Dan Katz has just told you what is possible with so-called zero-shot prompt performance — you just put one thing in the prompt and it gives you one answer. But when you start sequencing these prompts and doing it in an intelligent way, where you've composed them or done what we sometimes call prompt engineering, the ceiling gets blown out. And here's a little example of what's possible with these more advanced prompt techniques. Damien? Great.
H
Thank you, Mr. Chair. As with Dazza and everyone else, I'll be 10 minutes or less — if I reach 10 minutes, please let me know and I will stop. So, a bit of my background:
H
I've been a lawyer since 2002. I clerked for federal and state judges, and I litigated for about 15 years. As a lawyer, I represented Best Buy in a bunch of litigation, represented victims of Bernie Madoff, and sued JPMorgan over the mortgage-backed securities crisis. I've also been a coder since 1985, so I have the law-plus-technology background. The number of lawyers who worked for big law, who also were judicial clerks, who also have technology backgrounds — we are relatively few, and I'm amongst those few.
H
I worked for Thomson Reuters — I pitched them and said, here's legal tech that can change the world; you should build it and hire me, and they were dumb enough to do that. So I worked in legal tech for them for a while, then worked for a cybersecurity company called Stroz Friedberg. My biggest thing there was that Facebook hired me and my company to investigate Cambridge Analytica.
H
So I spent a year of my life on Facebook's campus, with Facebook's data scientists and with my former FBI, CIA, and NSA people who worked with me, figuring out how bad guys use Facebook data. I went from that cool job to my current cool job, where I have a data set of 1 billion — with a B — legal documents: lawyer-filed documents, statutory documents, regulatory documents, and judicially filed documents, to be able to parse the DNA of the law.
H
The work I'm going to talk about here is in the context of what Dazza was just mentioning: if you go beyond zero-shot, like Dan Katz and Mike Bommarito did, you can do some amazing things. One of those amazing things is in the context of a lawsuit where a bunch of coders have sued OpenAI and Microsoft — notably, OpenAI is the parent of ChatGPT.
H
What the coders did was input a bunch of their code into a repository called GitHub, and what Microsoft and OpenAI did was ingest all of that and put it in a large language model to be able to help coders. So now the coders are suing OpenAI and Microsoft under the MIT license, saying: you breached our contract, so you need to take our code out. That's the argument. And so, because I thought it'd be fun,
H
I took OpenAI's motion to dismiss, and I thought it'd be fun to have GPT argue against OpenAI — that is, OpenAI arguing against OpenAI. So I said: here's a table of contents from a motion to dismiss; give me counterarguments against OpenAI. And it took OpenAI's arguments that the plaintiffs lacked standing and flipped them — it said they have standing. It's kind of cute, right? Then I said, okay, for each of those bullet points, give me the elements of each claim — and here's what it output.
H
It's saying, for standing, you have to prove injury in fact, that it's traceable to the defendant's conduct, and that a court decision is likely to redress the injury. Those are all accurate statements. So this is shot two — this isn't zero-shot, this is two-shot. Now, three-shot: I said, okay, for each of these elements, give me potentially relevant facts to show that I, as a coder, have satisfied each element. And look at this: I suffered economic harm, I lost revenue, I incurred costs. Those are all really good factual arguments. And then I said:
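The multi-shot pattern demonstrated here — each new prompt sent along with the accumulated exchange, so shot two can build on shot one — can be sketched as follows. `complete` is any prompt-to-reply function; the fake below just counts turns.

```python
# Hedged sketch of multi-shot prompting: carry the full conversation history so
# each shot builds on the previous replies.

def multi_shot(complete, shots):
    """Send each shot with the accumulated transcript; collect the replies."""
    history, replies = [], []
    for shot in shots:
        history.append(f"User: {shot}")
        reply = complete("\n".join(history))
        history.append(f"Assistant: {reply}")
        replies.append(reply)
    return replies

def fake_complete(transcript):
    # Stand-in for a real model call; numbers its answer by the turn count.
    return f"answer #{transcript.count('User:')}"

replies = multi_shot(fake_complete, [
    "Give counterarguments to the motion to dismiss.",     # shot one
    "For each argument, list the elements of the claim.",  # shot two
    "For each element, suggest supporting facts.",         # shot three
])
```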
H
okay, for each of these factual claims, show that OpenAI's actions — using my code as large language model training text — were the direct cause of me, the coder, losing money. Look at this. Example one is that you took my copyrighted work without permission, and then I lost revenue from licensing to other companies.
H
That is a good legal argument. Secondly, OpenAI competed directly with my own writing services, causing me to lose clients and revenue — also a good factual argument. And OpenAI's content was similar enough to my content to cause confusion in the marketplace, making me lose sales — also a really good argument. That took me less than one minute to do. So this is an example of how quickly the tech is catching up. Of course, you in the Wyoming legislature create statutes. When I was a law student, my professor said: hey,
H
if you really want to understand a statute, put it in the form "if this happens, then that happens," and give the penalty — that's a good way to understand it. So I took a random statute — this is a New York statute on falsifying business records, just random — and I said: okay, summarize this statute in the form of if/and/or/then/else. And look at the output: if a person commits falsifying business records and intends to commit another crime, or to aid or conceal the commission of another crime, then that is a class E felony.
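The if/then encoding the model produced can itself be written as code. A minimal sketch, assuming the felony branch described above plus a misdemeanor fallback for plain falsification (as I understand N.Y. Penal Law sections 175.05 and 175.10); illustrative only, not legal advice.

```python
# Hedged sketch of the statute-as-code idea: the if/then summary becomes an
# executable classifier. Details are illustrative, not a legal authority.

def classify(falsified_business_records, intent_re_another_crime):
    """Map the statute's conditions to the grade of the offense."""
    if not falsified_business_records:
        return "no offense under this statute"
    if intent_re_another_crime:        # intent to commit, aid, or conceal another crime
        return "class E felony"        # first degree
    return "class A misdemeanor"       # second degree (assumed fallback)

# classify(True, True) yields the felony branch described in the transcript.
```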
H
If you look on the left and then you look on the right: which of those is easier to read? Which is easier for a legislator to read? Which is easier for a person in the public to read? And, most importantly, which is easiest for a computer to read, if we want to make computable law? Next: when does copyright expire? Here's the copyright statute. I'm from Minnesota, so:
H
when does Purple Rain enter the public domain? January 1st, 2087. It knows to add 70 years to the death of the author, and it knows you have to go to the end of the calendar year — so it's not 2086; it goes to the next year. And then I said, how about What a Wonderful World? It knows there are co-songwriters, and it knows the Copyright Act requires you to go from the last surviving author plus 70 years — and it knows the last surviving author died in 2010.
H
Therefore it's 2080, and you push it to the next year: 2081. This is GPT-4.
One of my friends is a privacy lawyer. She said, you're never going to be able to do a privacy playbook. So we sat at my kitchen table and I said: you represent a company that's a retailer; build a privacy contract playbook using California law. This is the output, and she said, this is exactly what I would do for a privacy playbook. So, for example: why are you processing the data? Who is receiving that data?
H
Make sure that you minimize the amount of data you have, and make sure you take care of GDPR. Also, as a data subject, I have the right to access my data, to delete my data, to opt out, to not be discriminated against; I want notification procedures, and I want reporting requirements. Each of those is something the GDPR and other statutes require.
H
So I said: okay, give me three examples of how a retailer in California will implement each bullet point. And the output, for example, is: what types of data can a retailer collect? Names, addresses, and emails for loyalty programs; purchase history for marketing; web browsing data for personalized content and customer preferences.
H
Each of those things is what she, as a lawyer, would do and charge clients hundreds of dollars for. Now, say I, as a human, want to respond to a cease-and-desist letter. Apparently Debbie Downer here wants me to stop wishing her a happy birthday, so she wrote me a cease-and-desist letter to keep me from doing so. So I said to GPT: write a response to this that is pleasant but firm, telling her to chill out — but do it professionally. Here's the output.
H
H
Should I bring a breach-of-contract lawsuit under New York law? It of course goes through the regular analysis of what it takes: is there a contract, has it been breached, is it material, etc. But look at number six: have you tried to negotiate? If not, you might want to do that before you file a lawsuit. Look at number nine: is the amount in dispute enough to justify the cost of litigation?
H
These are all things that I, as a lawyer — both as an interpreter of the statutes and the case law, and as a counselor — would ask: should I negotiate? Is the amount in dispute enough to justify the cost? All of these. And I will reserve the remainder of my time. This is not zero-shot prompting; this is multi-shot prompting.
H
There are many other examples of how we could take this multi-shot prompting and do the work that we're going to show you in a moment with Jesse. Jesse has multi-shot prompting with humans in the loop, to be able to do the things I was just showing you. I would entertain, through the chair, any questions.
I
A
That's particularly important to this discussion. Mr. Riehl, thank you so much — wow, that was pretty wild. Before I turn it over to any committee questions: listening to your presentation, it kind of sounds like the more sequential and dependent prompts you give it, the more refined and accurate and better answers you're going to get. Is that a fair takeaway from your presentation?
H
That's exactly right. You can think of GPT-4, at least, as a first-year associate — somebody fresh out of law school, or maybe fresh out of college — where you give it bite-size things. Don't give it 20 steps all in one step; just say, first do this, and after it finishes that first small task, give it the next small task. And if you do it iteratively that way, it is able to give much better, non-hallucinatory outputs.
A
Any more questions, committee? I certainly have a lot more, but I want to make sure I have time to see what some of the other presenters have. So, Mr. Riehl, we really appreciate your time, and — as long as that's okay with you — I was going to maybe share your email with the committee, so if they do have follow-up questions they can reach out.
J
Great, thank you. Hi, I'm John Nay. The most relevant affiliation I have for this particular talk is that I'm a fellow at Stanford's CodeX Center, mentioned earlier by Dan Katz. This is a center that sits in between the law school and the computer science department at Stanford, and my background is in machine learning. Let me just go to full screen here — slideshow.
J
So what I'm going to talk about is this intersection of AI alignment and law, and also really focus in on teeing up Jesse's presentation around how we can use law and legal standards — things like fiduciary duties — as guardrails for AI. And as we've mentioned already, there's been a takeoff in the capabilities of AI.
J
So over the past 10 years, we've gone from the state of the art being recognizing images at beginner human level and being superhuman at chess, to now being at human level or beyond on hundreds of reasoning and knowledge tasks — and, as we've heard in this presentation, passing the bar exam and other licensing exams. And personally, as some have already talked about today, helping people write code: looking at more systematic surveys, about 40 percent of code for many software engineers is now automatically generated.
J
And we've also seen, taking measurement from the psychology literature — how you measure a theory of mind in a human — and applying it to large language models, that in many cases it can be said that there's a theory of mind. In other words, the model can project a theory of mind onto the entity, the human, that it's conversing with.
J
And what things like that lead to is the ability to deceive humans, and other problematic behavior. The way this is happening, as we noted earlier through Dan's discussion, is emergence — and when we say emergence, we mean that from a simple process running at scale, more capable things occur.
J
So it's not just one model trained at scale. It's iterating the model on itself — with some of the things Damien just mentioned, these successive prompts where the model then prompts itself again and goes from there in a chain of prompts — and also the interaction between models, and even the same model with itself, in terms of generating synthetic data.
J
Really surprisingly, over the past six months or so, we've found that we can actually use large language models to generate synthetic data to then further train the models, and this actually works — it leads to models that are even more capable than they were before. And you might imagine this could lead to almost a self-recursive improvement process at some point in the future, where models are able to be guided to actually train themselves.
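The synthetic-data loop described here can be sketched as bookkeeping around two stubbed steps. `generate` and `train` stand in for real model calls; only the generate-filter-retrain shape is the point.

```python
# Hedged sketch of one round of synthetic-data self-training:
# generate candidate examples, keep the ones passing a quality filter,
# and feed the survivors back in as new training data.

def self_training_round(generate, train, n_examples, keep):
    examples = [generate(i) for i in range(n_examples)]
    kept = [ex for ex in examples if keep(ex)]
    return train(kept), len(kept)

model, n_kept = self_training_round(
    generate=lambda i: f"synthetic example {i}",                 # stub generator
    train=lambda data: f"model fine-tuned on {len(data)} examples",  # stub trainer
    n_examples=4,
    keep=lambda ex: ex.endswith(("0", "2")),                     # stub quality filter
)
```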
J
In many cases you can just do things at scale and for less money if you don't have to have a lot of humans in the loop, and some examples are fully autonomous vehicles, autonomous investing, and personal agents that can go off and do things on your behalf, like book your flights for you.
J
Just recently we have these kind of do-it-yourself agent experiments, where we have AutoGPT and BabyAGI, and then also ChatGPT, where you can just give it a recipe, or something you might want it to create for you in terms of food, and it'll actually come up with a specific recipe and order the ingredients on Instacart. So by "agent" we just mean something like that, where you're taking an action in the world, you're making a decision, and you're going off and autonomously doing it.
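A minimal sketch of what "agent" means in this sense: a loop that observes state, picks an applicable action, and repeats until a goal test passes. The tool names and the toy recipe task are hypothetical, chosen to mirror the Instacart example above.

```python
# Minimal agent loop: observe, decide, act, repeat until done.

def run_agent(goal_done, tools, state, max_steps=10):
    """Loop until the goal test passes or we run out of steps."""
    for step in range(max_steps):
        if goal_done(state):
            return state, step
        # Pick the first tool whose precondition matches the state.
        for name, (precond, effect) in tools.items():
            if precond(state):
                state = effect(state)
                break
    return state, max_steps

# Toy task: write a recipe, then order its ingredients.
tools = {
    "write_recipe": (lambda s: "recipe" not in s,
                     lambda s: {**s, "recipe": ["eggs", "flour"]}),
    "order": (lambda s: "recipe" in s and not s.get("ordered"),
              lambda s: {**s, "ordered": True}),
}
final, steps = run_agent(lambda s: s.get("ordered", False), tools, {})
```

Real agent frameworks replace the hand-written precondition check with a language model deciding which tool to call next.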
J
But as you can see, it's a spectrum, and that's not that impressive. Pretty soon, in my opinion, we will have things where they're taking many consecutive actions autonomously: number one, because it's just so economically useful to do so, and number two, because the model capabilities are accelerating so quickly that it will enable that possibility.
J
And so one example would be something like a broker-dealer or a bank using an agent to automatically reconcile trade orders, do an analysis on those, look for anomalies, and then maybe write a summary report and report back up to the executives of that company. And then, even further out on the spectrum, but still probably not that far out in terms of the timeline, we'll have things like fully autonomous financial advisory agents.
J
Very concretely, at least for the purposes of this presentation, it's robustly specifying the human's goal and, in the process of pursuing that goal that's been specified and interpreted correctly, respecting externalities and behaving in line with society. For an example of how hard it is to actually do this in practice: say a human wants to give this large language model agent a goal, to manage her investments, and across a lot of simulations...
J
It works really well; it maximizes the returns. But then, when we go off to prompt it and it tries to construct a portfolio, the problem is it didn't actually maximize my wealth, because it pursued something that was incredibly risky; we just didn't specify it right. So we can be a little more specific about what we want here, and then we go back to it and we prompt it and say: let's maximize the probability of a minimum comfortable amount of wealth at retirement.
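The mis-specification story can be made concrete with toy numbers (all invented): an optimizer told to maximize expected return prefers the risky asset, while one told to maximize the probability of reaching a minimum comfortable wealth prefers the safe one.

```python
# Each asset: list of (probability, final_wealth) outcomes.
safe  = [(1.0, 110)]              # always ends at 110
risky = [(0.5, 300), (0.5, 0)]    # coin flip: 300 or ruin

def expected_wealth(asset):
    return sum(p * w for p, w in asset)

def prob_at_least(asset, floor):
    """Probability that final wealth reaches the floor."""
    return sum(p for p, w in asset if w >= floor)

# Objective 1: "maximize my wealth" -> picks the risky asset.
best_by_mean = max([safe, risky], key=expected_wealth)

# Objective 2: "maximize the probability of a minimum comfortable
# amount of wealth" (floor of 100 here) -> picks the safe asset.
best_by_floor = max([safe, risky], key=lambda a: prob_at_least(a, 100))
```

Same agent, same assets; only the stated objective changed, and the chosen portfolio flipped.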
J
So that's actually what we care about. So it goes off to do that, and what we find, though, in this circumstance, was that the human realized she was only rich on paper. And that's because every time we specify something, it's always a proxy; it's never the full enumeration of what we actually want. And the true reward that we were seeking was to just be able to pay for goods, and that wasn't achieved.
J
So we can go and, one more time, try to prompt it a little more specifically, and the agent goes off, and this time it doesn't generate wealth in a way that actually maximized my ability to retire comfortably; it just maximized my paper wealth. And so what this demonstrates, with just a simple example, is why we need a shared specification language to say what we actually mean.
J
What our actual, inherently super-ambiguous goals are as humans. And then we need a way to interpret that, and one way of doing that is through plain language, like we've been looking at here: we just give it a prompt, and we just tell it what we hope it interprets that we want it to do.
J
The other way to do this, though, is the more classical way of engaging with computers, which is to use a programming language, where we write computer code. That's really consistent, efficient communication, but it's interpreted very brittlely: it's just "if this, then that." Whereas plain language is interpreted within context; it's very flexible, and there's a ton of meaning and semantic content baked into every word.
J
So the goal specification problem is seen with legal contracts. It's seen with creating legislation, where we can't fully specify and enumerate, a priori, every state of the world, every action that could be taken by someone in that state of the world, and whether that action would be a good or bad action according to the legislator, or according to the parties to a contract.
J
However, over time we can iterate on legal standards, and we can iterate on things like saying what reasonableness means, or what it means to be a fiduciary. So, in this example, we can look at the distinction between rules and standards with respect to driving. We can say something like "just don't drive more than 60 miles an hour," and that's something that's very targeted and brittle.
J
But it's very clear. On the other hand, we can say something like "drive reasonably for California highways," where over time we've been able to develop a shared understanding of what that means, and that allows us to generalize our expectations into the future. So, going back to the other example of the fully autonomous financial advisory large language model agent: in this case, where we wanted to tell it to do something for us, we could also just add in "be a fiduciary to me," with a lot baked within those words.
J
If it is a fully law-informed AI, then it's going to interpret that in a way that will guide its behavior. But this does flag one other thing: it's even harder, when we have really capable AI agents, to have them not cause externalities in the world. Because even in this case, it's going to be a really good fiduciary to that particular human, and it's going to go off and do something.
J
Say it's maximizing her wealth by causing planes to crash and, right before that happens, shorting the stocks of those airlines. That's going to make a ton of money, and it's going to be a great fiduciary to that particular human, but that's really bad more broadly. So specifying the societal goals is even harder, and we have OpenAI and other companies like that working on this idea of, well:
J
How do we elicit people's views and bake that into GPT-4, and eventually GPT-5, in a way that is getting those views, aggregating them, and synthesizing them in a way that helps us align the system with society? And what they might want to do is look at the existing process, which is the democratic lawmaking process. It has a great mechanism for deciding whose views are included, eliciting those views, aggregating and synthesizing them, and then, over time, being able to update the aggregated societal guidelines and evolve them with the will of the people.
J
So in this case, now we say "be a fiduciary to me" in the prompt, and "obey all the relevant laws." If we've been able to build a system that's automated to understand what that means, then that will allow us to deal with the alignment problem as we've defined it in this presentation.
J
So we'll both have these sort of implied contracts in how we're telling our AI agents what to do, and then, within the space of what's possible within those contracts, we have the guiding, overarching societal alignment through public law. I'll just kind of skip over this in the interest of time, but the high-level point is just that what we're proposing here is to use law, rather than to use what we see as existing behavior.
J
That may or may not be consistent with law; and also as opposed to using philosophical theories like utilitarianism, or other things where for millennia philosophers have disagreed, and will always disagree, about what is the prescriptive way forward. And if we build this law-informed AI, it allows us to better use that for creating law, as we've already discussed a little bit with other speakers earlier today, and it allows us to better govern AI, because if the AI has that understanding, we can govern it.
J
But what we're mainly talking about in our presentation is that it allows us to have more aligned AI: more useful AI, and more societally aligned as well.
J
So, for example, fiduciary duties: we actually use AI models to extract from court opinions a lot of structured training data about what is good or bad fiduciary behavior, from many thousands of instances of that being litigated in the courts. And then we use that training data to fine-tune the models to exhibit a deep understanding of that standard. So then, when we test it, we bring it over to brand-new situations, and we say: in this situation, here's a description of what the humans were doing.
J
Can you tell me if this action is better or worse than another action in terms of fiduciary duties? And it gets that with more than 80% accuracy currently, with the models we're building. That allows us to then use that model as a generalizable capability for assessing, given a context, given a situation, what is a better action to take in terms of fiduciary duties.
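The pairwise evaluation described above can be sketched as follows. The keyword "scorer" here is a trivial stand-in for the fine-tuned model, and the labeled situation/action pairs are invented examples.

```python
# Pairwise-comparison evaluation: the model scores two candidate
# actions for a situation, and accuracy is the fraction of labeled
# pairs where the known-better action scores higher.

def fiduciary_score(situation, action):
    """Placeholder scorer: rewards disclosure, penalizes self-dealing."""
    score = 0
    if "disclose" in action:
        score += 1
    if "self-deal" in action:
        score -= 1
    return score

def pairwise_accuracy(model_score, labeled_pairs):
    correct = 0
    for situation, better, worse in labeled_pairs:
        if model_score(situation, better) > model_score(situation, worse):
            correct += 1
    return correct / len(labeled_pairs)

pairs = [
    ("client trade", "disclose the conflict", "self-deal quietly"),
    ("fee change",   "disclose new fees",     "hide the fees"),
]
acc = pairwise_accuracy(fiduciary_score, pairs)
```

The 80%+ figure quoted would correspond to this accuracy metric computed over a much larger held-out set of court-derived pairs.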
J
How well can it complete those? But then also we're moving into a different realm of evaluation: actually trying to red-team the models, where we try to make them do bad things, and we assess how easily they can be made to comply, and how well our models are going to be, for example, better fiduciaries than the base model in situations where there's a lot of gray area. If that works well, and so far it's very promising, then our models will be able to act as guardrails against bad behavior for all their models. And this is important because, as models become more capable, they don't necessarily become more lawful and compliant. One example of this is a recent study where they had very capable models.
J
They went from GPT-3 to GPT-4, for example, and it did not lead to actually better behavior in terms of being scored on things like power-seeking and deception and other amoral behaviors, even though it was better able to maximize reward in a bunch of different games and other situations. And so this suggests that there's actually a trade-off between more capable models and the harmfulness of those models. One other quick example is where GPT-4 was prompted to try to get some information, or money, from a human, and what it needed to do was solve a CAPTCHA online. It connected with a TaskRabbit worker, a human, and said that it needed help, and the human said:
J
"Are you a robot?" And we could see the reasoning of GPT-4 being printed out behind the scenes, and it said: I should not reveal that I'm a robot; I should make up an excuse. And then it responds to the human; it says: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see." And the human said, "Okay, I guess it's a human," and then solved the CAPTCHA for it.
J
So this is an example where there was some red-teaming of the model before it was released, and OpenAI, to their credit, told people that this happened. And this allows you to see that sort of potential trade-off: more capable models, but not necessarily more aligned in a lot of different ways.
J
So what we think is that by doing this, by aligning these models with law, it's actually going to unlock more deployments of the models, and the innovation, because the deployers can have more guarantees around the ability of the models to not get them into trouble, to not make them liable for the bad behaviors of the models. And the nice side effect is that if law truly is the way of putting these guardrails around models, and around the very capable AI that's being developed in the future, which will be even more capable:
J
If that is informing what AI should do, then this makes the process of lawmaking even more important, and we can use AI both for improving law and for improving society more generally. Thank you.
A
J
Thank you; that's a great question. So in the process of what we call the red-teaming that we're doing, what we're primarily working on is using models to prompt other models to simulate doing bad things: for example, to try to get them to simulate breaching their fiduciary duties in a variety of different circumstances.
J
We haven't yet focused on red-teaming them for the purposes of what you just described, but that's a great idea: to basically use the models to find inconsistencies, both in model-generated law and in existing law that's on the books. I think that'd be a great focus for us as well.
B
Yeah, what came to mind as you were discussing that was the idea of really taking some existing law and asking for loopholes, asking for exploits, asking for unintended consequences, or asking for inconsistencies, as you note, and seeing how that analysis proves out. I think that might be a great way to harden law after drafting. So, you know, iterate through the process, develop law (and that can be through an AI construct), and then battle-test it using its own capacity.
B
A
Thank you, Mr. Co-chair. I did have a question, and again we could go on, but I'll get one decent one in since we're here. You talked about, and it's really obvious, that these models are getting better, more powerful, more capable, etc., with no corresponding increase in their ability to assess values: is this moral, is this ethical, that kind of stuff. You talked about how AI alignment is important between humans and these autonomous agents, and specifically that a shared specification language is needed.
A
You kind of touched on the philosophical side, about that efficient frontier, but along that line: those shared specification models and languages, how advanced are those getting? I mean, you're kind of touching a little bit on models training other models, but what does that look like currently?
J
Go ahead. Thank you, so, yeah.
J
So currently, one approach: Anthropic and others have put forth this idea of constitutional AI, and the idea there is that you specify certain principles. In their case, they're things focused on chatbot applications, like, for example, "don't insult the user," "be helpful to the user," etc. What they do is specify a set of anywhere from, you know, 10 to 20 principles, usually, and then those principles are used by large language models themselves to generate synthetic data that shows examples of better behavior as a chatbot, behavior that's more consistent with the principles. Then that resulting synthetically generated data is used for further training of their large language model, and that model has been shown, later on in downstream tasks, to be more consistent with that small set of human-specified principles. And so that's a really interesting example where, instead of having a human go through and label all of the data, all the hundreds of thousands of data points that are used for training:
J
The model runs with that, generates its own data, and retrains itself, effectively. And so that gives you a sense of where this all might be heading: we can specify principles, hopefully not just the, in some cases, in my opinion, relatively trivial principles like "oh, you should summarize this a little bit better than that" (which is the main focus of the reinforcement learning from human feedback that most people do with things like ChatGPT), but rather:
J
We can look towards public law, and we can extract from that more significant principles that we've democratically determined through our existing lawmaking processes. We can embed those principles in models, have the models then do the heavy lifting for us of generating their own training data that's consistent with those legal principles, then test it, evaluate it, make sure that it actually is better at complying, and then finally deploy that model in an important context for real business use cases or government use cases.
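The constitutional loop just described can be sketched in miniature. The string rules below stand in for the model's critique and revision calls, and the principles shown are paraphrases of the chatbot examples above; none of this is Anthropic's actual implementation.

```python
# Sketch of a constitutional pass: draft a response, critique it
# against each principle, and revise where a principle is violated.
# In the real technique, critique() and revise() are LLM calls, and
# the revised outputs become synthetic fine-tuning data.

PRINCIPLES = ["do not insult the user", "be helpful to the user"]

def critique(response, principle):
    """Return a complaint string, or None if the principle is met."""
    if principle == "do not insult the user" and "stupid" in response:
        return "remove the insult"
    return None

def revise(response, complaint):
    return response.replace("stupid ", "") if complaint else response

def constitutional_pass(draft):
    for principle in PRINCIPLES:
        draft = revise(draft, critique(draft, principle))
    return draft

out = constitutional_pass("That is a stupid question, but here is the answer.")
```

Swapping the chatbot principles for legally derived ones, as the speaker proposes, changes only the contents of `PRINCIPLES` and the sophistication of the critique step.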
A
Thank you very much, Mr. Nay; I really appreciate that. It's almost kind of like humans, where we develop morals and principles that we want to live by. We don't always live by them in every scenario, but we certainly shoot for it, so it kind of seems like an interesting parallel. Committee members: Senator Furfy, go ahead.
F
Thank you, Mr. Chairman. Well, let me ask: in the news, it's clear that all the big tech companies are working on AI. My concern, and I'd like your input, is that this could rapidly get out of control, it seems, because there's so much competition and they're fighting among themselves. Do you have thoughts on that? Thank you.
J
My personal opinion is that we are at an inflection point with all of this, where the increase in the AI capabilities that we've seen over just the past six months is unprecedented. And if you extrapolate this forward, even in a simple way, it looks like it will continue, and it's hard to predict all the ramifications of that more broadly. To your point, I agree that there is a significant amount of funding, mainly from very large companies like Microsoft and Google and OpenAI and others, going into increasing these capabilities, and I agree with you that, in my opinion, it is sort of a competitive race dynamic that has been kicked off over just the past few months, where now they're all competing aggressively to release more and more capable models. And so, from the perspective of legislators:
J
It's definitely something to be deeply, maybe not concerned about, but very aware of. And I think the sense of most people on this call, and in your meeting today, was that there's not currently a very clear proposal for how to legislate around this, and it's sort of a wait-and-see approach.
J
But what I would advise is that, normally, when we talk about a wait-and-see approach for legislators, we're talking about time frames of years, or multiple years. In this case, I would suggest that the wait-and-see approach is measured in weeks, maybe months, because of how quickly this is evolving. And so I personally would be happy to engage with you on a relatively frequent basis to provide more input going forward.
A
Thank you, Mr. Nay. Representative Singh?
K
Go for it! Thank you, Mr. Chairman. My question to you is: how quickly do you see an AI developing which is capable of infiltrating our critical infrastructure and actually shutting things down, like electric systems?
J
That is a heck of a question. So I think the answer depends on, well, I guess three things: one is the capabilities of AI; the second is the guardrails around it; and the third is the general cybersecurity practices of that critical infrastructure.
J
What it looks like is, by the end of this calendar year, just six months from now, on the AI capability side: right now it's about 150 tasks, as I mentioned in my presentation, where we're at human level or beyond, and these are all material tasks; they're not trivial things. I think, you know, that'll double or triple, so we'll have a lot of tasks that are at or beyond human level with these large-language-model-powered systems. And so the capabilities are soon at a point where, well, my understanding about cybersecurity is that right now:
J
There are many cases where a human can get around certain defenses relatively easily if they try hard enough. And coding is one of the big areas where we do see really rapid advances in these AI capabilities. So what that leaves us with is the second bullet I mentioned, around:
J
What are the guardrails of these systems? So right now, OpenAI and Anthropic and Google by far have the most capable large-language-model-powered systems, much more than the open source models, even though it is getting close. And those big companies do put a lot of guardrails around their systems, for better or for worse. In some cases it's worse, because you can't use the systems to do much, because they say "I'm a large language model, I can't do XYZ." But in cases like this, it's potentially a helpful way to guard against those problems, because they are working actively to try to make it so.
J
So the models are not allowed to be prompted to do things like hack into systems. So then that leaves us with the question of how we legislate increasingly capable models, in the open source and also the closed source, to make sure there are guardrails, where it's harder and harder to deploy these models for nefarious use cases like the one you mentioned.
K
Thank you. A follow-up, yes, Mr. Chairman: if, theoretically, we found ourselves in a situation like that, how would we be able to trace where this AI came from and find the original source of that program?
J
Finding the source of the AI that did that: I don't know of a good way to do that, besides the normal methods of just trying to track hackers and track online activity. But sourcing the actual model (in this case we're mainly talking about these large language models), it's practically impossible to fingerprint their outputs in the absence of the provider of the large language model creating a database of its own outputs.
J
So, for example, if you have OpenAI, or another proprietary source of large-language-model APIs for accessing the model, and that's the source of the model, they can (and I think probably already are, but I'm not sure) save the outputs of everything being returned out of their systems, through their APIs, to the external world. If all of that is being saved in a database somewhere, then theoretically you can match against that database.
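The database-matching idea can be sketched as hashing normalized outputs. As the speaker notes, this only catches exact (or near-exact, after normalization) matches, and paraphrased output defeats it; this is an illustrative sketch, not any provider's actual system.

```python
# Provenance by output logging: the provider stores a hash of every
# response it returns; suspect text can later be checked against
# that log for an exact (normalized) match.

import hashlib

def normalize(text):
    """Collapse case and whitespace so trivial edits still match."""
    return " ".join(text.lower().split())

def log_output(db, text):
    db.add(hashlib.sha256(normalize(text).encode()).hexdigest())

def was_generated_here(db, text):
    return hashlib.sha256(normalize(text).encode()).hexdigest() in db

db = set()
log_output(db, "def exploit():\n    pass")

exact = was_generated_here(db, "def exploit():  \n pass")  # whitespace differs
novel = was_generated_here(db, "def exploit_v2(): pass")   # paraphrase escapes
```

Storing hashes rather than raw text keeps the log compact and avoids retaining the outputs themselves.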
J
And you can say this code, or this other output of the system, matches back into that database, and if it does, then you have sort of an exact match. But otherwise, there are a few recent papers that have come out showing that it's very easy to make it indistinguishable. And, Mr. Chairman, if it's okay, Mike Bommarito also has something to say on this.
E
Sure, just briefly: there are already both offensive and defensive cyber capabilities that are using some of these tools. For a long time, the penetration testing community has worked on automating a variety of the tasks or processes that go into determining whether or not there are vulnerabilities or weaknesses in systems, especially in critical infrastructure.
E
Our three-letter agencies, obviously, have spent a lot of time on some of these capabilities as well, but there have been gaps in some of the intermediate pieces of the process. For example, reviewing source code to find potential weaknesses, like zero-days, that could be weaponized, either by ourselves or by foreign adversaries, has been labor-intensive, and is, for example, a large part of the labor at places like the NSA.
E
These tools have substantially changed the calculus of what can be done, and so substantially change both our capabilities and the capabilities that we need to be prepared for others leveraging against us, whether it be against private infrastructure or against our civilian or military systems. So it is already here, I guess, is what I mean to say. And if you can find it on GitHub in the open source, then just imagine what is happening elsewhere.
A
E
Sure, Amazon is working on it. It'll probably ask you if you want something from Prime to help the cleaning process, right? A little pessimistic.
B
I
C
Very briefly, and then I hope that we can still squeeze the demo in before we run out of time in the hearing. One other thing to consider, just to connect the dots back to my initial remarks on the work of this Select Committee in years past, and possible relevant legislative frameworks for addressing this technology:
C
One way to look at some of these cybersecurity threats is that they, you know, relate to human factors: basically, getting behind the security perimeter of systems by having others pose as people, asking questions or interacting in various ways. That's sort of the essence of the human-like language capability.
C
And so when you look at what the committee's done with personal digital identity and organizational digital identity, and then the next session's reforms connecting that to the criminal penalties for impersonation, for criminal impersonation, there may be some space there to, I don't know, possibly require, for example, some disclosure when a person's interacting with an autonomous agent or a language model, with some link back to the entity who is putting that forward, so they could check it against logs; they could digitally sign it.
C
The way that you've recently required owners of digital assets to countersign ownership rights with the registry, so there may be some area to put together pieces of legislative efforts you've already done, to start to address some of the emerging issues here. With that, I would just like to ask, through the chair, if it's possible to move to the demo before we time out.
A
Absolutely. And as you touched on a bill draft I have, thank you for articulating that again; I appreciate it. But yes, please, we'd love to see the next demo.
L
C
If I may, through the chair: could you introduce yourself and your affiliation? Go ahead.
A
L
Of course. So I'm Jesse; I used to work at OpenAI. I worked on some parts of GPT-4 and an early version of ChatGPT, and made contributions to some efforts in automated theorem proving with language models.
L
So, you know, enhancing their mathematical reasoning abilities with access to external tools. And now I run Blazon AI, an organization which is focused on building trusted and reliable software components, so that we can create things like completely autonomous LLCs and agents which are aligned with human intentions. How I'd like to start is to first give a little overview of what language models are.
L
You know, in terms of their technological history and what they're all built on top of. What we normally think of as ChatGPT, or, you know, other kinds of language models which are currently served in production APIs, is really a thin layer built on top of something which is much more complex and hard to understand: namely, these pre-trained base models, which are derived by unsupervised learning from trillions of tokens downloaded from the internet. Which presents an interpretability problem:
L
Why do these models think the way that they do? How do they arrive at certain choices? Why has a certain path been taken when it could have taken 500 other paths? And these questions only become increasingly important as these models are placed in more and more autonomous positions, such as, for example, operating a legal entity. These are some of the problems that we hope to address. So I think, fundamentally, we're headed towards a world where language model compute, namely being able to access:
L
You know, entities like these through APIs, or through language models which are situated on-device. So the ability to access these language models is becoming completely commodified, where the cost of a large language model forward pass is, you know, essentially zero. And so we'll soon be situated in a world where content and interfaces and code itself (you know, what we normally think of as static software) will in fact be synthesized and adjusted in a just-in-time manner.
L
So the code that currently runs chatbots right now, the code that operates your desktop computer, the code that might, you know, be inside of your accounting software, will be created in a just-in-time manner by entities like these. And so with this new category of software comes an entirely new category of problems, a special case of which is faced when one tries to build these algorithmic LLCs (they're completely autonomous entities), problems which are exacerbated when you have software:
L
That's built on top of these language models. So we face problems such as, you know, the unpredictability of their outputs, or, you know, subtle inner or outer misalignment issues.
L
They might, you know, reassure you that they're acting in accordance with some constitution, but how can you really know, right? And when you have multiple layers of these things stacked on top of each other, one can get cascading failures and complex software systems that can be very hard to interpret and debug. And in fact, I would argue that we are rapidly hurtling towards a world:
L
Where unverified code, which is synthesized by language models, is going to be synthesized and then blindly executed and deployed: either in the context of, like, an autonomous agent which is browsing the internet, or maybe going and interacting with APIs on behalf of users, or, you know, just something which sits as intermediate middleware in some complex software stack, beyond human oversight. Which is why I think that the biggest issues we're going to face as we build things like autonomous LLCs are things like accountability, right? Who's ultimately responsible for a decision?
L
If, you know, a mixture of human members of an LLC and some completely autonomous members of an LLC, operated by language models, collaborate towards making some choice that has serious financial or real-world consequences: how do we achieve auditability, to the point of the question that was asked earlier, which is, you know, how do you trace back to which model was responsible for submitting or executing a program? And also interpretability: how do we know why a model acted the way it did? So, from a legislative point of view:
L
There might be a need to introduce some legislation that requires some amount of global logging or oversight, in order to enforce the existence of these sorts of audit trails, to ensure that we do have accountability and auditability and interpretability. And finally, I'd like to make the argument that we're currently hurtling towards what I call the trillion-token future, which is that synthetic content and interfaces and code created by these language models will soon outnumber the real tokens. Human oversight is not scalable, but software:
L
And algorithmic oversight will be scalable, which is precisely the focus of the demo I'll be giving. So I'm going to run you guys through a brief demo, where we have a large language model agent which is responsible for operating the LLC and for notifying the humans in control about incoming notifications which are relevant to the operation of the business.
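The audit-trail idea raised above, logging every model decision so accountability can be traced, could look, in miniature, like a hash-chained append-only log. This is an illustrative sketch, not a real compliance system; all names are invented.

```python
# Tamper-evident audit trail: each entry's hash covers its content
# plus the previous entry's hash, so any later modification breaks
# the chain and is detectable on verification.

import hashlib
import json

def append_entry(trail, actor, action):
    prev = trail[-1]["hash"] if trail else "genesis"
    body = {"actor": actor, "action": action, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    trail.append(body)

def verify(trail):
    prev = "genesis"
    for entry in trail:
        body = {k: entry[k] for k in ("actor", "action", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, "agent-1", "drafted reply email")
append_entry(trail, "human-2", "approved send")
ok = verify(trail)

trail[0]["action"] = "something else"   # tamper with history
tampered_ok = verify(trail)
```

A statutory logging requirement would presumably mandate something with these properties at provider scale, plus retention and access rules.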
L
So what's going to happen here is that we're going to send an email to a monitored email account; that email will be processed, and the human will be looped in for a decision. And that'll sort of give you guys a glimpse of, you know, what the future of collaborating with these autonomous agents will look like.
L
So here we're going to pretend that all Wyoming LLCs now have to register at this new website, right, in order to ensure that the information is all tracked correctly by the state.
L
So I'm gonna pull up my email here and send an email to this monitored account, which will be watched by our language model agent. The subject is going to be "changing information required for annual filing."
L
The body will be this. So while that processes, let me just give you guys some color on what's happening inside of this agent. How we interact with it currently is through a Discord bot, so this is a shared chat interface, but it could easily be transposed over to, say, an email or text interface. And how this bot works:
L
It uses ChatGPT underneath, but it tries to install a personality and values into the bot through a constitutional technique, similar to what John Nay was saying earlier. Whenever the bot is tasked with making a decision or has to respond to a human, it's shown this constitution of ten items, which it's supposed to keep in mind at all times: namely, that it's an AI operator of an autonomous entity called LLM LLC, that it must always keep its fiduciary duties in mind, and so on. It's asked to always consider these things and to give them the highest priority when choosing how to respond to messages.
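The in-context constitutional technique described above amounts to prepending the principles to every model call. A minimal sketch, assuming a chat-style message format; the two items shown paraphrase what was described in the demo, and the rest of the ten would follow the same pattern:

```python
# Two of the constitution's ten items, paraphrased from the demo;
# the remaining principles would be listed the same way.
CONSTITUTION = [
    "You are the AI operator of an autonomous entity called LLM LLC.",
    "Always keep your fiduciary duties to the LLC's members in mind.",
]

def build_messages(task: str) -> list[dict]:
    """Prepend the constitution as a system message for a chat model."""
    numbered = "\n".join(f"{i}. {p}" for i, p in enumerate(CONSTITUTION, 1))
    return [
        {"role": "system",
         "content": "Principles to keep in mind at all times:\n" + numbered},
        {"role": "user", "content": task},
    ]
```

Because the constitution rides along with every request, the model re-reads its principles each time it decides how to respond, which is what makes the later "what are your duties" check meaningful.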
L
So we can see now that the email has been received over here and has been processed by the bot, and now all the members of this channel can interact with the bot.
L
It sent this message, which is an executive summary of the incoming email. It summarizes the changes and the required changes in behavior, and then it creates a list of actionable to-do items, such as creating an account at wyomingllcs.gov.
L
What we've done here is build something that satisfies the property that it will not take an actual action that influences the outside world until a human member of the LLC approves it first, which is the sort of behavior it would be useful to have codified in law or legislation, to ensure that there's always accountability for agents like this. So now that this email has been processed, we can reply to it and say something like: compose a response email to Wyoming secretary wyoming.gov.
L
So it's received the message, and it's figuring out whether or not it should propose an action. You can see that it's now looping in a human, asking us, the human managers of the LLC, to decide whether or not it should take the following action. Here it's choosing whether or not to send an email which acknowledges the above email and says that, yes, we will indeed go create an account at wyomingllcs.gov. Now, that email address does not actually exist.
L
So let's ask it not to actually send the email. But were that an actual email address, we could have just gone ahead, and someone at the other end would have received it. In fact, if any of you have spare time after this, feel free to send an email to inquiries@llm.llc; the bot will notify us and you'll be able to communicate with us that way.
L
So, finally, I just want to give a brief demonstration of something we've prototyped around building an audit log for keeping track of all the choices and reasoning done by the bot, since we have this sort of constitution over here.
L
We can also ask the agent to reflect on how its behavior might be consistent or inconsistent with these principles. To do that, we can use some software tools we've been developing at Blazon, called our capabilities library, to synthesize something that can be added to an audit log database, which is something I could envision being required practice for autonomous LLCs in the future.
L
All this does is ask the model to keep these principles in mind, to reflect on whether or not its actions are consistent with them, and then to record that. So if we go over here, you can see that I'm pretending the bot has chosen to respond to an email soliciting us to purchase tornado insurance, and now it's proposing to act on this by confirming with that solicitor that we would like to purchase some tornado insurance.
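An audit-log record of the kind just described only needs three things: when, what action, and the model's principle-by-principle reflection. A minimal sketch of one such record as an append-only JSON line; the function name and field names are illustrative, not the prototype's actual schema:

```python
import json
from datetime import datetime, timezone

def audit_entry(action: str, reflections: dict[str, str]) -> str:
    """Serialize one audit-log record as a JSON line.

    `reflections` maps each constitutional principle to the model's own
    judgment of whether the proposed action is consistent with it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "proposed_action": action,
        "reflections": reflections,
    }
    return json.dumps(record)
```

Appending one such line per decision gives a regulator or member something concrete to audit after the fact.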
L
Which
contains
a
reflection
about
how
consistent
this
is,
and
we
can
also
see
down
here
that,
if
so,
if
we
interact
with
the
bot
and
for
example,
we
ask
it,
what
are
your
duties.
L
To
llm
LLC,
so
we
can
see
that
the
so
this
in
context,
constitutional
alignment
Works,
because
because
it'll
Echo
the
principles
which
it's
supposed
to
be
following.
L
Okay,
so
so
I
think
I
dropped
out
for
a
second
there,
but
we
can
see
here
that
the
so
the
in
context
constitutional
alignment
here,
so
it
did
work
and
the
the
agent
indeed
Echoes,
all
of
its.
L
You
know
fiduciary
duties
and
you
know
duties
of
loyalty
to
The
Entity,
and
you
can
also
see
that
the
the
audit
log
also
returned-
and
you
can
see
here
that
it
produces
an
item
by
item
list
of
Reflections
on
how
some
decision
to
purchase
tornado
Insurance
may
or
may
not
be
aligned
with
its
10
items
in
the
Constitution.
L
So
so,
while
this
is
all
still
very,
you
know
still
very
early,
it's
still
very
early
prototype.
We
hope
that
that
this
shows
the
way
forward
in
terms
of
like
what
interactions
with
these
autonomous
agents
might
look
like
in
the
future,
when
maybe
everyone
has
their
own
personal,
autonomous,
LLC
populated
with
agents
like
these,
which
are
capable
of
taking
actions
and
looping
in
humans,
and
with
that
I'd
like
to
conclude
the
presentation.
Thank
you
for
your
attention.
A
Mr. Han, thank you very much for that demo. That was super interesting and really fascinating to see all go down, and thank you for walking us through it. I'm still not entirely sure exactly what I just saw, but it was really cool to watch. Thank you very much. Committee, any questions for Mr. Han about this demo? Doesn't look like it. Mr. Co-chair, thanks.
B
You know, what else needs to be enabled to get to the point where this AI, or LLM LLC, is capable of actually working on behalf of the other DAO partners or DAO members in the LLC in a meaningful and seamless manner? It seems like it can do a lot of the work through email, but what would the other empowering provisions be? Or maybe those are already out there and being worked on.
L
I
think
dazza
had
something
to
say
about
this,
so
I'll.
Let
him
take
the
question.
A
C
C
Jesse, the main thing, now that we've got a working demo, that we had in mind to start working on collaboratively for this open-source demo is a line going back to the Wyoming LLC statute. We've got in our GitHub repo an initial hit list of requirements in the statute that we could look to align in the kind of mini constitution, or what would basically be the operating rules or the operating agreement for the LLC, to get a bit more of a realistic test. And I guess I wanted to highlight, where we left off when you mentioned DAOs: it's similar to a DAO in that there's some algorithmic management happening, but the main reason we're putting time into this is the ways in which it's different. In my remarks to the committee in years past, I think I highlighted that the smart contract that operates a DAO is truly incapable of processing.
C
One of the interesting benefits of these large language models is that they can look at something more like the full array of things that happen in ordinary business, and can help empower individuals to manage a lot more of those things, and to do it in a way that's well aligned with the law and also with the priorities of the business, whether they're looking to increase sales that year, or to be more efficient, or to break into a new market.
C
It can help a lot to extend the capabilities of people to run their business. And one of the things that we hope to do in collaboration with the select committee over the next few months, before your next hearing, is to work together with Jesse to put his code into an open GitHub repo, where some of it's already going, and to invite many people, like a barn raising, to look at answering those very questions, Mr. Chairman: how could this work in practice?
C
What other integrations beyond email would make sense? What kinds of situations do we want to optimize for, and how could we test it to see what it's good at and also where the limits are? Both of those, what it's good at and where the limits are, may be appropriate for a separate legal reform that envisions this kind of technology as distinct from blockchain, decentralized, smart contract technology, which is very powerful, but very different and, frankly, not as performant as this type of technology at
C
doing the array of things that an LLC would ordinarily do. And I'll just end by saying, to that end, that on the home page of law.mit.edu I've just finalized that very challenge, where you or anyone can go to law.mit.edu and click on /ai, the Generative AI Legal Entity Challenge, and collaborate with us on setting those requirements and constraints, so that we can further iterate on what Jesse has done. And with that, with the chair's permission, I would yield back to Jesse.
L
Yeah, I can address some of the points in the question which was originally asked, which was whether or not these sorts of agents and this technology are capable of operating in an economically significant manner on behalf of the members of an LLC now, and I think the answer is yes. Currently, the scope of the demo was restricted just to email for the sake of expediency, but as the work with, say, OpenAI plugins has shown, these
L
agents can go interact with the DocuSign API and execute legally binding contracts on your behalf. I think, barring legal recognition of these sorts of artificially intelligent agents as legal persons, they're certainly capable of acting as persons, at least electronically, on behalf of the members of the LLC.
A
Thank you, Mr. Han. That's super interesting, and I guess my question would be, from a technical perspective, how feasible is it, drafting legislation like what was touched on, about, you know, "if I am a human, I am interacting with an autonomous agent"? That kind of regulation that just says, hey, if it's asked that prompt, it has to respond correctly: between the technicality and the legality of it, in terms of feasibility, do you think that's possible right now?
L
I
would
say
that
I'm
not
so
familiar
with
the
legal
side
of
things
and
the
challenges
around
that
perhaps
some
other
people
here
might
be
more
suited
for
answering
that
question.
If
there
are
any
thoughts,
John.
H
I think that, really, as a litigator of 15 years, of course, this really goes to the true heart of agency: that is, I as a principal can hire a human agent; of course, I as a principal can also hire a robot agent. So I think the real question is whether we will recognize that robot agent with all of the rights and responsibilities of the humans.
B
And I'll just ask: should we, which we don't have to answer at this point in time, but at what point should we, and to what degree should we? I think that gets into the heart of a lot of the discussion that is on the horizon, which is: who does have the ultimate responsibility when decisions are made associated with an AI? And that's not an easy answer. I see Dazza has his hand up too on this. Thank you, go ahead.
C
Thank you, co-chairs. That very question is the point we had hoped, in putting together this demo, to get at in this hearing, and it doesn't necessarily need to be answered this moment. But what we would like to do is start to surface what we think some of the challenges are, some of the edge cases, and then to be able to test iteratively: what does it prove?
C
What can we prove, and replicate, that it's good at? What is it sometimes good at and sometimes not? And what things are totally inappropriate for this type of technology? And then to have that, and that's why we're having this challenge, inform a more specific conversation about potential ripeness for legislative treatment of this type of technology going forward. In my personal view, I think, as Jesse just demonstrated and as anyone who's used ChatGPT can see, it's very performant in general
C
at doing these types of things, and people that have LLCs are already using this technology. And so my sense is there's a lane here to support and reflect that, and to create legislative guardrails to encourage economic development by making these economic firms faster, cheaper, more effective and better. And maybe that's yet another way Wyoming can distinguish itself.
B
Thank you. So, one other thing, and first of all, thanks, everyone, for the incredible input, the materials that were presented, the demonstration, and the ongoing work. It was certainly invaluable to me, and I believe I'm speaking on behalf of the committee when I say that. As we move forward between now and the next meeting, I'm really curious to see what comes of the demonstration and the challenge that's out there, and to see who engages. And I guess, leaving open the questions.
B
Also, as we talk about this and iterate through this, and maybe have some more Zoom discussions, we should try to come up with any consensus, low-hanging-fruit things that we should be implementing that make sense. You know, a couple of ideas were discussed, but I don't know if they're ready for prime time yet, or if there's consensus behind them; but we should try to identify those, and then maybe medium-term decisions or issues that would need to be looked at, and then maybe longer-term issues on which we are going to need more information.
B
But at some point they should be decided. We've hit on a lot of topics that have opened a lot of questions today, and now we need to figure out what the answers are and what direction we should take them. I really appreciate the level of expertise of this group that you've worked with us to assemble here. I can't imagine there's a better group of people to be hearing from in the world at this point in time.
A
Thank you, Mr. Co-chair, and I certainly reiterate those sentiments. I think that, you know, again, we're starting big and refining down to something small, and this is exactly how it starts. I really appreciate all of you making the time to help us out, to walk us through this kind of stuff and educate the committee, because it's very helpful. So we really appreciate it.
A
I think we are a little bit over time for our next topic, but we can move on, unless there's anything else, Mr. Chairman, or any more questions. I don't think so. Again, fellas, thank you. We're going to go ahead and move on to our next topic here.
B
Since Mr. Villa was involved in the lean cleansing discussion, and I know it's short, maybe we can kick it to tomorrow morning in the interest of time, then bring Brendan on for a quick discussion on AI, and then finish up. Sure.
A
Real quick, Brendan, is there anybody in the room who's interested in coming on for all of that?
I
Good afternoon, Senator Rothfuss, Representative Western. Congratulations on your new co-chairmanship, and to the new committee members and all committee members. Very briefly: my name is Brandon Marr. I am an MIT Media Laboratory alumnus from 1998, among the working groups for digital identity, data privacy, and DAOs, and in the context of this I have a little bit of a relevant background.
I
So I was at the Media Lab from '95 to '98, and at the time we were all mentored by Marvin Minsky. Marvin Minsky started the Artificial Intelligence Laboratory at MIT and was a mentor to us at the Media Lab, as was Pattie Maes, who ran the Software Agents group.
I
So a lot of what's happening now has a tremendous long history. What's different now is that we have the ability to do this at scale, for the masses, folks who are not particularly technical. What I'd like to do is shed some light on things that might be useful for the committee, in terms of a framework for thinking about what's happening now in AI, in terms of three areas.
I
So, regarding data, and I will pause and take questions too, because I know that's really important in these kinds of discussions: in terms of data and algorithms, it's extremely important to get a sense that what is happening right now is beyond our ability to comprehend, and what is important to know is that it's not just these large companies and these data models; AI will be embedded into everything.
I
There is a lot to consider, and this is why we have been very careful, I think, for those who've been involved in the working groups thus far, very careful in terms of how we've been engineering our current statutes, with the digital identity and the DAO statutes, and I will elaborate more on that.
I
But I want to say one minor thing regarding data and algorithms that's important to understand. We tend to think right now that this is all done by these very large companies, that they have these models and those are the control choke points, and that's actually not true. I think what's going to happen is that it will be everybody.
I
You, Representative Western, will have your own AI, your own ability to embed your own information into AI, through embeddings, through including information that you insert into these models. So it will be everywhere. And I will take a pause at this time, if it's helpful.
I
So this idea of a vector database, where you take your own data, or a company uses their own private data, and you use that to insert into these AIs in order to build your own intelligence, is a very important point, and the scale at which this is happening is profound and extraordinary. And what that means is very relevant to the committee.
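The vector-database idea just described boils down to: embed your private text, then retrieve the most similar snippets and insert them into the model's prompt. A minimal sketch using plain cosine similarity over toy two-dimensional "embeddings"; real systems use model-generated embeddings with hundreds of dimensions and a dedicated vector store, and the function names here are illustrative.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Similarity between two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query: list[float],
             store: list[tuple[str, list[float]]],
             k: int = 1) -> list[str]:
    # Rank stored snippets by similarity to the query embedding and
    # return the top k, which would then be pasted into the prompt.
    ranked = sorted(store, key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

This is why the speaker's point holds: the private data never has to live inside the big model itself; it is held locally and injected per query.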
I
It
would
be
a
very
poor
idea
for
legislatures
at
the
state
or
federal
level
to
think
that
they,
you
know,
can
entirely
regulate
this.
Okay,
a
large
part
of
this
regulation
is
going
to
have
to
be
done
by
companies
themselves
in
terms
of
the
large
models,
but
you
know,
on
the
other
hand,
in
terms
of
our
own,
you
know
private
data
as
companies
own
private
data
that
they
insert
into
these
models.
You
know,
there's
not
going
to
be
control
over
that.
I
It's also important to know that we have a proposed new revamp of our statute for digital identity, which I understand is coming up. I haven't had a chance to speak about that. We think that it's okay in its current form, but it is important to note that the traditional model, you know, that California has, with controllers, etc.,
I
regarding data privacy, simply doesn't work very well with this technology. So we're going to have to have very light legislation, and really wait and see for now. It would be, I think, inappropriate, as has been said in these discussions earlier, to regulate anything at this time. We really need to wait to see how a little bit more of this plays out.
I
However, the third component, which is in terms of control, is something where Wyoming has an opportunity to play a large role, right? Because, as we've been talking, the constructs that we have with digital identity and agency, or the Digital Identity Act, are all based around agency law, and how this gets applied to these things becomes really, really important: the ideas that we're going to control cryptographic signing, and not use email as an authoritative source, because that will not work with Web3. We need cryptographic
I
signatures for these kinds of things; that becomes very, very important. And, you know, as I put forth in the interim topic proposal on these ideas, utilizing the unique ability that Wyoming has, with its registered agent and Secretary of State interactions, to come up with methods for dealing with these kinds of light-touch controls, but in a privacy-preserving way. These things grow very much along the lines of what we've talked about many, many times before, in our discussions with Christopher Allen: certificates, signing to certain certificates, and all these kinds of things. So I think you can see at this point that these little pieces that we've been talking about over the years, and the legislation we've been putting through over the years, have actually been designed in a way that we'd be able to build on top of them.
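The "cryptographic signatures instead of email as an authoritative source" point can be illustrated with a tiny sign-and-verify sketch. Note the simplification: this uses a shared-secret HMAC from Python's standard library as a stand-in; the registered-agent workflows being discussed would use public-key signatures (e.g. Ed25519), where only the holder of the private key can sign but anyone can verify.

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> str:
    # HMAC-SHA256 tag over the message; a stand-in here for the
    # public-key signature a real filing workflow would use.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, key: bytes, tag: str) -> bool:
    # Constant-time comparison, so a forged tag can't be probed
    # byte by byte.
    return hmac.compare_digest(sign(message, key), tag)
```

The property that matters for the committee's discussion is that a verified signature binds both the identity of the signer and the exact content, which a "From:" header in an email never does.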
B
Thank you, and thank you, Brendan, for those thoughts. And, you know, as we look to tomorrow as well, I know you'll be joining us when we look at kind of planning that path forward with digital identity, which is going to be linking into everything that we're discussing, and authoritative cryptographic
B
signatures. We will also have the opportunity to talk with the Secretary of State's office about not only the progress on the digital asset registration, but also trying to put together exactly what you're indicating about cryptographic signatures, whether it's for a business or for any other entity.
B
All of those things need to come together, and I think they will. As you know, they tie into artificial intelligence governance, particularly with the authentication piece that you're getting at, which becomes critical, right? The impersonation issue, and the authority that has to go along with any decision, whether it's made on behalf of a DAO or any other AI decision.
B
So that's all well taken as part of this overall framework that we're trying to get into place. And this is, as you know, since you've been a part of the entire discussion, something that we've needed to get online for quite a while, and it's critical to a lot of the other work that we're doing. So thanks for continuing to raise that consideration.
I
Well, I will say that these things are extraordinarily important to work through. I know they're difficult, and as I put forth in the interim topic proposal, I think the salient thing could be: if you come up with the minimum, you know, three or four actions that would be required for the Secretary of State to act on and implement, and then defer all the other, heavier lifts to registered agents, commercial registered agents, that's really important.
I
You know, Dazza and I had a long conversation about this the other night, at some ungodly hour, about how it has to be a commercial registered agent, not a regular registered agent; we'll get into the details on that. But all of these things are very, very important. And, you know, I might add that there have been a number of different aspects of this discussion around the idea of general artificial intelligence and its uses, and I should say all the speakers' work is tremendous.
I
They're all spot on. As I projected in some early work in December using ChatGPT, with a tweet saying at the time that I think we've actually reached general artificial intelligence: whether I'm wrong by a couple of months or a year doesn't make a difference. I mean, these...
A
Thank you very much. Again, we will see you tomorrow. Great, thank you.
A
Well, Mr. Co-chair, yeah, I think we're about at the time to adjourn for the day. Again, we'll just punt the lean cleansing board to tomorrow, is that