From YouTube: Agency, Identity, and Knowledge Transfer
Description
A thought experiment, started on Twitter at https://twitter.com/rhyolight/status/1129047857250484224
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So I was just presenting this little thought experiment, and I'm going to walk through it so you don't have to read it. I can get the thread back up if you do want to read it.
So, let's say you've got an agent. Let's say you have an environment, just a 2D environment, and then we've got an agent; this is the center of the agent, or whatever. This agent has some sensors. It's got little legs, I don't know; it has some way to interact with the environment.
We can make them simple, something like that, and let's scale them down a little and say this is our guy.
So what does an agent want to do? You have this natural curiosity, right? I think life forms have this natural curiosity that they're just pre-programmed with, I don't know, but they have an incentive to explore.
So what you could do is program that in somehow, if you can imagine an agent having a representation of space that it builds over time, which is what we talked about.
You know, if you can program random movements through the space, then you can start to get a sense of the space. And if you added a system for goals and rewards, and you made it a goal so that it felt pleasing, so that it was good to explore more space, right. It's good for this thing. It feels good. It gets a reward when it explores more space.
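A minimal sketch of that setup in Python (the grid size, action set, and reward-for-new-cells scheme are assumptions for illustration, not anything specified in the stream): the agent moves randomly and gets a reward each time it reaches a cell it has not visited before.

```python
import random

# Toy 2D environment: a bounded grid. GRID_SIZE, the action set, and the
# "reward for reaching a new cell" scheme are assumptions for illustration.
GRID_SIZE = 10
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

def explore(steps=1000, seed=0):
    rng = random.Random(seed)
    pos = (GRID_SIZE // 2, GRID_SIZE // 2)   # the agent starts at the center
    visited = {pos}                          # the agent's growing map of space
    total_reward = 0
    for _ in range(steps):
        dx, dy = rng.choice(ACTIONS)         # purely random movement
        pos = (min(max(pos[0] + dx, 0), GRID_SIZE - 1),
               min(max(pos[1] + dy, 0), GRID_SIZE - 1))
        if pos not in visited:               # it feels good to explore more space
            visited.add(pos)
            total_reward += 1
    return total_reward, len(visited)

if __name__ == "__main__":
    reward, covered = explore()
    print(f"reward: {reward}, cells covered: {covered} of {GRID_SIZE * GRID_SIZE}")
```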
So now, let's say this agent has totally learned its environment, right, and we take that agent and we move it. Let's move it to another environment. (Thanks for the follow, Brandon; I'm talking about agency and identity right now.) So let's say we have another environment over here. Let's say it's the exact same environment, right. Okay, so I'm going to copy this and paste it over here; let's make it a little smaller.
So my point gets across okay; the exact details are irrelevant. But let's say we take this agent over here and we move him to a new environment. So now we're in a new environment, but we say: hey, look, you don't get these sensors anymore. You've got a different sensor setup. We're going to give you... you're going to be like a pyramid, right, and it's going to have wheels, and it's going to be able to go this way.
And that way, with the wheels underneath it, and your sensors are going to be like: there's an IR sensor over here, and there's a camera over here, and there's a lidar, I don't know. But the point being that what you used to be, that robot, this thing (oh shoot, I lost the original one), the thing you used to be, with the spider legs: you left that behind.
Along with that sensor set, everything that you learned as a spider in this world: once you detach the agent, even the intelligent system, the model (because this thing has a model of the world, right), and you move that model over here, it loses everything. It loses this whole... I mean, it still exists; it can still have a representation of that. But now you've got this new sensor set over here. So this guy, let's make him a little bit smaller too.
So while he bounces around, his goal could still be the same thing: to explore, right. And in the last environment, once you learn a little bit of space, you get better at it, and you're like, oh, I know how to explore. You would assume you could transfer that knowledge of space from the first environment to the second, but because we've moved to another sensor set, this guy has to relearn everything. He can't apply this knowledge; it doesn't apply. He has to build it up over time, just like we did. He has to build it up by bouncing around randomly and then being like, oh, I realize I can explore, this is how you explore, and so on, and he has to learn all of that in an entirely different way. And these representations are not compatible, right; they're incompatible.
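To make the incompatibility concrete, here is a toy sketch (the observation formats are invented purely for illustration): the spider agent's learned model is keyed by what its legs sensed, so every lookup made from the wheeled robot's IR/camera/lidar observations misses, and the old knowledge is unreachable even though it still exists.

```python
# The spider agent's learned model, keyed by what its legs sensed. The exact
# observation formats here are invented purely for illustration.
spider_model = {
    # leg contact pattern (8 legs) -> learned value of being in that state
    (1, 1, 0, 0, 1, 1, 0, 0): 0.8,
    (0, 0, 1, 1, 0, 0, 1, 1): 0.3,
}

def value_of(observation, model, default=0.0):
    """Look up what the agent already knows about this observation."""
    return model.get(observation, default)

# An observation from the new body: IR distance, camera brightness, lidar ranges.
wheeled_observation = (0.42, 0.9, (1.2, 3.4, 0.7))

# The old knowledge still exists, but it is unreachable from the new sensors:
print(value_of(wheeled_observation, spider_model))  # 0.0 -- nothing transfers
```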
That's a hard thing to understand, I think, when you're thinking about intelligent systems. A lot of people brainstorm about the idea of brain transfer, right, and they say: when can I upload my brain to a computer and be able to, you know, wake up in an android body or something, at some point in the future? First of all, that wouldn't be you at all. Whatever wakes up in that android body would not be you. I don't know what it would be.
It's not you. And the way that you would get this transfer to work would be to have a really detailed associative map between knowledge of this sensor set and that sensor set, and some really, really detailed information about how to translate knowledge learned using one sensor set into knowledge learned using a different sensor set. (Hey, Michael Jolly, the man himself. I've watched your stream before; thanks for following, I appreciate it.)
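As a rough sketch of what that translation would require (entirely hypothetical; the stream only says such a map would be needed, not how to build one), you would need a correspondence that maps each observation in the old sensor space to the observation the new sensors would produce in the same physical situation, so the old knowledge could be re-indexed:

```python
def translate(spider_obs):
    """Hypothetical correspondence: the wheeled robot's observation of the same
    physical situation the spider observed. Building this map is the hard,
    unsolved part of the thought experiment."""
    raise NotImplementedError("requires a detailed map between the two sensor sets")

def transfer_model(spider_model, translate):
    """Re-index knowledge learned with one sensor set into the other's terms."""
    return {translate(obs): value for obs, value in spider_model.items()}
```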
Not even if there was a continuation of conscious experience... actually, we're not even talking about consciousness at all; consciousness doesn't have anything to do with this problem. This is a simple problem of intelligence: an intelligent system learning a space here and then being detached from its sensor set. That's the thing: when an intelligent system builds up a model of reality, it has to use some type of sensor set to do that, and the model that it creates is directly attached to that sensor set.
It cannot be detached from that. That being said, you could certainly have an agent over here with the spider legs that learns an environment just like we described, and then you put it in a completely different environment and you give it the same spider legs; it will know how to explore, right. So in this case, if this agent was stupid at first and didn't realize what space was, it's just randomly moving and we're rewarding it for exploration, okay; but after a while, it will realize:
Oh, this is how I move through space. And it'll figure out how to explore space, right. In this case, if we move this guy to another environment with the same sensor set (let's make it even clearer what we're doing here), if we move this guy to another environment, bear with me here, all right.
A
So
it's
the
exact
same
the
exact
same
one
and
he's
going
to
immediately
apply
his
knowledge
of
space
that
he
learned
using
his
sensors
in
the
previous
environment
and
be
like
oh
I,
need
to
maximize
space
exploration
route
right,
so
it'll
know
how
to
achieve
its
goal
much
faster
because
it
already
is
learned
about
space,
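For contrast, here is a sketch of that same-sensor-set case, reusing the grid setup from the earlier snippet (again my own illustration): the one thing the agent actually learned with this body, "prefer moves into cells I haven't visited yet", carries over unchanged to a brand-new environment.

```python
import random

GRID_SIZE = 10
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def explore_with_prior(steps=1000, seed=0):
    rng = random.Random(seed)
    pos = (GRID_SIZE // 2, GRID_SIZE // 2)
    visited = {pos}
    total_reward = 0
    for _ in range(steps):
        # Where each action would land (clipped to the grid).
        moves = [(min(max(pos[0] + dx, 0), GRID_SIZE - 1),
                  min(max(pos[1] + dy, 0), GRID_SIZE - 1)) for dx, dy in ACTIONS]
        # The transferred knowledge: prefer cells not yet visited,
        # falling back to a random move when everything nearby is known.
        unvisited = [m for m in moves if m not in visited]
        pos = rng.choice(unvisited) if unvisited else rng.choice(moves)
        if pos not in visited:
            visited.add(pos)
            total_reward += 1
    return total_reward, len(visited)
```

Because the sensor and action sets are identical, nothing has to be relearned; the heuristic applies from the very first step, which is why the agent reaches its goal much faster in the second environment.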
And this brain transfer idea is way up here; it's really, really, way, way far off in the future. Even if it's possible, and even if it were to happen, it would not be you.
Nick Neil says he agrees, but man, I've had that chat so many times about consciousness. I still don't think any of this has to do with consciousness. I feel like you can have the discussion about consciousness almost orthogonally to intelligence; you don't have to have it here.
Oh no, Alabama! Oh man. Let's see: I'm from Missouri, I'm from the Midwest, so I am empathetic.
Okay, so that was the discussion I wanted to have. It was based off of a tweet thread that I did yesterday.