Everything is a chain of system contexts. Okay, let's do it. So where are we at? We're writing some Python, of course.
Let's just go ahead and put it in its own... should we do this? Yeah, let's just go ahead and pull this out into its own file right now, because we have way too much going on, let's be honest. So let's pull this into its own file.
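Roughly this shape, as a sketch; the file name, class name, and contents are made up, since the real code isn't in the transcript:

```python
# system_context.py -- hypothetical new home for the class being pulled out
class SystemContext:
    """One link in the chain of system contexts."""

    def __init__(self, name: str):
        self.name = name
```

And then back in the original file it's just an import instead of an inline definition: from system_context import SystemContext.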
Well, what if we don't? Are you serious?
So the name... says... maybe we'll just leave that as the TODO there. Okay, so, right, because then we won't grab the name from that. So that's kind of the only comment we really need there, because it's pretty obvious what's going on there, right? Well, ideally we would do it this way. No, okay, let me just say that that's not obvious enough. Okay, so ideally we would have load not setting properties on the loaded classes.
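A minimal sketch of the distinction, with a hypothetical load helper; the point is that load should return configured instances rather than mutating the loaded class itself:

```python
class SystemContext:
    """Hypothetical loaded class; stands in for whatever load() pulls in."""

    def __init__(self, name: str = "unnamed"):
        self.name = name

def load_bad(cls: type, config: dict) -> type:
    # the pattern being rejected above: setting properties on the class,
    # so every instance and every other caller sees the mutation
    for key, value in config.items():
        setattr(cls, key, value)
    return cls

def load(cls: type, config: dict):
    # ideally: build an instance and configure that, leaving the class alone
    instance = cls()
    for key, value in config.items():
        setattr(instance, key, value)
    return instance

ctx = load(SystemContext, {"name": "overlay"})
print(ctx.name)              # "overlay"
print(SystemContext().name)  # still "unnamed"; the class was not touched
```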
Links live within the overlay; don't get it wrong. Okay, I keep screwing that up: the links live within the overlay. So, okay, how do we turn the system context into time? And then how do we time travel?
Well, we know we can only travel forwards in time, that time is relative, and that the states of consciousness which we see entities in are described by the strategic-plan outputs, aka the conceptual layers, which we may be viewing in this case as states of consciousness: as if they were overlapping clustering models of, you know, things we would consider to be states of consciousness.
It's also just sort of a general... like, I think... what did I say? I think... this. What was it? So there's a... I have way too many notebooks now.
We have this loop where we're ingesting input data, right, and so for a conscious state to exist... consciousness... consciousness...
Okay, so: tick. On tick we have the present, the system context. We execute it on tock. We enter the future. We're constantly going tick, tock, tick, tock as we switch from what is effectively one system context to the next, if we look at all system contexts as if they were unique, right. So for that state of consciousness... for that state of consciousness... okay, so, oh yeah, there was something in here. So: time is relative. Time is relative.
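As a sketch, the tick/tock loop could look something like this; all the names here are illustrative assumptions, not the real code:

```python
from dataclasses import dataclass, field

@dataclass
class SystemContext:
    state: dict = field(default_factory=dict)

def tock(ctx: SystemContext) -> SystemContext:
    # executing the present context yields the future: a brand-new,
    # unique context rather than a mutation of the old one
    next_state = dict(ctx.state)
    next_state["t"] = next_state.get("t", 0) + 1
    return SystemContext(state=next_state)

ctx = SystemContext()      # tick: the present
for _ in range(3):
    ctx = tock(ctx)        # tock: execute it, enter the future
print(ctx.state)           # {'t': 3}
```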
Now... and they're not necessarily different things. It just depends, once again, on the context within which you are executing, right, because what is subconscious? What is a subcon... okay, so think about them as almost sub-flows. So slice the graphs. Slice the graphs based on what? So visualize... so visualize the system contexts as graphs right now.
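Something like this, as a sketch; the networkx graph, node names, and flow labels are all illustrative assumptions:

```python
# Treat a system context as a directed graph and slice out sub-flows
# (e.g. "subconscious" paths) by edge attribute.
import networkx as nx

G = nx.DiGraph()
G.add_edge("ingest", "analyze", flow="conscious")
G.add_edge("ingest", "reflex", flow="subconscious")
G.add_edge("analyze", "control", flow="conscious")
G.add_edge("reflex", "control", flow="subconscious")

def slice_flow(graph: nx.DiGraph, flow: str) -> nx.DiGraph:
    # keep only the edges belonging to one sub-flow
    edges = [(u, v) for u, v, d in graph.edges(data=True) if d["flow"] == flow]
    return graph.edge_subgraph(edges)

subconscious = slice_flow(G, "subconscious")
print(list(subconscious.edges))  # [('ingest', 'reflex'), ('reflex', 'control')]
```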
So what about Alice? What if Alice is ephemeral? Well, she's only ephemeral because her state is distributed, and so it's called on demand. So...
Because effectively we're just looking at the data, and we're seeing patterns, and we're identifying which patterns map to analysis before feedback or before control, and which patterns map directly to control, right. So: conscious versus subconscious.
So what was the... what was the thing? What was the thing? Okay, so the point of this was... the point of this was... oh, and I forgot to write this part. So: snapshot, right. Remember... so Alice's... remember, so, once again, on the ephemeral nature of Alice, right.
So... so Alice is... she's... you know, all of a sudden, boom, the world comes alive, right, and Alice is there. Poof, she exists. Now her job... now she... she discovers what her job is and the context within which she is operating, right, and then she goes about executing that function. So.
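As a sketch of that lifecycle, with made-up names throughout:

```python
# Hypothetical ephemeral-agent pattern: the agent pops into existence,
# discovers her job from the surrounding system context, executes it, vanishes.
def run_agent(system_context: dict) -> None:
    # poof: the agent exists; her state is distributed and pulled on demand
    job = system_context["job"]                   # discover what her job is
    operate = system_context["operations"][job]   # and the context she operates in
    operate(system_context)                       # go execute that function

run_agent({
    "job": "greet",
    "operations": {"greet": lambda ctx: print("hello, I am Alice")},
})
```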
So because... oh yes, yes, yes, okay: optimal communication. Optimal communication! So there's a... so, hypothesis: optimal communication exists when entities are in... entities... or no. No, don't even worry about entities right now. Don't even worry about entities right now, because entities are sort of a sub...
But time... okay, so the states of consciousness. So the hypothesis is that optimal learning occurs when communicating agents are in aligned states of con... or, so: optimal communication, which is effectively optimal learning, occurs when agents are in aligned states of consciousness, right. So effectively when you have aligned system contexts, right. So what is an aligned system context? Well, an aligned system context is when you have shared states of consciousness. This is the hypothesis, right. So here's one way that we can determine alignment: the analysis of states of consciousness. And so, you know, this is basically saying...
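One cheap sketch of that analysis; treating each agent's states of consciousness as a set, and the Jaccard overlap as the alignment score, is my assumption here, not something settled above:

```python
def alignment(states_a: set, states_b: set) -> float:
    """Jaccard overlap of two agents' states of consciousness, in [0, 1]."""
    if not states_a and not states_b:
        return 1.0
    return len(states_a & states_b) / len(states_a | states_b)

alice = {"ingesting", "analyzing"}
bob = {"ingesting", "controlling"}
print(alignment(alice, bob))  # 1 shared state out of 3 total -> ~0.33
```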
Okay, so: is this environment conducive to communication? Right? Is this environment conducive to giving me the data that I need to optimally learn? Right, because what is the goal of communication? Well, the goal of communication is... I mean, you're communicating for some purpose, right, so you're...
If you already knew, you wouldn't have to communicate, right. So you're learning what it is that needs to be done, right, and then... and then doing it is also a form of communication as well, because the other party would, you know, view that, right.
So if two people are working together, right, they're both moving a bunch of boxes, they don't have to talk to each other to know that they're... because the communication becomes visual, right. So, remember:
I think I misspoke about context and language: it's not that we can have, you know, communication without language. It's that communication can have more than one language involved, and all of those languages exist within the context, and those languages are patterns.
So states of consciousness are like the lens through which the agents view the world, and if they're viewing the world through the same lens, the language becomes more effective, because they effectively have... because the state of consciousness is also a language, in a way, because it provides the shared frame of reference. Or no, what is it?
Okay, there's something good there. Okay: so the patterns... patterns, the states of consciousness.
The more shared states of consciousness... a shared state of consciousness is like anything... so we're... okay. So basically what we're gonna do is we're gonna say, you know, a language... like, okay. So right now what we're talking about is our communication, right, and our hypothesis is that we can transform, you know, any system state from one state to the other, provided we have adequate learning, aka optimal communication, right. So basically, you know, like: if we work together, we can do anything, right.
So that's... that's the point. And so can the computer, right. So can Alice, and she can work with us. So...
We can almost look at this as the shared states of consciousness. So if we see one entity... so one chain of system contexts is one entity, but... if both entities' chains of system contexts put them in a classification, you know, some kind of unsupervised-classifier bucket, right, we're gonna call that a... you know, we can call that a language, we can call that a... well, okay, so we're observing a pattern in the... okay, we're observing... forget the entity: we're observing a pattern in system contexts.
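A sketch of the bucket idea, assuming scikit-learn and made-up feature vectors for each chain:

```python
# Drop chains of system contexts into unsupervised "buckets": chains that land
# in the same bucket share a pattern -- call it a language, or a shared state
# of consciousness. The feature vectors here are illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans

chains = np.array([
    [0.9, 0.1],   # entity A's chain
    [0.8, 0.2],   # entity B's chain -> likely same bucket as A
    [0.1, 0.9],   # entity C's chain -> different bucket
])

buckets = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(chains)
print(buckets)  # e.g. [0 0 1]: A and B share a bucket, C does not
```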
Never mind, I don't know. I'm not... pretty sure... I'm not... pretty sure. So I don't know, there's patterns. So that's the point: AutoML will figure out the patterns, so... provided... it doesn't matter. Okay, so time... okay, goddammit, okay, fine, whatever. Okay, okay, I think I need to go... I need to go for a walk, all right. Okay.
All right, so 86... duh, okay. So let's just do return.
System contact... system context, dot dot base: "attempted relative import with no known parent package." Indeed, indeed you did. Okay.
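For reference, the usual cause and fixes, assuming a hypothetical package layout; the error means the module was executed as a top-level script, so the ".." has no parent package to resolve against:

```python
# The failing line looked something like this (hypothetical module names):
#     from ..base import SystemContext
# Running the file directly ("python context.py") makes it a top-level script,
# so there is no parent package for ".." to resolve, hence the error.

# Fix 1: use an absolute import from the package root instead.
#     from alice.base import SystemContext

# Fix 2: keep the relative import, but run the module as part of its package:
#     python -m alice.system_context.context
```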
Entry point: this is a new one. Oh wow, okay: type EntryPoint... this is not a fun one. What this one is, is: type EntryPoint has no attribute __annotations__.
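That one usually means the class declares no annotated fields, so __annotations__ was never created on it (older Pythons don't create it lazily on access). A sketch of the usual guard, with EntryPoint as a stand-in for the real class:

```python
class EntryPoint:  # stand-in for the real class; it declares no annotated fields
    pass

# On older Pythons, direct access raises:
#   AttributeError: type object 'EntryPoint' has no attribute '__annotations__'
# annotations = EntryPoint.__annotations__

# The usual guard: fall back to an empty mapping when the attribute is missing.
annotations = getattr(EntryPoint, "__annotations__", {})
print(annotations)  # {}
```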
This is just a horrendous name, so let's go fix it.