From YouTube: Tracking synapse usefulness with a “permanence”
Description
Numenta Research Meeting, Nov 13, 2019.
with Marcus Lewis
Tracking synapse usefulness with a “permanence”.
What's actually going on? I find that often with these neural network experiments, you get empirical results and you have stories of why they're working, and sometimes the story can be out of sync. So you're trying to figure out the right story, because that's the thing you can carry forward into later experiments, and my story for this kind of changed a little bit over time. So this all started with this paper, the L0 regularization paper.
This is a Z value that is somewhere between 0 and 1, and this is their way of having synapses randomly fail, as a way of seeing: is it okay if the synapse fails? Can it be pruned, basically? I thought it was a really nice paper. They used this distribution: this gate is between 0 and 1, and they came up with their own probability distribution. They called it the hard concrete distribution, where this random variable z is sometimes exactly zero, sometimes exactly one, and sometimes somewhere in between, and they get good empirical results. My only complaint with it is that it feels very invented, very cobbled together. It feels like they put something together and it worked, which is cool, but I would like something that feels a little more fundamental. It'd be nice if you could get it working with straight-up Bernoulli random variables.
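For concreteness, here is a minimal sketch of sampling such a hard concrete gate, following the construction in the L0 regularization paper; the default values of beta, gamma, and zeta are the ones suggested there, and the function name is my own:

```python
import numpy as np

def sample_hard_concrete(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, rng=None):
    """Sample a hard concrete gate z: exactly 0, exactly 1, or in between.

    A binary concrete sample in (0, 1) is stretched to (gamma, zeta) and
    clipped back to [0, 1], which puts point masses at exactly 0 and 1.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log1p(-u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)
```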
So, just very briefly, here: the gates are Bernoulli random variables, where you're training the probability of the gate being 1 versus being 0, and I've adopted the language of calling this parameter a permanence. Permanence is something that we've used in our models here, and I've kind of co-opted the term because it's conceptually quite similar, but I've added this notion of it being the parameter of a stochastic variable.
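As a hedged sketch of that setup (the names and shapes are my assumptions, not code from the talk): each synapse carries a permanence theta in [0, 1], and on every forward pass the synapse participates with probability theta:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, weights, permanences):
    """Forward pass with Bernoulli-gated synapses.

    Each synapse participates with probability given by its permanence,
    so training the permanence trains how useful the synapse is.
    """
    gates = (rng.random(weights.shape) < permanences).astype(weights.dtype)
    return x @ (weights * gates)

x = rng.standard_normal((4, 8))       # batch of 4, 8 inputs
w = rng.standard_normal((8, 3))       # 8 inputs -> 3 units
theta = np.full((8, 3), 0.5)          # one permanence per synapse
y = stochastic_forward(x, w, theta)
```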
I haven't prepared this for today, but if anyone wants me to go through it, this is the derivation showing that this is an unbiased estimator for the gradient, for figuring out how to change this parameter so that results improve. Just very briefly: other people who have written about this talk about it as something that just empirically works despite being a biased estimator, but in this case I make the case that it's not biased. For a Bernoulli random variable where you're changing the parameter, it is actually an unbiased estimator. I can talk about that with anyone who wants me to go deeper, but I figured I'd just put that out there.
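To give the flavor of that claim (my own illustration, not the speaker's actual derivation): for z ~ Bernoulli(theta), dE[L(z)]/dtheta = L(1) - L(0) for any loss, and when the loss is affine in the gate, the straight-through gradient dL/dz takes that same value at both z = 0 and z = 1, so averaging it over samples is exactly unbiased. A quick numerical check under that affine assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, w, x, target = 0.3, 2.5, 1.2, 0.7

def loss(z):
    # Affine in the gate z: the gated synapse w*z drives a linear output.
    return w * z * x - target

exact = loss(1.0) - loss(0.0)                  # dE[L]/dtheta = w*x = 3.0
z = (rng.random(100_000) < theta).astype(float)
straight_through = np.full_like(z, w * x)      # dL/dz at every sampled z
print(exact, straight_through.mean())          # both 3.0: unbiased here
```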
Okay, so as I understood it, this is theoretically appealing. You're treating all synapses as these probabilistic gates, and you're just figuring out: should this synapse fire, should this synapse exist? Should this one? Should this one? It all makes sense.
Correct, but I can also run experiments where I hold all the synapse weights fixed, either all at one or all at random numbers, and just train these gates, and the results are quite good. So I can do all of that. Today, though, I'm really more focused on the ones where the weights are trained and where we're training sparse networks, et cetera.
Like I said, this felt good, it works, the results were nice. But then I ran some experiments: I also tried a deterministic version that is a lot more like our classic permanences, in that it removes the stochastic part. I used the same thing I derived for Bernoulli variables, but instead, if the theta is above the threshold, the synapse is connected, and if it's not above that threshold, it's zero. I used this rule, and the confusing thing is that I was surprised to find it works. I had a theoretical understanding of what's going on with the stochastic version, because I derived that; this one is just cobbled together. It's strange. So, as I wrote here, the surprising result is that the deterministic approach works pretty well, despite being cobbled together.
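A minimal sketch of that deterministic variant, under the same assumed setup as the stochastic sketch above (the threshold value is my assumption):

```python
import numpy as np

def deterministic_forward(x, weights, permanences, threshold=0.5):
    """Forward pass with deterministically gated synapses.

    A synapse is connected iff its permanence clears the threshold,
    much like a classic HTM permanence; nothing is sampled. The
    permanence itself is still trained with the Bernoulli-derived rule.
    """
    gates = (permanences > threshold).astype(weights.dtype)
    return x @ (weights * gates)
```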
So I've found myself in a place where I needed a different story of what's going on here, and it actually gets a little bit into how you were just describing permanences: that they're like a record, a history of whether the synapse has been good.
That sounds good, but the fact that this one works made me realize that that's probably not the right story. So the new story I'm putting on this is that all these methods are doing something like having this parameter that's somewhere on the number line, call it a permanence, and that parameter is being pushed in one direction by a constant decay. Now, I didn't mention that: in these methods, typically what you'll do is have the parameter decay over time and rely on the error gradient to push it back up.
There are different ways to do this, but at the core, the algorithm is kind of doing this, and an interesting thing is that the permanence is not itself the average error gradient. It's more just a side effect of how the whole thing is being implemented. But something that was kind of strange here is that if the gradient is above this decay, the permanence is just going to keep increasing endlessly.
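A sketch of that core loop as I read it (an assumption on my part, not code shown in the talk): a constant decay pushes every permanence down, the error gradient pushes useful synapses back up, and nothing bounds a permanence whose gradient term consistently beats the decay:

```python
def update_permanence(perm, grad, lr=0.01, decay=0.001):
    """One step of the assumed core algorithm.

    `grad` is dLoss/dperm. The constant decay pushes the permanence
    down; a consistently useful synapse has a negative gradient that
    pushes it back up. Note the missing upper bound: if the gradient
    term keeps beating the decay, the permanence grows without limit.
    """
    return perm - decay - lr * grad
```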
This could be anywhere on the number line: this could be 42 and this could be 44, and the threshold could be anywhere. There's nothing essential about the numbers 0, 1, and 0.5 for this. But at the core, I think this is the right story for what's going on here. There's also the fact that the randomness, the stochastic part, does help in many cases. I've found that, and today's results show that the randomness is still needed.
So just to put this very concretely: one of the classic deep networks is the eight-layer AlexNet. It uses dropout in its top few layers: it drops out all the units with 50% probability. If you remove that and just train the network, it achieves 100% accuracy on the training set and zero percent accuracy on the test set.
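For reference, a minimal sketch of that kind of unit dropout (standard inverted dropout; this is generic, not AlexNet's actual code):

```python
import numpy as np

def dropout(activations, p=0.5, rng=None):
    """Inverted dropout: zero each unit with probability p at train time.

    Scaling the survivors by 1/(1-p) keeps the expected activation
    unchanged, so nothing extra is needed at inference time.
    """
    rng = rng or np.random.default_rng()
    mask = (rng.random(activations.shape) >= p) / (1.0 - p)
    return activations * mask
```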
And as I've discussed a lot in the past, and this team has discussed: there are multiple types of representation going on, and the concept of generalization, I mean, we call this generalization, right? I have a training set and I want to generalize to a broader dataset. Well, I don't think the brain does it by introducing randomness; I think the brain does it by having a more systematic way of discovering structure.
To put in place in my mind what's going on in the classic neural networks: I can now understand where you're coming from, that this helps, and I can see why. But from a brain point of view, I don't think that's what's going on. This is overcoming one of the limitations of simple artificial neural networks, as opposed to fundamentally discovering how brains generalize from training data to non-training data.
I totally agree, and I guess part of why I wanted to write this out was that it exposes its naivety: it's going to have flaws in certain situations, and its behavior in some ways is strange. Like I just pointed out, there's the case where it can just increase indefinitely if you don't do anything about it. That raises questions of: is this how we want to do permanences, if this is the core algorithm?
This is similar to some of the work on metastability, or metaplasticity, of synapses. There are actually two papers: one where the synapses are stochastic and have, like, a probability of dropping out that is proportional to their learning significance, although that is in a Hebbian, associative model, talking about memories; and one with multiple levels of metaplasticity, in the sense that the plasticity changes over time, which turns out to do the same thing.
It's again discovering that many of these things are, you know, homologous in some sense. What I wonder about is the decay, because it's such a big thing in our biological models, and it is a question for machine learning models how to keep all of its parameters in stasis. You pointed out the issue that the error could keep pushing the permanence up forever, and so the question is: how do you bring it back down when it runs away?
Biologically speaking, there are always limits to any powerful process; it is limited, and the brain is full of homeostatic mechanisms that make sure that a neuron that is, you know, overexcited changes differently than a neuron that hasn't been participating forever, which, you know, increases its firing. So in that sense, I wonder, because there are two different ways you could go with that. One is to scale the error at the individual synapse, sort of in proportion to its total contribution to the total error, for example, that gets propagated.
Then you would know whether to change it more or less. Alternatively, you could change the decay such that when you have too many synapses in the network, the decay is faster, so as to balance out any sort of over-accumulation of synapses and keep it sparse. I wonder how you think about that in the context of these models.
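A sketch of that second option, purely hypothetical, just to make the suggestion concrete: scale the decay by how far the network's connected-synapse count is over a budget, so over-accumulation speeds the decay up:

```python
import numpy as np

def homeostatic_decay(permanences, base_decay, budget, threshold=0.5):
    """Decay faster when the network has accumulated too many synapses.

    The decay is scaled by the ratio of currently connected synapses
    to a target budget, balancing out any over-accumulation.
    """
    n_connected = int(np.sum(permanences > threshold))
    scale = max(1.0, n_connected / budget)
    return permanences - base_decay * scale
```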
A very basic question, coming from my background, just to take it back: we're talking about this during training. All these scenarios, these options you talk about, are all of those basically introducing the stochasticity during training and not during inference? Correct. Okay, so this is all about the training procedure, about how you train the network.
Basic question; it may be that I just don't understand convolutional layers very well. When we think about vision, you take this picture: one assumption you can make, and it is an assumption, is that the statistics of any part of that visual image are going to be the same, and therefore you can copy weights, replicating them across space, which is what convolution does. But in a spectrogram, it's not clear to me that that property is true: that different parts of that spectrogram image are statistically similar to other sections.
The convolution doesn't necessarily have to be isotropic. It could be just one direction, you know, equally weighted across here, so you can learn the weights such that you could say: I'm looking at a particular, excuse me, a particular frequency in a range of amplitudes, or vice versa, because you can shape that convolution. Yeah.
Yeah, it's a four-layer network: convolutional, convolutional, fully connected, fully connected. But the top one is sort of different, in the fact that it's a linear classifier; the number of units is the number of classes. It kind of gets treated a little differently, so I've just drawn a box around it, because sometimes we focus in on the hidden layers and treat the top one a little differently. For example, we would never run it sparse.
New numbers, got it, got it; I see what you're saying. Okay, yeah. So these are numbers: the left side of this is numbers from the "How Can We Be So Dense?" paper. I've color-coded a lot of them yellow to point out that it's sort of unfair for me to try to compete against the yellow ones, because that wasn't really what the "How Can We Be So Dense?" paper was trying to do; it was mostly focused on the other layers. It didn't try for sparse connectivity in the convolutional layers.
Yes, yes. And so the main thing to compare to on the left side is the two grayish-blue numbers and, of course, the bottom one, the classification accuracy. Yeah. So on the right side of this, before I go into the numbers: I did two approaches. One, I did stochastic synapses, and second, I did deterministic synapses with permanences. Both of these have permanences, but...
I found I needed to introduce traditional dropout of units to bring the results up, to make up for all that, so I had to do both. I guess we'll start by looking at the fully connected layer, the one that is blue all the way across. The "How Can We Be So Dense?" paper fixed the sparsity at 40% and 10% and showed that it could achieve good classification accuracy with those. With this, when I aimed for similar accuracy, I've gotten it down further.
They then tried training against random noise, and to the extent that the network could not learn the random noise patterns, they saw that as the network being robust, in the sense that it learned what it needed to know but could reject noise. There are kind of two dimensions to that. That's the logic of the approach: you sparsify it down so it learns what's essential about the task given to it, but you don't want it to memorize noise.
Well, it's just a property of the existing networks that they find solutions in a subspace, and they sometimes will react to noise in a way that says, you know, "I've got high confidence that this is a cat," when it just looks like noise. Part of the argument is that when you have dense networks, that becomes possible, yeah, and with sparsity it's actually probably not.
To one point I'd started to make a while ago: as you pointed out, the number of parameters is dominated by the fully connected layer; most of the weights are there. But it's important to note that, if you look at the yellow rows at the bottom left, when the sparsity dropped from 44% to 15%, going from sparse to super sparse, the number of multiplies was barely affected.
The number of floating-point operations, et cetera: that's all real, but the thing that is important to look at is how the weights are distributed through the units. Have I essentially killed a lot of units, totally stripping them of all of their synapses, or have I actually gotten the sparsity of each unit down to something reasonable,
so the network is sparse in terms of individual units? Yeah. So, other information: on the left, I've plotted all of these layers, and I'm showing histograms of how many synapses are on each unit, per layer. The top would be the classifier, which you can mostly ignore; the bottom three are the interesting ones. And of course, I've drawn the stochastic and the deterministic in the two columns.
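A sketch of how those histograms could be computed (my own code, assuming the 0.5 connection threshold used above):

```python
import numpy as np

def synapses_per_unit(permanences, threshold=0.5):
    """Connected-synapse count for each output unit of one layer.

    `permanences` has shape (n_inputs, n_units); a synapse counts as
    connected when its permanence clears the threshold.
    """
    return np.sum(permanences > threshold, axis=0)

# One row of the figure: histogram the per-unit counts for a layer.
counts = synapses_per_unit(np.random.default_rng(0).random((256, 64)))
hist, edges = np.histogram(counts, bins=20)
```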
I've just gone with this naive approach of performing this decay on everything. I didn't do anything to make it shoot for a certain target. If I wanted, I could have come up with a way to say: I want every unit to have ten synapses. I could shoot for some goal; I could do something sort of adaptive.
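One hypothetical way to shoot for such a goal (not something done in the talk): replace the uniform decay with a rule that keeps exactly each unit's k largest permanences connected:

```python
import numpy as np

def top_k_gates(permanences, k=10):
    """Connect exactly the k largest permanences of each unit.

    `permanences` has shape (n_inputs, n_units); each column (one
    unit) ends up with exactly k connected synapses, enforcing an
    "every unit gets ten synapses" style target directly.
    """
    order = np.argsort(-permanences, axis=0)   # descending per unit
    gates = np.zeros_like(permanences)
    np.put_along_axis(gates, order[:k], 1.0, axis=0)
    return gates
```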
That's pretty much all my material. I was just laying out the question of stochastic versus deterministic. I don't have a single thing to say as far as "this one is right" or "that one is right"; there are different things that work. It seems to be some mix of permanences, some mix of keeping a history of how useful a synapse is, and some amount of sparseness that, mixed together, causes good things to happen.
There's a question, when it comes to this desire to reduce computational cost, of not just the synapses but also the training: to get away from, you know, double floating-point precision. It's very nice if these synapses are going on/off, and of course we can always make the argument that, you know, maybe a biological synapse doesn't need 32 bits.
We take all these things and then we mix them all up so that there's no concentration in one area, and then you've got to feed them through. That's not how we learn. You learn by, you know, fixating on something; we run multiple things through in small variations, and then we cut off and do something else. So that might reinforce the compartmentalization that you're talking about: only a certain section of the network is actually involved.
Now we're getting into an additional part of the theory there, Kevin. Now we're talking about putting in the whole reference frame component of it, because then you can attend to some part of, you know... The learning of something is a set of data in a reference frame, right? And when you attend, you're attending to one part of the reference frame, and therefore all the other parts of the reference frame are left alone, right? So there are multiple levels at which this gets compartmentalized. One is, like, the dendritic segments, the sparsity.
One of the things, you know, just to put on my straw hat: if you do these episodic trainings on particular tasks, on small variations, your learning rate could be relatively high during that, and then you kind of shut it down. I'm looking for some way to self-identify these categories and lock things down, not everywhere, but just in those areas where, yeah, we have confidence that we found something that has some commonality.
Right, although we could go down a huge, long rat hole trying to change the way people think about training, which ultimately I'm going to abandon. So that's my concern with that. Obviously it'd be better to get to lower precision without having to do the kind of stuff you're talking about. Yes, it was more of a question, right.
The question is: does sparsity automatically mean, even within the existing paradigm, that the error signal has enough variance that a lower-resolution accumulator will do? So if you test the slope of failure as you decrease the theta resolution, say from double to single to, you know, something 8-bit: does the slope of that decrease in performance change at a different level of sparsity, and how does that change?
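One way to run that test (a hypothetical sketch, not an experiment from the talk; `evaluate` is an assumed stand-in for a full train-and-test run): quantize the permanences to a given bit width and watch how accuracy falls off as the resolution drops, at each sparsity level:

```python
import numpy as np

def quantize(permanences, bits):
    """Round permanences in [0, 1] onto a uniform grid of 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(np.clip(permanences, 0.0, 1.0) * levels) / levels

# Hypothetical sweep: accuracy versus theta resolution.
# for bits in (32, 16, 8, 4, 2):
#     print(bits, evaluate(quantize(theta, bits)))
```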
We don't have to get that far, right? I think what we're saying here is: from where we are today, you know, we're over here and want to get someplace over there, and the first thing we can do is say, well... What we're trying to ask is how to go from these high-precision programs to low precision, and one of the first things we can say is: well, we think if we had sparsity, that's going to help, right?
Then we have active dendrites, or I should say segments on dendrites, really; that's another way of breaking things out that we're trying on some of these, right? And then we know that there's another one we need to do eventually, which is reference frames and learning objects as a complete thing, and that also helps: that means I only have to train on some part of the object representation at any one time. So I guess...
These are three steps along the way. Kevin suggested another one, which I would call sort of a side one: you know, go over here and try different tricks for training, something like that. You might call it a hack, but I don't think that's on this pathway here. So I guess the only question...
Fair, yeah, and let's put one more in between, which is this temporal-memory-style predictive one, in between. Well...
Yeah, so I would agree if we were doing streaming types of analytics or something like that, but we're not really doing that much. I mean, you could put it, you could say, you know, the way we've always talked about it here, the way I'm writing about it, there are sort of two versions of this. Here you have, you know, a predictive model: this is a predictive model, this is a predictive model. This is the temporal predictive model; this is the sensorimotor one, yeah.
Yeah, these are both types of sequences: the sensorimotor sequences, and the other is a purely temporal sequence. And yeah, so I'm not sure; I would question whether that's in between these two. That's the way we thought about it in our research here, but in the end, if we're going to solve, like, visual pattern recognition problems, you're going to be doing it this way, and if we're going to model how things play out in the brain, of course. Alright.
Reference frames are important in the sense that you're trying, it's like, you know, what does it believe is important? In other words, if I get an image that is totally displaced, I've told it this form is totally displaced from this other image, and I try to find the commonality. Well, the commonality is, like, you know: if I'm looking at this image slightly displaced, I need to relate this image with this image.
Like, there's this one bit in the output that says this is the dog. The problem is that that bit, while you propagate the error back from it and whatnot, doesn't do anything to the network, right? It doesn't change which synapses are, you know, available; it doesn't recruit parts of the network, right? And so in that sense there's some idea of, you know, class, right, but it's not being used as a reference.
...where I am, or where things are, what's on the counter, and so on. It's like, I don't have the problem of separating foreground from background like this: I have this model of the world, and as soon as I'm looking for some part of the model of the world, all that separation is taken care of. I know, because I know what everything is, and at the slightest little change I just re-anchor myself. So it's not like I look at the world and say: oh, it's all gray, where are the edges of these things?
Where is the edge of the kitchen sink, and where is the edge of this? With the model I have of the world, I know all those things. It's not a problem of separating out the background; it's a problem of knowing where you are, of knowing your location. And so if you don't have this idea of a reference frame, I mean, if you don't know what the structure of the world is, then your task is: you're just taking content in your input and figuring out what it is.
I understand, and I'd say: once your pattern models are sophisticated, you can do all these shorthands. But I would say that, if you look at, you know, how you get stepwise to that, you don't spring forth as a baby with this ability; there are levels of learning and adaptation to the environment, up to the point where I don't need all that stuff, I don't need the background separation, I can just do it. But I think you build up to it.
Anytime you take a child and train your child, you always present things in the clearest, simplest, most isolated form possible. You know, here is one letter; you know, you don't start off with a full word, because, you know, they're really not good at separating background from foreground.
We make it very, very clear and simple: one letter at a time, then one word at a time. So I think it's an incremental build-up problem. I guess I'm just reacting to your comment that, you know, you have the big problem of separating foreground from background. I think the question is: how do we build this model up piecemeal, you know, slowly, one step at a time? Well...
And I think we should talk about future steps, right: how are we going to slowly commit to probably the juicy reference frames part and not be tarnished by other work? Well, we don't know how grid cells work, we don't understand how orientation works, and we never got to the bottom of displacements. So I know that's like a huge can of worms; we're not even close to understanding it, and I don't want to focus on that.