Description
Thirteenth DevoWormML meeting, November 25. Attendees: Richard Gordon, Ujjwal Singh, Vinay Varma, Asmit Singh, Bradly Alicea, and Jesse Parent
A
We have a few more sessions before the new year, so if you want to present anything you're welcome to present, and then we'll probably continue in the spring, but I haven't finalized the schedule for that yet. I think next week Ujjwal was planning on presenting; we were talking about it. He has a proposal he wants to share with us. I'm never entirely sure what it is, but he's going to.
A
Right, yes. I'm glad you could make the new time. I would mention that Jesse, who is usually in our meetings, went to Envision, which is a conference at Princeton, this last weekend, so I don't think he's made it back from that yet. I actually gave him some materials to do a presentation there if he had the chance; it was sort of a free-form conference, so he may have been able to present. He may not have been able to, but anyway, I was kind of hoping he'd make it so he could tell us a little bit about it. He can tell us next week, that's fine. Dick, did you have anything, any news to share with us?
C
Yeah — here's how the diatom works: like a candle does. Okay, how does a candle work? You've got this molten wax and a wick, and the molten wax goes up the wick by capillarity. Then a chemical reaction occurs; the stuff gets hot, so some of it, I believe, burns off, and that leaves vacancies in the wick, so more liquid comes up, and you get a continual motion of liquid up the wick.
C
We need a chemical reaction, okay. Okay, so here's a diatom, oriented the same as the orientation of the candle. Here you have these little vesicles of fluid coming in here, emptying into the slit, which is called the raphe. They're fibrous, and when they touch the water they hydrate and get thicker. They're coming in at one end, and then all of them move up this way, and as they move up they stick to the surface. This bar is the substrate, and then that moves down.
C
If there are little particles here, the little particles will move up. Okay. So it's the same thing: this region here acts just like the wick in the candle, and this stuff goes in all the way, and then it reacts with water and takes a form that can't stay inside, and it comes up. So the reaction in this case is hydration with water, yeah.
C
Yeah, pretty crazy ways now. I'm working on a paper on the creation of black cult leaders. Okay, the idea is that you want to stand on an object where the gravitational force at the surface of the object is the same as on Earth, but you'd like the object to be a lot smaller than Earth.
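To make that constraint concrete (this is my own added illustration, not something presented in the meeting): for a uniform sphere, surface gravity is g = GM/R^2 = (4/3)πGρR, so holding g at Earth's value while shrinking the radius forces the density up sharply. A minimal sketch:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g_earth = 9.81  # target surface gravity, m/s^2

def required_density(radius_m):
    """Density a uniform sphere needs so its surface gravity equals Earth's.

    From g = G*M/R^2 with M = (4/3)*pi*R^3*rho, so rho = 3*g / (4*pi*G*R).
    """
    return 3.0 * g_earth / (4.0 * math.pi * G * radius_m)

for radius in (6.371e6, 1e3, 100.0):  # Earth radius, 1 km, 100 m
    print(f"R = {radius:>10.0f} m  ->  rho = {required_density(radius):.3e} kg/m^3")
```

At a 100 m radius the required density comes out around 3.5e8 kg/m^3, far denser than any ordinary material.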
A
Yeah, so I guess I'll move on to the thing I wanted to discuss in today's meeting. Last week we talked about reinforcement learning, and at the end of the talk there was this discussion about feedback and cybernetics and the connections between cybernetics and reinforcement learning. It got me thinking that maybe there's a broader theme here, so I prepared some slides that I think are interesting. I don't know how connected they are — I mean, they're connected to a lot of things I've talked about — but I think it's worth going over.
A
Machine learning, reinforcement learning, and cybernetics — that's the theme of the presentation here. This is just kind of a bunch of things around this theme, and I just wanted to show people what the opportunities might be in this area and what the current thinking is. People are thinking about it.
A
So there's this article I found by Carlos E. Perez, who does a lot of writing on machine learning and deep learning. He runs this blog called Intuition Machine, and he wrote an article, "What Deep Learning Can Learn from Cybernetics." I think he actually does think a lot about cybernetics outside of his machine learning ideas, so he sees a lot of synergies between artificial intelligence and cybernetics, and in this article he goes through some of the history of cybernetics and artificial intelligence.
A
So, for those of you who are unaware, artificial intelligence started maybe in the 30s and 40s, when people started to come up with the idea — I mean, the idea of automation and of creating artificial behaviors started well before that, but the modern field of artificial intelligence started back then, and cybernetics was a little bit later. They developed sort of together in some ways early on. So in this article he talks about artificial intelligence and cybernetics having different ways of dealing with the same issues.
A
I'll put it in here, yeah. A lot of his writing is on machine learning, but there's a lot of other stuff he's got in his mind, so that's worth looking into. But let me go back and present. So the point he's trying to make here is that there are a number of different issues that are dealt with in both fields, and they've dealt with them in different ways. There's this basic notion that you have to have cognitive systems that are autonomous, and then in artificial intelligence —
A
You deal with that in certain ways. So, for example, cognitive systems have an inside and an outside, and that's sort of referring to embodiment; organisms map external objects to internal states, and that's dealt with in artificial intelligence in terms of both representation and memory. But in cybernetics you have things like organisms mapping through an environment back onto themselves as sort of a feedback mechanism, and nervous systems producing adaptive relationships, and that's part of memory. So you can see it's a very complex figure, but he walks through kind of the differences in the two fields.
A
And so at some point, you know, these fields were kind of developing together, and at some point they diverged. He actually has this graph in that article that shows kind of the development of AI, with sort of landmarks in artificial intelligence, and it sort of maps the development of AI in terms of — well, so the Macy conference was a conference that happened in the 50s, a cybernetics conference — hold on a minute.
A
So the Macy conferences were a coming together of all the early people in cybernetics, and this happened in the 40s. About the same time you had the McCulloch-Pitts neuron, which is the standard model of how you have a neuron — you know, a system of neurons that are connected together. I'll show you a picture of that in a minute. But from about the 30s on to about the 60s...
A
...there was a connectionist thread that was being developed, and this was oftentimes in conjunction with cybernetics. So you had a lot of ideas that were very compatible with cybernetics, where you had a bunch of neurons that were connected, and people thought that they were going to solve some pretty big problems. Then you had some criticism of the perceptron, which was this McCulloch-Pitts neuron sort of developed a bit more, and this is the onset of symbolic thinking in AI.
A
Then eventually, in the 80s and 90s, we started to get these statistical models like support vector machines, and they became prominent. And there are connectionist systems like deep learning, which are like neural networks, but they are divorced now from cybernetics and they sort of operate on their own, in their own space. So that's basically what this graph tells us. Let's see if there's a comment in the comments — let me get that comment. Oh yeah, Jesse Parent — yeah, so he's really excited.
A
So this is an image of the McCulloch-Pitts neuron. This is one of the earliest devices we have that is artificial-intelligence-like, and the idea here is that it resembles a neural network — it really is like the first neural network — but it's a very simple model. The first paper McCulloch and Pitts published on this argued that neurons and synapses are actually logical computing devices. Up until that time in neuroscience, they were developing...
A
...ways to view neurons, to stain neurons to see what their function was. They were exploring different types of neurons, but they probably hadn't thought too much about the computation, and this is the first paper to argue that neurons and synapses are logical computing devices. What that means is that if you take a bunch of inputs and you weight them and put them into this black box, you get a sum, and then it's either above threshold or below threshold, and that gives you an output.
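As a concrete illustration of that weighted-sum-and-threshold description, here is a minimal sketch of a McCulloch-Pitts style unit (my own example; the function names are illustrative, not from the meeting or the original paper):

```python
# A minimal sketch of a McCulloch-Pitts style threshold unit.

def mcp_neuron(inputs, weights, threshold):
    """Weighted sum of binary inputs compared against a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds the same unit realizes basic logic:
AND = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

With different weights and thresholds the same unit realizes different logical rules, which is what gets picked up next.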
A
So there are different logical rules you can combine: you weight the inputs to make sure they're summed properly and that there's a proper response in terms of the threshold. But McCulloch and Pitts weren't actually the only people thinking about that, and nor is this the only thing you need for something like a nervous system or some intelligence. Gordon Pask, who was another cybernetics person, published a book called Conversation, Cognition and Learning, and he argued that intelligence resides in conversations, or what we might call interactions.
A
So his argument was that if you have a neuron like this, it's not enough to really give you very much. You can do very simple things, like classification, with something like this, but you need a bunch of neurons in parallel to get a nice representation — and of course this is the way connectionism developed, right? We have these neurons in parallel, and they're doing processing, and that's where your intelligence resides.
A
So deep learning is really an instantiation of this idea, where you have a bunch of layers of neurons that are communicating, and their conversations are being filtered by some algorithm, by some set of thresholds, and we're getting an answer. Another way to think of a deep learning algorithm: it's like a giant cocktail party where people are talking and you're extracting information from those conversations. But Alan Turing also thought about this.
A
He came up with something called a B-type unorganized machine, and this is actually something that's been lost to history somewhat. There's an article — it's more of a philosophy of science article — but it goes over the B-type unorganized machines quite well. His vision of a neural network was the simplest possible version of a nervous system, which consists of a NAND logic gate with a modifier.
A
So this is a B-type unorganized machine, and you can array them in parallel or however you want to array them, but this is basically the simplest possible version of a nervous system. You can have multiple B-type unorganized machines to make a more complex nervous system. Basically, you have these units that are connected directionally, and each unit is a NAND logic gate. For those of you who don't know what that is, it's a logical operation — a modified AND function (NOT-AND).
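A very rough sketch of that idea, as I understand it from the description above: every unit is a two-input NAND gate, and connections pass through "modifiers" that can be switched on or off. This is my own toy illustration, not Turing's construction in detail.

```python
def nand(a, b):
    """NOT-AND; NAND alone is functionally complete."""
    return 0 if (a and b) else 1

# Because NAND is universal, other logic falls out of wiring NAND units:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def modifier(signal, enabled):
    """Crude stand-in for a connection modifier: pass the signal through when
    the link is enabled, otherwise hold the gate input at 1."""
    return signal if enabled else 1

print(and_(1, 1), or_(0, 1))            # 1 1: derived gates built from NAND alone

# Two inputs feeding one unit, with the second link switchable:
x, y = 1, 0
print(nand(x, modifier(y, True)))       # 1: the 0 on the second input reaches the gate
print(nand(x, modifier(y, False)))      # 0: the link is cut and the gate input is held at 1
```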
A
Okay, so now I'm going to turn to deep learning and the more recent sorts of developments, and I want to talk about a paper that's interesting here. I found this — actually, it was an article, like a popular science article, about this. A lot of people in machine learning are getting excited about this method called the information bottleneck, and it's a theoretical idea about how deep learning networks learn. It actually goes back...
A
...it's been fairly well cited since then, so it's been kind of toiling in the background, and now people are getting excited about it more popularly. But this is actually an arXiv paper from 2000 by Naftali Tishby, Fernando Pereira, and William Bialek. William Bialek is actually a biophysicist, and he happens to have also worked on this problem.
A
But the idea here is that the information bottleneck method actually describes sort of how information theory can be applied to deep learning. They proposed this idea that, basically, in a learning network or a deep network, you retain the features most relevant to general concepts using a version of compression called lossy compression.
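For reference, the information bottleneck objective is usually written as a trade-off between compressing the input X into a representation T and preserving information about the label Y (standard notation, added here for context):

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

where I(·;·) is mutual information and β controls how much prediction is traded away for compression.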
A
So the most important part of learning here is forgetting, and that means that when you learn things, you encode the important features, but you also have to forget other things — you have to forget things that are extraneous to the problem. So this is their sort of theoretical result. This is a figure from the Quanta Magazine article; this is the recent iteration of this work, sort of revisited, and they basically argue that these networks converge to an information-theoretic bound.
A
What that means is that you have this bound where you can train a model and you can compress the data, but if you compress the data too much, you sacrifice prediction accuracy. So the idea is that you're trying to find the optimal compression of the data, and beyond a certain level of compression — if you compress the data too much — you've lost your prediction accuracy; you suffer in terms of that.
A
This sounds, of course, like compression algorithms in computer science, where they try to compress data down into a smaller package. You can do it in ways that are lossless, meaning you lose no information, but there are also lossy techniques that can still reconstruct images. If you take too much out of that image, though, then you've lost your image or your data — you're getting data loss — and that's basically the idea that they're working from here.
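As a quick aside on that lossless versus lossy distinction, here's a minimal sketch (my own illustration using Python's standard zlib module, not anything from the papers being discussed):

```python
# zlib is lossless: the exact bytes come back. Quantizing the values first
# is lossy: it packs smaller, but the original can only be approximated.

import zlib

data = bytes(range(256)) * 4            # some sample bytes

# Lossless: compress and get the identical data back.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy: throw away the low 4 bits of every byte before compressing.
quantized = bytes(b & 0xF0 for b in data)
packed_lossy = zlib.compress(quantized)
restored = zlib.decompress(packed_lossy)

print(len(packed), len(packed_lossy))   # the lossy version packs smaller
print(restored == data)                 # False: information was lost
```

The quantized version compresses into fewer bytes, but the original can no longer be recovered exactly — that's the data loss being traded for compression.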
A
So they proposed that there are three phases to this. One is a fitting phase, which is where the networks learn labels, just like your typical learning phase in a machine learning algorithm. Then you have a compression phase, which is where networks become good at generalization — this is again like the training phase of an ML algorithm. In between, they propose that there's a phase transition, and in physics that term is used to denote some sort of change in state, like from a solid to a liquid or from a gas to a solid, and it's a rapid change that occurs.
A
It's basically a trade-off graph where you have information about the input data and information about the output label, and they're showing these points here where you're testing different layers of a network and looking at the results in terms of the different phases of the problem. So some layers, you know, do less well than others. I'm not going to go into the paper really deeply here; I just wanted to make you aware of it.
A
I'll make these slides available later so that you can look at them and get the citations for this paper. But then there's another paper that's interesting, and this kind of follows up on this idea of forgetting. It's called "Selective Brain Damage: Measuring the Disparate Impact of Model Pruning." This is an arXiv paper, but there's also an OpenReview version, and the OpenReview papers are interesting because they're published with reviews — in this case, conference paper reviews — so you can read what people thought about it.
A
One of the open reviews here actually said that this paper has a novel finding but that it can't be applied to anything, which is — I don't know what that means. I think it was submitted to a machine learning conference or something, so let's just say that one person's treasure is another person's trash, or something like that. The lesson there is, you know, submit it to the right place. But anyway.
A
So what they're talking about in terms of model pruning is that there's a neural network technique called pruning, and that's where you can remove a vast majority of the weights in a network — whether it's a brain or a neural network — with little degradation in accuracy. So we might ask: if you take nodes away from a neural network, does it have any effect? And the answer is usually no. You can reconfigure the network in a lot of ways and compensate for losses in the network.
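The simplest concrete version of this is magnitude pruning: zero out the smallest-magnitude weights and keep the rest. A minimal sketch (my own NumPy illustration, not the exact procedure from the paper):

```python
import numpy as np

def prune_by_magnitude(weights, fraction):
    """Zero out the given fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_pruned = prune_by_magnitude(W, fraction=0.9)
print("nonzero before:", np.count_nonzero(W), "after:", np.count_nonzero(W_pruned))
```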
A
This is also somewhat true for the brain. When, say, someone suffers brain damage and part of their brain is removed, their brain — depending on what part of the brain it is — can reroute that network through other existing pathways and cobble together something that functionally resembles the original function. So it's actually a fascinating feature of networks that this can be done.
A
One is to remove parts of the network that are essential to function, and so if you remove those parts, the network is degraded for everything. Or you can have these exemplars, which are introduced to the network as inputs, and when they go into a degraded network, a pruned network, they can either work well or they can basically blow up the model. And so what are these PIEs, these pruning-identified exemplars?
A
Well, they tend to be hard-to-generalize images — maybe unique images that have unusual features, or mislabeled images. So it might be like what we talked about with false positives, with the example from adversarial networks or from pareidolia, where you mislabel something: you have a thing that looks like a face, but it's not a face. So when the algorithm hits that example, it doesn't know what to do with it, or it's something that requires more detailed classification.
A
So there are a lot of things in the world that are not well classified, and that's one of the examples that they give. This is actually a compression technique, and they tested this pruning process on ImageNet and ResNet networks. ImageNet and ResNet are both used for image-based pre-trained models, as we talked about in another meeting, and they used this pruning technique on them.
A
It's a compression technique, so it's compressing your data in a very abstract way, and you want to see if the network is resilient to this compression. Their conclusion is that there are unknown trade-offs governing pruning and network resilience. This is the part of the paper where I agreed with that reviewer, in that they didn't really explore...
A
...this trade-off very much in the paper, but they do mention it. The idea here is that there's a trade-off between taking things out of the network — maybe key nodes or key edges, or whatever those happen to be, versus just extraneous nodes or edges — and the amount of resilience exhibited by the network. And there's one area in the literature that they could have cited that might be a follow-up to this, and that's what you find if you Google the term "complex network resilience."
A
That term actually yields a number of papers — some of them on brain networks, some of them on electrical networks, a wide range of complex networks — but they actually do experiments where they build models of these networks, remove nodes and edges from them systematically, and look to see which nodes and edges, if they're removed, have a catastrophic effect on the function of the network.
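A minimal sketch of that kind of knockout experiment (my own illustration with the networkx library, not code from any of the papers mentioned): build a toy network, remove nodes, and track how the largest connected component shrinks.

```python
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=1)   # hub-heavy toy network

def largest_component_fraction(graph):
    """Fraction of remaining nodes in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(graph)) / graph.number_of_nodes()

# Remove the highest-degree nodes first (a targeted attack on hubs).
H = G.copy()
by_degree = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
for node, _ in by_degree[:20]:
    H.remove_node(node)
    print(f"removed node {node}, largest component fraction = "
          f"{largest_component_fraction(H):.2f}")
```

Targeting hubs first typically fragments this kind of network quickly, while random removals leave it largely intact — the same asymmetry described next for power grids.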
A
Sometimes, if you remove maybe half of the nodes, it doesn't have much of an effect — things can be rerouted. If you remove one node in the right place, it can destroy the network as a functional entity. One way to think about that is electrical networks, where blackouts occur because you have some node in some location that might seem insignificant, but it sits, maybe, in between two electric grids.
A
And so, if you take that node out, you can have a huge blackout. That's kind of the idea we're getting at here. There's more follow-up to be done on this idea, but it does relate to the things we're talking about — you know, feedbacks and complexity — and there's nothing on that in terms of the literature they cite. This isn't the only example of pruning, though; there is another paper called "Optimal Brain Damage."
A
People like this idea of brain damage, although it's not really brain damage in this case. In biological brain development, pruning is actually common — it's actually normal — and this is used as an analog to neural network pruning of excess capacity. So what happens in development? This is a model of, I think, visual cortex in a human fetus, then into an infant, and then into a young child.
A
So this is the fetus, where you have neurons appearing in the visual cortex, and you can see that they have very sparse connections — the cells are just being born. Then in the newborn you have a little bit more connectivity. Then, as they start to interact with their environment, their postnatal environment, they start to gain a lot of connections, so from three months to two years they get this massive number of connections between the cells. You would think that more connections are better, but actually that's not true.
A
...you get pretty efficient transmission of information, but it requires this overgrowth and pruning, and perhaps because the brain is a self-organizing system, that's necessary. But we can make use of this in artificial systems too. You can make computational models more compact without losing accuracy: you can take a model, overgrow it with connections, and then compress those connections — and this is analogous, of course, to weighting your connections and filtering them through thresholds, but in the vertebrate nervous system.
A
So where are we? This is another figure — I can't remember where it comes from — but it's a big, complex map which is suitable for what we're talking about here. Here are cybernetics and information theory, and then, of course, there are all these connections to different areas. We don't even really have learning on here; we have artificial intelligence as a subset of cybernetics, but we do have all these ideas, so we can go from cybernetics to artificial intelligence to computational theory.
A
We have neural nets, and something called hierarchical temporal memory, which is a form of AI we might get to, maybe in the last meeting, and then that all leads to something called emergence, which is about self-organizing systems. I put this slide in here for Jesse, largely because I think he'd find it useful to think about all the connections between things, but I want everyone to think about it — think about machine learning broadly, you know, don't just think about it narrowly.
A
Let's see, okay. Jesse's got a comment and Dick had a comment. Dick says the original perceptron was at the University of Illinois — "I saw it when visiting Ross Ashby." Jesse says, "awesome, Richard — maybe Bradly can check if it still exists, he's in Urbana." Yes, so about the lab that you're talking about: there was a lab at UIUC years ago where they did a lot of early cybernetics, and I think Ross Ashby was a visiting scholar, and they...
A
Actually, what happened was they had a significant amount of military funding, and going into the late 60s and early 70s they started proposing things that were, I guess, not really in the military's purview. I don't know exactly what that means, but I think they were exploring their minds and they were kind of moving more towards, like, maybe social theory or other areas where the military was less interested, so the military stopped funding the lab, and the lab went away.
A
Now, there was actually a history-of-science person on campus who documented the history of the lab, so I also know of a link to that. They have really good documentation of the lab and its history, so I can send you that link. That would be interesting, I think, to Jesse and maybe to Dick, and anyone else who's interested. It just kind of goes through what they did in the lab. They did a lot of the, you know...
A
So there's that, but yeah, I'll also send out that link. So, let's see — Gordon: "Tishby et al. sounds like just computed tomography from a limited number of views, and this equals reconstruction of an image from a limited number of projections." So I think that's maybe what they were getting at. I think they were thinking in terms of information theory, in terms of their model for how information is processed.
A
You had, you know, very small buffers of data to work with — you couldn't really just throw anything into them, so you had to really compress your data, and that's where a lot of that literature comes from. And still, even if you use a zip file or an image file, there are compression algorithms behind those that compress the data down to this sort of essential set of bits. So yeah, they are talking about something very similar to that.
A
So, Richard again: "If pruning identifies critical nodes, is there work on replacing critical nodes of circuits to reduce criticality?" I'm not sure. I know that people have done work on — I guess, I don't know if they're called chaos chips, or chaotic circuits — there's been some work done on that, where they're looking at resilient chips and resilient architectures that sort of reduce this threshold, you know, criticality-type stuff. So I don't know the state of that. I mean, I have some readings on it.
A
Maybe I can send people readings if they're interested, but I don't think it's at the forefront of research in the chip industry — I'm not sure. But yeah, that is a thing of general interest: finding networks that are resilient to failure. You can imagine with electrical networks it's very important to keep those running, but there are also other types of networks that can fail. So Jesse had a comment: "These maps are super helpful. I forgot —"
A
"— if the session is recorded." Yeah, I can give you a recording of this slideshow here, and I can actually send you the slides as well on Slack, so you can have the slides and follow up on these things. So Richard says "see my..." — this is the Gordon and Stone paper, the cybernetic embryo one. This is in his folder.
A
We don't have a direct link to the folder, but for those of you on the folder — I think I can send a copy of this out to people. It's a book chapter, so it's not readily available, but this is the first version of the cybernetic embryo paper. There were two versions of this: one where Dick worked with Rob Stone on it, and the other one where I worked with Rob Stone on it. This is the first one, so I can send that out to people who are interested. And then Richard says...
A
"Send me your email address if you want access to my online papers." So if you could put your email address in the chat, people could contact you — okay, very good. You can contact Dick Gordon if you want to be added to his repository. And one more comment: the ILLIAC was 64 parallel computers and was from the University of Illinois — and yes, that's true.