Description
Sixteenth DevoWormML meeting, December 16. Attendees: Richard Gordon, Ujjwal Singh, Vinay Varma, Devansh Batra, Yash Agarwal, Bradly Alicea, Susan Crawford-Young, and Jesse Parent
C
D
B
So, yeah. I don't know who else is joining us yet, but why don't we get started, and people can join in when they arrive. This is the last meeting of the calendar year, so attendance will be fairly limited. The first thing we'll be doing is the hierarchical temporal modeling talk.
B
Well, first is the hierarchical temporal modeling talk, and then we'll go on to talking about GSoC 2020. I wanted to go over what the plan is and what the schedule is for that, since it's coming up pretty soon. I know it's a summer program, but the planning starts very early, at the end of the previous calendar year, so you have to come up with ideas now.
B
That way we'll be ready to go by the time students are applying to the program in January and February. Then we'll be talking about two papers: one that Vinay put up, which I'll share with you so we can discuss it, and then Jesse also had a paper that we'll just discuss quickly, since he's going to present it at a later time. So let's get started; let me share my screen first.
D
B
All right, so this is going to tie together a lot of what we've been talking about over the last several weeks: hierarchical temporal modeling, and maybe some alternate approaches to it. Hierarchical temporal modeling (HTM) was proposed by Jeff Hawkins, the inventor of the Palm Pilot; that's him at the upper left of this set of pictures. Around 2004 he wrote a popular book called On Intelligence. At the time he was working in industry.
B
So anyway, Jeff Hawkins had this metaphor: neuroscience is like an iceberg, where theory is the tip of the iceberg and data is the part of the iceberg under the water. We'll keep that in mind. Susan, you can go ahead and introduce yourself, and unmute yourself. Okay.
B
D
B
This is a link to resources on the On Intelligence book, and this is a link to Dileep George's dissertation; together they came up with this idea called the CLA, or cortical learning algorithm. That's what we're really talking about when we talk about hierarchical temporal modeling. This is the neocortex, and many of you will recognize this, maybe from textbooks or similar sources.
B
The upper layers are taking inputs from other regions of cortex; the middle layer is taking inputs from the thalamus, which is a separate structure in the midbrain; and then the lower set of layers is taking inputs from the brainstem. Basically, things get processed this way. This cortical layer could be in the visual area, like in V1 or V2.
B
So that one deals mainly with visual inputs. Or it could be in the auditory cortex, which is in the temporal area, inside here (temporal as in the lobe, not as in time). But they all have basically the same structure, and there are evolutionary reasons for this, of course, because it's easier to replicate a structure than to invent new ones. In humans we have a greatly expanded neocortex, and we devote a lot of our brain to this sort of structure.
B
You have these neurons, or neural pathways, organized in layers like this, and you can see that using different types of staining: Golgi, Nissl, and Weigert, which are different staining methods, mainstays of neuroscience. I'm not going to get into the procedure for each one.
B
The Golgi stain shows the axons, and the Nissl stain shows mainly the cell bodies, but you can see that things are organized: there are differences between the layers, structural differences as well as functional differences. That's pretty important for this technique, because it relies on these layers and a layered approach to processing.
B
Deep learning is a supervised approach, so you need labeled input data. HTM is basically unsupervised, so you can plug data into it and get something useful out. I'm not really sure whether you actually do need some sort of training, and there is a learning curve, of course, but the selling point here is that it's considered unsupervised learning, along with the batch size needed to learn.
B
That means that for HTM, very small data sets are sufficient to do an analysis, whereas with deep learning you require huge amounts of data. This is probably a debatable issue, but it's an important attribute of HTM. So HTMs are unsupervised; they're similar to clustering, or maybe to self-organizing maps.
B
If you've heard of those; it's kind of an obscure method used in computational linguistics and other areas. It learns continuously from new input patterns, and it's actually specialized for time-dependent sequences. When I showed you the brain, with those columns, when things are processed down columns of cortex like that, it takes into account two things: the spatial context of the data, but also the temporal context, the order in which things are presented to the network. It's also robust with respect to its inputs and outputs.
B
So this is an example; this figure came out kind of blurry in this mode, but it's comparing a pyramidal neuron and an HTM neuron. This first kind of neuron is a biological neuron, a pyramidal neuron, called that because it's shaped like a pyramid. You have a dendrite down here, dendrites up here, and more dendrites down here, and what happens in this neuron is that you get feed-forward input from other neurons as part of the network.
B
You have all these dendrites taking inputs from other neurons, and those signals go to the cell body here, where they get summed. There's also context, which is part of the feed-forward signal that arises when you get interactions between dendrites and things like that; it all comes up into the cell body.
B
In theoretical neuroscience they have functions for this summation process, but basically those inputs are summed or somehow weighted, then sent up the axon and out to other neurons. You also get feedback from other neurons at the synapses up here. So you get feedback from other neurons here, and a feed-forward signal from other neurons here. That's the typical structure of a pyramidal neuron. The context is nonlinear, and it occurs together with these feed-forward inputs.
B
By contrast, we have the HTM neuron, which is this thing here. As you can see, there is a series of registers, and those registers all feed into this cell body, or processing unit. You have feedback, context, and feed-forward all being collected in registers and then sent into the neuron. There are actually a number of summation functions here: one for feedback, one for context, and one for feed-forward.
B
It all goes into this neuron, and then it goes out to other neurons; the output is here. Feedback and context are parallel registers, so they don't really co-occur like they do in the real neuron, and feed-forward is a convolution of feedback and context. When it enters the main cell it gets convolved somehow, and then you have an output. So it does mimic real neurons, but there are differences, and I wanted to point that out.
B
Think of grasping a mug, where you're using your touch to grab onto the mug, doing it so that you're not squeezing it too hard and breaking it, but also not holding it so loosely that you drop it. Then you also want to move your hand and move the mug somewhere else. There are a lot of things that actually go on in the brain when that happens.
B
So this network has to encode these peripheral receptive stimuli, and it does so using this cortical hierarchy. It does this by bringing in sensory input, but also locational input. There are locations on the mug that are important relative to your fingertips; your fingertips grab those locations and move the mug, but those locations also have a spatial context in the sensory input from your fingertip.
B
This context is also temporal, meaning that there's a time before you touch the mug, then you have the mug in your hand, and then, as you're moving it, there are differences over time in terms of the grip and things like that. So there's a lot of information in there, and you end up having locational inputs and sensory inputs brought together in this input layer; and these actually sit in different columns.
B
Each location on this mug is represented with a different cortical column. Things come into the input layer and get processed, but they also get sent up another level, to the output layer. The output layer is actually stable with regard to the sensor movement, but it's also used to communicate across columns. The inputs are here, and those change relative to the sensor position, your sensor being your fingertip; though it could be a robot hand too.
B
That's where the term sensor comes from. The outputs, which remain stable relative to the sensor position, are here; they put these moving targets into a framework that can be shared among the different locations, and hence the different columns. So that's one task they were able to model using this technique, and that's how you would set the problem up.
B
For different problems you'd set the columns up differently, but they basically work in this way. Jeff Hawkins has a quote; I don't know where it came from, but I found it online: "If you solve a problem no one has solved before, people will take notice." He's referring to his approach, because of course people will ask, when he gives these presentations on it,
B
how does this differ from deep learning, and how is it better than deep learning? His answer is: if you can solve something no one has solved (and by solve I mean find a good solution to a problem that existing methods don't perform well on), people will take notice and say, oh, that's a pretty good technique. So one way you can do that is to address a problem like morphogenesis with the HTM model.
B
Given our interests in this group, I just brought up two examples for people who aren't familiar: you have Turing's reaction-diffusion model, and you have positional information. Those are two competing models in the literature. But let's think about this in terms of using the HTM approach: how would we rephrase these questions in morphogenesis using HTM? This would be a totally different approach than either reaction-diffusion or positional information.
B
How would we apply HTM to this problem? I just bring this slide up to get people thinking; I'll share the slides and you can think about it later. We've been talking about how to apply deep learning and machine learning techniques, like regression and other types of things, to biological problems like image processing. One of the problems in image processing is this morphogenesis problem, which you'll find characterized in, say, movies of developmental processes.
B
How could we model that using HTM, and is that even appropriate? I'm not going to discuss that now, but I want people to think about it, and maybe we can discuss it later in different ways. I want to get on to the other alternative models. I've been talking about HTM, but there are other models, very different from deep learning and machine learning, that I wanted to cover, again just to get people thinking about them and aware that they exist.
B
In these models, it's the temporal pattern of the electrical activity going through the network that matters. They have concepts like rate coding and other types of coding: you take action potentials, spikes, and you sum them up, or you count them per interval, or whatever; there are different types of coding. That is the important parameter in these networks, not so much the topology per se. So the idea would be that you train something to have a characteristic temporal pattern.
B
This is a better example here. There are two papers that I found in the literature, fairly recent papers on applications of neuromorphic models. You have this network, you have a computational layer, but what's important about neuromorphic models is that you can encode them into hardware. There are a lot of models that have been coded in hardware; Carver Mead was a pioneer in this area.
B
That was about 25 years ago or so. He developed neuromorphic models that you can encode directly in hardware, and they have this sort of design where, instead of using a neural network, you're using something more akin to a circuit; but one designed in a way that mimics what the brain is doing, instead of a traditional circuit.
B
Another approach is called dendritic trees, and this is again where you're using a different part of the neural machinery to represent your problems. Instead of a deep learning network, where you have connections between nodes that mimic neurons and then weights that mimic synapses, you're actually computing in what they call dendritic trees. Again, I'm not going to walk through it in detail.
B
But I'm going to point you to a review paper here where they talk about using dendritic trees as a memory structure. They review what's going on in the brain, and this author, Mel, has actually done a bunch of work in this area. He's at USC, the University of Southern California; Bartlett Mel is his full name, and he's published a lot of work on these dendritic trees. So that's another way:
B
You can represent your problems in terms of what's going on in the dendrites, as opposed to in the cell body itself. And then, finally, there's another approach I wanted to mention. It's a little bit closer to neural networks, but it's different from deep learning and the other types of neural networks we've talked about, and that is neuroevolution, and something called NEAT. What does it stand for? Oh yes: NeuroEvolution of Augmenting Topologies. You're basically evolving neural networks, and they use a genetic algorithm for this.
B
With the genetic algorithm, they call these things a hybrid model: you have two models (I don't know if "competing" is the right word, but in this case the models are working together) to produce some sort of action. You have this neural network, which takes in observations; an action is made on the environment; and then that results in a fitness.
B
So instead of having a loss function that you're trying to minimize, where you don't really know which parameter to minimize on, the genetic algorithm is able to do something a little bit more continuous, and as long as you get the fitness function fairly right, you can get a much better solution to your problem. There are a couple of citations here, and this is basically just talking about how you evolve the neural network by augmenting these topologies.
B
It's an area that's been around since maybe the early 2000s, but people have published on it; in fact, this review is from this year, so people are still using this approach. If you look it up online, there are even videos of people incorporating this into robots, robotic design, and other types of applications. Video games, too: I think there was one where they were using it to play video games, like they do with deep learning models tested on Atari games. So, yeah, there are some interesting applications there.
B
Now, you were talking about both models, the deep learning and the HTM. These are all models, and there's this saying that all models are wrong, but some are useful. We kind of go by that philosophy in a lot of this, so let's keep that in mind. I think, though, that this is an interesting foray, getting closer to how the brain works.
B
But of course we don't really know how the brain works, because there's just so much that a lot of these models can't do, and we don't really understand whether the brain does it that way. That's why people are exploring things like dendritic networks and other aspects of neural processing. Dick has pointed to his book, Embryogenesis Explained; actually, I think he was referring to the models.
B
The models of morphogenesis, that is, which is true: they're probably wrong in a lot of ways. Dick has pointed to his book to give you an introduction to his position on morphogenesis, and to how, as he was mentioning before, the positional information model might be wrong. So again, like I said, I think the "all models are wrong" comment applies there as well. Susan Crawford-Young brings up Boltzmann's constant; that was for what model or context?
B
Yeah, like I said, I can send the slides out, and if people are interested in a certain area, they can dig into it a little bit more. Maybe we can go a little bit further into it in a later meeting; we can follow up on some of these topics. We've done that a couple of times in this group: we've followed up on a certain topic if it's really interesting. So, any other questions on that?
B
Yeah, well, I'll send out the slides, and again, if you have any specific things you'd like to explore further, things you want to talk about, we can do that offline, or maybe next year or something; but definitely keep thinking about it. I'd like to move on to a couple of other things. The next order of business is the GSoC proposals for 2020, and like I mentioned, this is something that we have to start thinking about now to get into the swing of things.
B
This is a pre-trained model proposal; this is the one that I've come up with for DevoWorm, and as I mentioned before, DevoWorm is hosted under OpenWorm. What happens is that OpenWorm goes to an organization called INCF, and we ask them for slots; they sort of broker between us and Google for slots in the program, and we usually get as many slots as we ask for. What happens with OpenWorm,
B
since INCF is a neuroinformatics group, is that I've taken a lot of the DevoWorm material and sort of sold it as being relevant to developmental neurobiology. We don't necessarily talk about that a lot in the projects, but that's sort of the idea. So this is the thing I have written up for this year: pre-trained models for developmental biology. This "developmental machine learning" part is where I've actually included machine learning in this one.
B
So this talks about a project that will center around building a pre-trained model for developmental biology and neurobiology; I had to put that in because of the sponsor organization's machine learning interest group. Our group here has published a blog post, and I'm going to cite that blog post here: the one we made on creating models and on the advantages of, and need for, pre-trained models in this area.
B
Biological development is characterized by characteristic shapes, movements, changes in shape, and temporal processes that define important features. Then I go into what we want applicants to think about during this: each applicant will be writing a proposal, and I'm trying to give them some idea of how to frame it. So I talked about what we're interested in, extracting spatiotemporal features from image data, and the pre-trained model is described here as a network with non-random weights.
B
That allows you to generalize about your problem space. In linguistics they have a certain set of pre-trained models that are specialized for language processing, and there are other types of pre-trained models that give you things like bounding boxes. But we don't really have anything that's developmental-biology-friendly or neurobiology-friendly, and that's what we're after here. Then the proposal talks about the assets of the OpenWorm Foundation.
B
It talks about your institutional support, and then the programming requirements (I see a typo over there). The programming requirements listed right now are C++ and Python, plus machine learning platforms such as TensorFlow and Keras, and I think those are fine. There are things I could add; I don't know about the languages, but speaking from experience, I think Python is the best language for an open setting. We're creating open-source software, something that's relatable to a scientific computing context.
B
In our other group we have someone, last year's GSoC student, who is exploring different languages like Julia and all these other languages, but those might be a little too obscure for some applicants. I don't necessarily want to go into a huge debate about it now, but I'd like to hear people's feedback on this proposal. So I'll send out a link to the proposal, and then you can comment if you wish.
B
I'd think probably just in terms of the mechanics of what we're after: for example, if you think it's not really something that can be done in a summer-long period. I'm trying to give applicants sensible bounds for the applications they're going to write.
B
So if, as Ujjwal was saying, you were interested in medical images or something, you might be able to fit that into one of those projects as well, if you write it with a little bit of relevance to the general OpenWorm goal. You know: "I'm interested in developing a technique for image processing"; it could be applied to medical images, or it could be applied to images of a worm.
B
We can maybe draw upon the movement database which we have online, and we can draw from other types of data, not just developmental data: different types of secondary data that are out there, and not even just image data. We have data like gene expression data, and different types of image data where we're extracting things like stains and fluorescent markers; so you're not just segmenting cells, you're actually measuring different things.
B
Okay, so today, let's see. Today we'll hear briefly about how we can extend last year's project, and about a proposed new project, so that we can finalize it as this year's project as well. Why don't you turn on your mic and say a few words about that?
E
This year I specifically want to focus on how to increase the reach of our work, as well as our project's foundation; that is, participation. This year I would like to make our project accessible to the outside world, to the people it has impressed, so that our organization can engage more with these kinds of people, and we can then grow at a very nice pace. So this is the work that, in my opinion, should be done during this GSoC.
E
The projects in 2019 were basically based on segmentation of C. elegans embryo images. These are the exact descriptions of the GSoC work done up until 2019, and we all know what they decided to do in their research projects, so I'm just going to read this one. In 2019, for the segmented C. elegans images, they used the results of the segmentation to collect data about the static locations of individual cells, in order to establish the changes in their relative positions over time.
E
They used microscope images and data for research based on this. There was also a former project, which has since been completed. So what is the progress so far? This year we made a significant improvement to our project: we can now also deploy our primitive web app, which we have available at the link given.
E
B
E
In my opinion, some of this work is still in progress, and some of it works to a degree but needs refinement. I want to talk specifically about the tasks which, in my opinion, have not been completed: refining the segmentation, and calculating the error of the computed cell positions against measurements from our data. So that is the work done so far, and this is the work which remains a problem.
E
So what still needs to be done in this project? We have done some work on the project under supervision, and next, for a few issues like search engine optimization, we can do some other things so that it becomes easy for outside people to discover and access what exists.
E
Developing it like this will increase the reach of our work, as well as awareness of what we are doing and what we know. I think another important thing is that we can make libraries out of it, so that it can be used elsewhere, where publishing is much easier.
E
If we are making something, I guess we can make small libraries for it, so that whenever a person comes along and has to do segmentation or something like that, he can simply include a library and use the segmentation tools which we have built. The same applies to the other parts of the project, like calculating an error or segmenting an image.
E
That does not exist right now, and we have to make sure that what is left in the project becomes something people can use themselves. So, how do we increase this? There are techniques which we can apply: as you can see, we have applied some advanced techniques in creating models in other projects, and we can use these kinds of models in all the existing projects so that people can see the results.
E
Second is improving on existing techniques. For example, if we have the data set, or we can approach it, we might try to change the technique from last year; it might improve things, or it might not, but we should keep a check on that. That includes remaking the web app, maybe with React Native; the current web app is based on a sketch of an application and is plain HTML right now.
E
We can extend that and add many features during this GSoC, so that we can deliver a complete final product, so that whenever someone is trying to access our project, he can very easily interact with it. This will make the project look more polished to people outside who are interested in developmental biology. So what should be the architecture of this project? In my opinion, we should divide the plan into two parts.
E
Tracking the data that people upload will help us see how our reach is increasing, as well as help us keep a check on every single dataset and request. As for access to our data: some of the data is not open source, and I don't know much about what kind of data we have, but for the data which we have, we don't want to make it unavailable; we only want to make it available to those who are working with us.
E
We also want to make a dashboard-type feature which will give us all the statistics: which image is being uploaded, where its centroid is, what the error rates are, and all these things in a single dashboard, to make it easier for the user to read the results.
B
E
In his project, whatever he's doing this time, we extend it into a one-stop destination to access all the things we have built, so a user can see everything we offer and click on what they need. This will help because all the things that exist will be in one place, and he won't have to go to other websites.
E
Then there's improving the existing techniques: as we have said, we have applied advanced techniques in other projects, and we can also transfer all those techniques to this project this year and see if we can implement them. Releasing and deploying the whole code as a package in a library, I think, is optional, but it would be good if this could be done within the summer.
E
It increases our reach in many ways: if anyone wants to use segmentation for their biological image analysis, they can make use of our library. It might be for worms, or it might be for other topics, but they can use our code base directly for their images. And platforms like Jupyter notebooks and Colab can easily use what we have made.
E
So this is how we can approach this GSoC. I have made a final document on this, which I will share over Slack, so any suggestions are welcome; let me know if you have some sort of suggestion or want to add to it. I guess we are at the end of the discussion.
E
F
So the presentation was very nice, and I could see you covered most of the topics and the through-line of the work, so there's definitely scope for improvement there. What happened last year, when I was a GSoC student, was that we sort of ran into some dead ends and some time-delaying events, like working with Java.
F
On the 2D things that we have in the project: there are some things that we wanted to use but did not proceed with, because Douglas and I saw some difficulties if we were to start those things. I can definitely discuss those things with you, and the difficulties that I faced; it gave me quite a lot of experience in those three months.
F
Yeah, and also the idea that you have introduced that I hadn't included is a good thing. That definitely sounds interesting to me, because people can keep a record of what they have uploaded in the past, and they can come back and refer to what they uploaded and to the result analysis, and many more features, if you can think of them. I also have some understanding of what needs to be improved, so I can definitely help you with that, and also with making sure people know about this project and work.
B
I'm gonna go to the chat. Dick said: how about calculating the volume and surface area of each cell versus time? That's another thing that we kind of did but didn't really finish. The idea is that right now you have images and you can extract the data, like cell size and things like that, but we don't really have it organized with respect to time explicitly. That might be another thing to do; it might be easy or hard, depending on what's involved, but we have the infrastructure in place already, so it wouldn't be that hard to extend. Devansh says: I think you mean React.js and not React Native, which is meant for mobile apps.
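Dick's suggestion of tracking volume and surface area per cell over time could be sketched roughly like this, assuming the existing pipeline already produces labeled 3-D segmentations (one integer label per cell, 0 for background). The function names here are illustrative, not part of the current codebase:

```python
import numpy as np

def cell_stats(labels):
    """Per-cell voxel volume and exposed surface area for one
    labeled 3-D segmentation (0 = background)."""
    stats = {}
    for cell in np.unique(labels):
        if cell == 0:
            continue
        mask = labels == cell
        volume = int(mask.sum())  # voxel count
        # Surface area: count voxel faces whose neighbor lies outside the cell.
        area = 0
        for axis in range(mask.ndim):
            # Pad with background along this axis so edge voxels count faces.
            padded = np.pad(mask, [(1, 1) if a == axis else (0, 0)
                                   for a in range(mask.ndim)])
            fwd = np.roll(padded, 1, axis=axis)   # neighbor at -1 along axis
            back = np.roll(padded, -1, axis=axis)  # neighbor at +1 along axis
            area += int((padded & ~fwd).sum())
            area += int((padded & ~back).sum())
        stats[int(cell)] = {"volume": volume, "surface_area": area}
    return stats

def stats_over_time(frames):
    """frames: list of labeled 3-D arrays, one per time point."""
    return [cell_stats(f) for f in frames]
```

Surface area here is the plain voxel-face count; a marching-cubes style estimate would be smoother but needs more machinery.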
B
Anyway, thank you for that, Ujjwal. You can put that on Slack and share it, maybe as a Google Doc, so that we can edit it, or trim the permissions so that people can get the link and edit it.
B
Write it up kind of like how you presented it. I think it's good to reference the past versions of the project and then what you plan to do. You don't really even need to spell out what coding languages need to be used; you already know that. All right, that's good. We'll continue this discussion, probably on Slack, and I can loop people in by email as well.
B
It's the top of the hour, and I want to continue a bit with a couple of other things, so if you have to leave now that's fine, but we'll try to continue maybe for another 20 minutes. What do we have here from Vinay? "We were able to extract cell centers and areas from images. We can extend this to work on videos. We have the algorithm and web app in place." Yeah, I think that's true.
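Vinay's summary, cell centers and areas from images extended to video, amounts to a per-frame measurement loop. A minimal numpy sketch, again assuming segmentation has already produced labeled frames (names are illustrative):

```python
import numpy as np

def centers_and_areas(label_frame):
    """Centroid (row, col) and pixel area for each labeled cell
    in one segmented frame (0 = background)."""
    out = {}
    for cell in np.unique(label_frame):
        if cell == 0:
            continue
        rows, cols = np.nonzero(label_frame == cell)
        out[int(cell)] = {
            "center": (float(rows.mean()), float(cols.mean())),
            "area": int(rows.size),
        }
    return out

def measure_video(label_frames):
    """Extend the per-image measurement to a video: one dict per frame."""
    return [centers_and_areas(f) for f in label_frames]
```

Running `centers_and_areas` over every frame of a segmented video gives exactly the "organized with respect to time" data discussed above.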
F
How much does it depend on the data that we have versus the data that it has been trained on? And also, is the performance increment good enough to trust the model? Because we cannot afford many false positives or false negatives when we are dealing with healthcare data, so I wanted to bring that up. Also, we can use that paper to extend the blog post that we have written previously.
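The concern about false positives and false negatives in healthcare data comes down to which confusion-matrix rates you report alongside accuracy. A small sketch of those rates (illustrative, not from the project code):

```python
import numpy as np

def confusion_metrics(y_true, y_pred):
    """Rates that matter when false positives/negatives are costly."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = int((y_true & y_pred).sum())    # true positives
    tn = int((~y_true & ~y_pred).sum())  # true negatives
    fp = int((~y_true & y_pred).sum())   # false positives
    fn = int((y_true & ~y_pred).sum())   # false negatives
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }
```

Reporting the false positive and false negative rates separately, rather than a single accuracy number, is what lets you judge whether a performance increment is "good enough to trust."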
B
We can talk about that maybe in another meeting next year, but Vinay wanted to point that paper out to people, plus he wanted to talk about perhaps extending the blog post that we wrote on pre-trained models in the direction of transfer learning. I think that's great. I don't know if people want to participate in that; we can talk about it some more, but maybe a good way for Vinay and me to start is with a draft paper, and maybe this article would be a seed for that: you go through the paper and write down some ideas. Jesse is interested, so we'll get that organized, and then eventually we'll do the same thing as we did for the pre-trained models blog post. It might be a bit shorter, but we'll get it organized, maybe offline or on Slack or something; we can figure out a way to write it. So that's something else interesting.
B
There was also, I think, returning back to the GSoC project: we can use Google Cloud this time so that our model runs at a faster rate, make our model a little more complex, and optimize the infrastructure that we have on OpenDevoCell. So yeah, the blog post: we can talk about that more as well. If you're interested, let me know, and then I can connect you into it wherever we end up doing it.
B
Please check out that paper link. I also want to talk about something with Vinay... or actually, no, this is for Jesse, so I'm going to share my screen again. I want to talk about this paper that you sent a while back: "Reconciling modern machine-learning practice and the bias-variance trade-off." You want to do a longer presentation on this later, but I just wanted to talk about it a little bit today.
B
The idea of this paper is the bias-variance tradeoff. It's a problem that you find in a lot of machine learning algorithms. The bias-variance tradeoff implies that a model should balance underfitting and overfitting. Underfitting, of course, is when a model does not match what it is trying to predict, so it's not doing a very good job of predicting it. Whereas overfitting is when it's picking up on certain features and over-predicting those features.
B
And it's not giving itself enough variance to find the solution: it's picking up on certain features and over-representing them. So underfitting and overfitting are both things we don't want in our models. With overfitting you're talking about fitting spurious patterns; with underfitting you're talking about not being able to figure out what the structure is. That's the problem they address in this paper.
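Underfitting and overfitting can be seen concretely by fitting polynomials of increasing degree to noisy samples of a smooth function: a degree-1 fit underfits, a moderate degree fits well, and a very high degree chases the noise. A small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth function; separate train and test sets.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.1 * rng.normal(size=n)
    return x, y

x_train, y_train = make_data(30)
x_test, y_test = make_data(200)

def errors(degree):
    """Train and test mean squared error of a degree-`degree` polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

for degree in (1, 4, 25):
    train_err, test_err = errors(degree)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}  test MSE {test_err:.3f}")
```

Typically the degree-25 fit drives training error down further while its test error climbs well above the moderate fit's; that gap between train and test error is the overfitting the paper starts from.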
B
So in this paper they talk a little bit about that problem, and then they introduce a new concept called the double descent curve. In the textbook literature of machine learning, they talk about a U-shaped bias-variance tradeoff curve, which they show in the paper right here in Figure 1, where you have the test risk versus the training risk and you have underfitting and overfitting.
B
So when you get to the middle of the capacity range here, you sort of minimize between underfitting and overfitting. In the test case, you get this risk of overfitting where capacity is high, and they're saying that this is the traditional view.
B
What they're talking about now is the double descent risk curve, which shows that if you observe the behavior of a lot of high-capacity algorithms, you actually begin to see this pattern, where you have the classical regime and then the modern interpolation regime. So this is a little bit different way of thinking about the trade-off, and I think they actually use this on real models.
B
The old model kind of comes from earlier methods of machine learning, and this one is based on looking at datasets that exist in the real world, in real applications. So they are basically improving upon that model, and they go through how this double descent curve works in neural networks. They go through some of the math, and they were actually able to model it theoretically, so there are some nice graphs in here that show how this idea works.
B
So they talk about it as being about having a better understanding of model performance, and about how this opens up new lines of inquiry to study the properties of these networks themselves. We were talking about this in last week's talk: the inherent properties of these models and how they're understudied. This is one example where they're actually studying the basic attributes of these models.
B
Jesse is going to give a longer presentation on it in the new year, so look forward to that. Dick has a question: he has a link that's too big to paste here; what does he do? You can send it to me, and I can send out an email after the meeting with the link in it; I'm going to send out some materials on what we've been discussing here anyway.
B
So you can just do that: send it to my email and then I'll send it out to everyone else, because I don't know how long a link they accept in this chat; I don't know why it wouldn't work, but just send it in an email and I'll send it out. You could also use something like TinyURL if you wanted to shrink it down.
B
Okay, Jesse says: "Sorry, I'm limited in my ability to communicate in depth right now, but we'll give a more full presentation on this in the future." Oh, it's the size of the text, not a link; send it as a file. Yeah, you'd have to send me the file; you can't send files in this meeting space, but send me the file by email. It should work on Gmail.
B
So the next thing I wanted to talk about was something that Vinay brought up on Twitter, and which I'd actually seen before: Geoff Hinton is now suspicious of backpropagation. I saw some rumblings about this earlier this week, for those who follow the NeurIPS discussions.
B
So this is something that someone posted on Quora. Geoff Hinton is the inventor of, sort of, deep nets; there's dispute over whether he's the absolute inventor of them, but he wrote one of the papers that's most well known. Neural networks actually started back in the 1980s, and backpropagation came about back then as well; there were a couple of papers published on neural networks with backpropagation that led to this whole explosion of work in neural networks.
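For reference, backpropagation itself is just the chain rule applied layer by layer. A minimal from-scratch example on XOR (a toy sketch, not how any production framework implements it):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data: not linearly separable, so the hidden layer must learn features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: chain rule from the loss back to each weight.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)
    # Gradient descent step.
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The backward pass reuses quantities from the forward pass and sends error signals backward through the same weights, which is exactly the biologically questionable part Hinton is referring to.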
B
They're talking about backprop and how, although it's adequate for modeling, it's not really the way the brain does it. One point is that backpropagation in deep neural networks has about as much to do with how the brain learns as modern jet airplanes have to do with the way birds fly: both jets and birds fly, but they do so using entirely different principles. Jets can do things that birds cannot.
B
Birds can do things that jets cannot. So backprop is like feedback in the neuron, as we saw in the talk today, but the brain doesn't do it in the same way, so maybe we can do better. Actual neurons in the brain largely work by spike trains, which we also talked about in today's talk: each neuron is sending out dot-dash messages, like Morse code, to neighboring neurons. So there's a temporal aspect that modern neural networks aren't picking up on.
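The temporal point can be illustrated with a leaky integrate-and-fire neuron, the simplest spiking model: information is carried by when spikes occur, not by a single activation value. A toy sketch (parameter values are arbitrary):

```python
import numpy as np

def lif_spike_train(input_current, dt=1.0, tau=20.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: returns a binary spike train.
    The timing of the 1s carries information, unlike the single
    activation value of a standard artificial neuron."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)   # leaky integration toward the input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spikes[t] = 1
            v = 0.0                  # reset membrane potential
    return spikes

weak = lif_spike_train(np.full(200, 1.2))
strong = lif_spike_train(np.full(200, 3.0))
print(weak.sum(), strong.sum())
```

The stronger input produces a denser spike train; a standard ANN unit collapses that whole temporal pattern into one number.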
B
Let's see. Apparently Francis Crick was very skeptical of neural network models. He made the analogy to Aristotle, who simply declared that men have more teeth than women; of course, had Aristotle simply looked inside Mrs. Aristotle's mouth, he would have discovered that she had the same number of teeth and that he was obviously wrong. I don't know exactly how that analogy fits together, but similarly, Crick felt that neural network models were largely ignorant of the way the brain and real neurons worked, which is maybe a fair criticism. But we're trying to abstract function from biology, so I think it's a pretty good attempt, even though it's not biology. Also, deep learning models are incredibly wasteful of training data, and we've talked about that; unsupervised learning and reinforcement learning must be the primary modes of learning, because labels mean little to a child growing up. In other words, we don't need supervised learning to learn like children do.
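As a reminder of what "no labels" means in practice, the classic unsupervised example is clustering: k-means groups points purely from their geometry, with no supervision signal. A toy sketch (deterministic initialization chosen just to keep the demo reproducible):

```python
import numpy as np

def kmeans(points, k, n_iter=50):
    """Plain k-means: partitions unlabeled points into k groups."""
    # Deterministic init: k centers spread evenly through the data order.
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centers = points[idx].astype(float)
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0, 0), scale=0.3, size=(30, 2))
blob_b = rng.normal(loc=(5, 5), scale=0.3, size=(30, 2))
labels, centers = kmeans(np.vstack([blob_a, blob_b]), k=2)
```

No labels ever enter the algorithm; the structure is recovered from the data alone, which is the mode of learning being argued for above.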
B
Children learn in unsupervised ways, and that's really kind of the key to learning. We do some supervised learning, but we also do a lot of unsupervised learning, and that's where you have to focus your efforts. So I think that's an interesting set of commentary; if you want to look at it some more and think about it, please do. Like I mentioned, this is our last meeting of the year. Thanks everyone for participating, especially those of you who have made every meeting; I'm very appreciative.
B
Okay, Dick sent me a paper by email. I'm going to send an email summarizing some of the points from this meeting, and then I'll be in contact in the new year about our meeting schedule for next year. Have a good holiday, everyone; thanks for meeting once again, and I'll see you on Slack or online.