From YouTube: ICML 2019 Recap from Subutai
Description
Discussion at https://discourse.numenta.org/t/icml-2019-recap/6161
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So there were about six thousand people there, about six times bigger than Cosyne, and it actually started on Monday last week, but we only showed up on Wednesday. You can see here sort of what a typical day would be like. Each of these is a separate, independent session, so there are parallel sessions going on. Each of these would have its own set of talks, so you had to figure out which talks you wanted to attend and go back and forth.
The setup, I thought, was not the best in many ways, because what they did was they would have a couple of talks that were 10 or 20 minutes, and then a whole bunch of five-minute talks, which are spotlight talks. So it was very difficult to run back and forth between them. What I ended up doing was just picking the 20-minute talks I wanted to go to and doing those.
The good thing is everything has a poster associated with it, so whether you went to a talk or not, there's a poster; you can always go back and talk to the people later. But it was a pretty hectic time going back and forth. I think both Lucas and I liked the workshops better than the main conference; the quality, I thought, was actually much better. The workshops, yeah, okay, some of the rooms were huge. So here's an example of that.
At the workshops, each workshop was actually like a mini conference. It's not like Cosyne, where you have just a few talks and that's it. For each workshop, people have to submit papers, which get accepted or not. Then there are posters and talks, invited talks and contributed talks. So it's like a little mini conference, a one-day conference, by the way. This is an example; I think they had like four big rooms like this and several smaller rooms.
This is a picture from my talk. I thought my talk went quite well; the recording is available if anyone wants to see it. I thought I got a lot of interest from the participants, and I think it was very easy for people to understand, and it was very independent of the stuff they were already thinking about.
There were like 10 or 20 people after the talk asking me questions, and then the rest of the time people were stopping me and talking to me about it. I think overall it went really well. People are very interested in trying out the sparse stuff, so that's good; we'll see what happens.
This is a little weird, because in the main conference there were six thousand people, but in the workshops there were significantly fewer, maybe one to two thousand people in their entirety. So when you had a workshop session, the room was not anywhere near filled up; sometimes you look at it and it looks like almost an empty room, no one showing up. But I did a count, and I think there were at least probably two or three hundred people at my talk.
So it's a very different feel; again, there are pros and cons. A lot of people came up at the posters. This is the main poster session. It wasn't as big as SfN, but it's pretty big; it's a pretty huge poster session. So I have a few pictures of things that were interesting, and then Lucas also has a few that we picked out. We'll just try to go through it pretty quickly, and then later on we can dive deeper into a few things. A couple of fun things first.
I sent this to Donna; I think she would be amused. This is called inverse knitting. It's a machine learning system that essentially takes a picture of some, you know, wool clothing with some pattern, and it figures out the instructions you need to send to the knitting machine to replicate it.
Exactly, so it takes an image and figures out what the angle is in the space, and I think they do a bunch of different things. Given two such images, they can figure out what the relative rotation would be to go from one to the other. They call this a spherical CNN. They also have generative versions of it, so you can give it an image and a delta angle, and it will rotate the image to the angle you want.
Inverse is a hardware company; I talked to them for a bit, they're in Los Altos, and they're making a kind of refrigerator-sized supercomputer that has hundreds or thousands of GPU-scale systems for doing machine learning, for doing deep learning. They're very interested in sparse stuff as well, because they think it might be a better fit for their architecture. I don't know too many details of the architecture itself.
There was one keynote talk; she's basically a quote-unquote neuroscientist at MIT, but she's really coming at it from a cognitive science perspective, looking at fMRI recordings and EEG recordings and so on. She talked a little bit about various examples of different cognitive science findings that perhaps might be relevant to machine learning. Personally, I didn't get too much out of it, but I think Lucas, you got stuff out of it.
Using this feature from, let's say, patch 23 as input, they try to predict the image from the patch directly below it, so the patch directly below becomes a training example for this network, and it's treated as a positive training example. Then they take a bunch of other random patches, and those are treated as negative training examples. They train this through backpropagation: they backpropagate through this network and then back through all of these other layers in this network.
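The training scheme just described can be sketched as a toy contrastive setup. This is my own minimal illustration, not the authors' code, and the details here (linear features, dot-product scores, an InfoNCE-style loss, five negatives per step) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W = rng.normal(scale=0.1, size=(dim, dim))    # prediction weights to learn

def info_nce_loss(context, positive, negatives, W):
    """Contrastive loss: the true patch-below feature should out-score
    the random-patch features (toy illustration, not the authors' code)."""
    cands = np.vstack([positive, negatives])   # row 0 is the positive
    s = cands @ (context @ W)                  # score each candidate patch
    s = s - s.max()                            # numerical stability
    return -np.log(np.exp(s[0]) / np.exp(s).sum())

def step(context, positive, negatives, W, lr=0.2):
    """One gradient step on the contrastive loss (analytic gradient)."""
    cands = np.vstack([positive, negatives])
    s = cands @ (context @ W)
    p = np.exp(s - s.max())
    p /= p.sum()                               # softmax over candidates
    W -= lr * np.outer(context, p @ cands - positive)
    return W

# Toy data: the "patch directly below" is a fixed linear function of the context.
A = rng.normal(scale=0.3, size=(dim, dim))
losses = []
for _ in range(600):
    c = rng.normal(size=dim)                   # feature of, say, patch 23
    pos = c @ A                                # patch directly below: positive
    negs = rng.normal(size=(5, dim))           # random patches: negatives
    losses.append(info_nce_loss(c, pos, negs, W))
    W = step(c, pos, negs, W)
```

On this toy data the average loss should fall below the chance value of log 6, meaning the predictor has learned to rank the true patch-below above the random patches.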
I'm saying that very much on purpose, because that was exactly the first thought that occurred to me: well, you know, what we might do is, given a patch and a movement vector, try to predict the patch where the movement, a saccade, might take you. They're not doing that; they're just always predicting the patch below.
Now, once you've done all this unsupervised training, you can put a regular classifier on top of it and try to teach the system specific categories. What they find is that, once you've done all of this self-supervised training, if you just give it a small number of labeled examples per class, this system does much better than an end-to-end, fully supervised system that's only trained on that task.
So you don't lose anything by doing this. And then the third thing is that they claim that if you do this unsupervised training... so a lot of people use an ImageNet-trained model and try to apply it to other image problems, transfer learning, and they claim that if you do transfer learning with their system, it works much better than if you take a purely ImageNet-pretrained system. Their argument is that by doing this unsupervised stuff, it's learning better features than a purely supervised training system.
So this means that you can train each one through local learning rules; you never need backpropagation through the whole network. So what is a gradient block? That's just one of these blocks here; they do learning within this block. So there are some gradients within this part, but you don't do backpropagation throughout the whole thing. And these guys, and the previous nets, have applied it to audio as well as images, so they can deal with temporal sequences.
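The gradient-block idea can be sketched with a toy two-block network. This is a minimal sketch under my own assumptions (tiny tanh blocks with a local squared-error readout), not the paper's actual architecture: each block computes gradients only against its own local loss and hands its output features to the next block as plain arrays, so no gradient ever crosses a block boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                 # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy binary labels

def train_block(features, targets, lr=0.1, steps=200):
    """Train one tanh block with its own local squared-error readout.
    All gradients stay inside this block; the caller only receives the
    block's output features as plain arrays (a "gradient block")."""
    n = len(targets)
    W = rng.normal(scale=0.1, size=(features.shape[1], 4))
    w_out = rng.normal(scale=0.1, size=4)
    for _ in range(steps):
        h = np.tanh(features @ W)              # block's hidden features
        err = h @ w_out - targets              # local readout error
        g_out = h.T @ err / n                  # gradient for the readout
        g_h = np.outer(err, w_out) * (1 - h ** 2)
        g_W = features.T @ g_h / n             # gradient for the block weights
        w_out -= lr * g_out
        W -= lr * g_W
    return np.tanh(features @ W)               # features passed on, no gradient

h1 = train_block(X, y)     # block 1: trained purely locally
h2 = train_block(h1, y)    # block 2: treats block 1's outputs as fixed inputs
```

No backpropagation path connects the two calls: block 2's learning cannot change block 1's weights, which is the property being described.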
They show the network a video of someone playing guitar and an audio sequence of someone playing guitar, and the network learns to form features that decide whether or not the audio was exactly aligned with the video samples. Once you can do that, you can use these networks for lots of interesting tasks. For example, they could show the network a single picture of someone playing a guitar, say a Fender guitar.
The point of this talk was that you need to deal with relative locations, not absolute locations, when you're training things. They did a construction task where, given a goal, you had to create stuff in this blocks world to match the goal. So here I think they're trying to connect stuff from the top to the bottom; here the goal was to cover all of these red items from above with blocks. And they basically showed that using relative locations instead of absolute ones worked better.
On the benchmark, I think we can try our model; it's based on the Hendrycks paper that we saw. Here they have all these different transformations of the image, and your model has to perform well on all of those, so the performance score is the average accuracy over all of those different transformed datasets, and they already have some baselines.
So they have baselines like capsule networks, which differ from what we compared against, and these guys are from Google.
So this paper, the evolution one, is doing basically the same thing: dropping connections by magnitude and then adding random connections. The only difference here is that he's doing neuron pruning; when he says neuron pruning, he's removing all the connections of a neuron, and he said it performs even better. We had an interesting discussion there.
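The pruning schemes being contrasted can be sketched as simple mask operations on a weight matrix. This is a generic toy illustration (my own code, not the paper's; the matrix size and sparsity level are made up): magnitude pruning zeroes the smallest-magnitude weights, while random pruning zeroes the same fraction of weights chosen uniformly at random.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))       # toy weight matrix
sparsity = 0.8                      # fraction of connections to remove

def prune_by_magnitude(W, sparsity):
    """Zero out the smallest-magnitude weights."""
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) > thresh, W, 0.0)

def prune_at_random(W, sparsity, rng):
    """Zero out the same fraction of weights, chosen uniformly at random."""
    mask = rng.random(W.shape) >= sparsity
    return W * mask

W_mag = prune_by_magnitude(W, sparsity)
W_rand = prune_at_random(W, sparsity, rng)
```

Both produce a matrix of the same shape at roughly the same sparsity; the difference is only in which connections survive.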
I went to several interesting talks, but I just made notes; I didn't prepare slides. So this poster, among other things, caught my eye besides the talks. And then for another one I just took a random picture, but clearly there was no one there to explain it.
This is work from, I think, Martha White; she's a professor, somewhere in the U.S. I think, and she's working on object representation learning. I only caught like five minutes of her presentation, but the five minutes I caught were talking about how sparsity was very important to them, and she's using sparse activations. So yesterday I downloaded her paper, she has a paper on that, and I'm going over it; again, I only caught five minutes.
We also had a very good discussion with Zhou from Uber, who did the deconstructing lottery ticket hypothesis paper; she was very into our work as well. I think she might even come by one day. And I think I saw a lot of interesting work on continual learning, and, you know, it's not here, but lifelong learning, continual learning, multi-task...