Description
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So I went to ICLR in New Orleans, and I had a couple of kind of high-level thoughts about it. Obviously it was all machine learning. There was a little bit of neuroscience, but not much. There were a bunch of people I met who were quite interested in really neuroscience-inspired stuff, but there was very little of it at the conference, so I think it's kind of good timing for us to be applying neuroscience. Everyone I spoke to was receptive to that kind of idea, particularly when I started talking about the specifics, so I think that's a good thing.
I tweeted this yesterday, or two days ago, but it was really striking to me to see how modern machine learning conferences are run versus neuroscience conferences.
So, first of all, every accepted submission has a full paper that goes with it. It's unlike COSYNE or these other neuroscience conferences. With COSYNE you submit, like, a two-page abstract, but all that's published is a one-page abstract, whereas here everything comes with the full paper, and there's a link to it in the notes.
Okay, yeah, somehow it got paused. But every paper has these detailed reviews, and the way they did the review process, it's all open. Anyone can see it. Anyone from the community can chime in with their own reviews of the paper. So during the whole submission process there are really nice reviews, and as an author you can respond as many times as you want to the reviewers.
That could be a problem, but they can control it, and in any case I didn't see any of that as I looked through it. You can see there are really nice reviews; the reviewers know their reviews are going to be public, so they do a better job with them. This is way better than the COSYNE reviews. After COSYNE there are tons of complaints about how reviewers would reject with, like, almost no explanation, or all they would say is that there's not enough detail, things like this, because you only had the two-page submission.
Here what happens is that there's an area chair coordinating a bunch of papers. Each reviewer gives their review and then gives a rating, and if there's ambiguity, the area chair can decide; the area chair is seeing a whole bunch of these and can make a decision about it. And you can see how long this one is, there's tons of detail, and here's, like, an anonymous person who said they really enjoyed the paper but were confused about something, and the authors can respond to that. I've seen this too. So it's a really nice process. I thought it was a very extensive review process as a result, and in part because of this, people basically treat these main machine learning conferences, if you get a paper accepted, as like having a peer-reviewed journal paper. In fact, almost no one in the machine learning field really thinks much of journal papers anymore, except a few.
It's all on OpenReview.net, you know. So you've got the whole thing there: you have questions, and as the person is talking you can look at this thing, or later on. It's very useful, and this is so different from a neuroscience conference, where they don't even let you take pictures of the presentations and the posters, which is so ridiculous, because I really like taking pictures and going back and reviewing and looking at them. Here there was, like, no stigma against pictures; you can just go and take pictures of everything.
For example, one of the systemic things: neuroscience conferences are dominated by experimental results, and that's one of the reasons people don't want to talk about their work. They view their experimental results as, like, "it took years getting this; I'm not going to let anyone else take it."
No, here the conference submission, the conference, is the gold standard, so people put tons of work into these. The amount of work that goes into a submission is quite amazing, even from a small group. They plan their whole year around these conference deadlines, and it's all about these deadlines. And the deadlines have an impact too, because there's a submission date and then there's a notification date, and all of this stuff happens within those two or three months.
If you don't get in, you can try again, but it's all going to happen within the two or three months, so there's a pacing to it, almost a project-management aspect to it, that I think speeds things up, and I don't see why that couldn't be done elsewhere. You know, with journals, often the reviewers will say, oh, run this other experiment, run this other experiment; that can stall things another two years, and those experiments are often not really necessary.
What else can I say? Overall, you know, neuroscience is inherently slower, but there's no reason to make every aspect of the scientific process slower, and there are just modern tools and modern ways of doing things. It was nothing like this 20 years ago, and I came away really impressed with just the process they've used. The other thing I think goes on behind the scenes is that reviewers themselves are rated, and it's all behind the scenes.
That gives an incentive to write good reviews, so you get good ones. And in most of these conferences, not all of them, the submission is also anonymous. Remember, like we had to do with our submission, there's a bunch of rules around how you make your submission anonymous, so the reviewers don't really know who wrote it.
There were, like, three things that could be done now. One is to just analyze the current data, then use the data to show how effects can currently be mitigated, because there are lots of effects, like flooding, and some cities are much more susceptible to them than others. And she said that global temperature charts are very reliable, but when you take that in hand with the local ones, there are huge variations.
Yeah, and then she listed some of the kind of machine learning challenges, and it's interesting that some of those challenges are tied to the stuff we've done with HTM: a lot of the typical machine learning algorithms assume IID and stationary data, whereas a lot of this stuff is streaming data and predictive models.
So this is a talk by DeepMind, and what they've done is they were looking at generating music in a way that takes care of all of the timescales. Music has timescales at the level of milliseconds that you have to worry about, but also at the level of minutes, which is the structure of the whole piece, and then seconds in between. So you build a model that can model all of those different timescales. And then what they did is they created this huge dataset.
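The multiple-timescale idea can be sketched with something much simpler than the actual model from the talk: a bank of leaky integrators, each tracking the same signal at a different speed. The function name and time constants here are my own illustration, not anything from the paper.

```python
import numpy as np

def multiscale_trace(signal, time_constants=(2, 20, 200)):
    """Run leaky integrators with different time constants over one signal.

    A crude stand-in for tracking structure at several timescales at once
    (milliseconds / seconds / minutes in the music case): each state
    follows the same input at a different speed.
    """
    states = np.zeros(len(time_constants))
    trace = []
    for x in signal:
        for i, tau in enumerate(time_constants):
            k = 1.0 / tau
            states[i] = (1.0 - k) * states[i] + k * x  # leaky update
        trace.append(states.copy())
    return np.array(trace)
```

On a constant input, the fast integrator converges quickly while the slow one lags behind, which is the whole point: the slow state summarizes long-range context.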
They put, like, nine years of recordings into this. There's a classical piano competition that's been going on for years, hundreds of hours where a piano player is playing different pieces, and what they did is they lined up the actual notes on the page with the notes pressed by the player, down to millisecond-level precision.
That's what they wanted to do, so they're collecting a dataset first, because what happens is, if you just have a MIDI piece or file and you synthesize that into music, it just sounds very plain, whereas if you have a human playing it, it's very expressive. And so they wanted to have an AI that would be able to play music that's as expressive as the best players.
Does that even hold in some sense? It's like, you know, first of all, there is emotional content to music, and it's very, very subjective to personal experiences in your history. And so it's like a Turing test, and the way to pass is that it has to fool a live human in a blind sort of way, but there's no absolute way to say it's better or different.
And that dataset is available; they call it MAESTRO. But the system doesn't really generate new pieces as such. As far as I could tell, it's mostly an intermediate representation. You can kind of seed it with some stuff and it does a little bit of generation, but it's mostly, given this intermediate representation, it will play something realistic, and you can play it in different styles, I think. But I'm not a hundred percent sure of this; I don't think it's really generating, it's not composing.
This is one of the few neuroscience talks. It's sort of one small result, but it's kind of interesting. It's from Surya Ganguli's lab, and they are trying to explain the fact that if you look at primates and their retinas, the responses of the ganglion cells are very simple, like center-surround, really simple feature extraction, not really feature extractors. But if you look at the mouse retina, you actually have oriented feature detectors.
Assume you have pixels coming in, and then you have some simulated retina and simulated brain, which is a convolutional network. What they did is, first, from the retina you have to go through the optic nerve, and so there's a bottleneck there: there are many more pixels in your retina than there are fibers in the optic nerve. And then the neocortex for primates is much more sophisticated than for mice.
You know, mice have a V1 and kind of a V2, but not much more. So they had two different systems, one which had a big, quote-unquote, neocortex, and another one which had very few convolutional layers, and then they trained them on things like ImageNet and stuff. And sure enough, they found that if you have a big convolutional network, you get, like, no oriented feature detectors in the retina stage, and if you have a small one, you do end up with feature detectors there.
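A minimal sketch of that experimental setup as I understood it: a wide "retina" squeezed through a narrow optic-nerve bottleneck, followed by a "cortex" whose depth you can vary. The real experiments used trained convolutional networks; this uses random dense layers just to show the architecture shape, and all the names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_model(cortex_layers, n_pixels=64, bottleneck=8):
    """'Retina' -> narrow optic-nerve bottleneck -> 'cortex' of varying depth.

    Random dense layers stand in for the trained convolutional networks
    used in the actual experiments; only the architecture shape matters here.
    """
    dims = [n_pixels, bottleneck] + [32] * cortex_layers + [10]
    return [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]

def forward(model, x):
    for w in model:
        x = np.tanh(x @ w)
    return x

mouse_like = build_model(cortex_layers=1)    # shallow "cortex", like a mouse
primate_like = build_model(cortex_layers=4)  # deep "cortex", like a primate
```

The question the paper asks is what kind of filters the layer feeding the bottleneck learns in each case when both systems are trained end to end.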
So they're basically asking, why would evolution do this in the mouse? And that was their result. You know, it's not trying to solve all of intelligence, but I believe this answer; it makes sense. And it means the primate neocortex can do a lot more with the visual system, so that's good.
Yeah, there were a couple of posters like this. I didn't really go into it in detail, but I would point to it as the machine learning take on this location-based idea. So, same thing here: they train a system to look at shapes and convert them into a program, literally source code that can generate that shape, using a neural network to do this. I have not read the papers yet. So you can see here that when you get a, maybe it's this shape, I'm not sure, but you basically get these loops.
Ian Goodfellow gave one of the talks; he's the person who came up with adversarial networks and generative adversarial networks. It's a topic that has kind of consumed a good part of the machine learning community now. So he gave a nice talk on how GANs and adversarial systems have been used in a lot of different ways. This is face generation.
He was showing how the quality of face generation through these systems has improved. This is from his first paper, and it was actually lower resolution than this; he just blew it up for the slide. And today the best systems are actually much higher resolution than this; you have to reduce it for the slide. So now, you know, you can generate extremely rich, detailed representations of faces.
The same thing, so that was faces, and then a similar thing now with ImageNet. You can pick one of the 1000 categories and then generate new instances of it. So in the beginning, a couple of years ago, they were generating flowers and the structure was not quite right; you can see there are mistakes, it doesn't really look exactly like a flower. And now you get these really rich-looking ones.
This one's messed up, but this one is, like, really good. What I don't have here is that they showed this with videos, where you can generate videos that are very realistic, and you can also do things like, okay, you have a video of a professional dancer, and now you can make a regular person dance the way the professional dancer dances. So there was a funny video.
In the spectrum of AI and machine learning technology, I would view something like this... I don't think this is AI, I don't think this is intelligent. I think this is some sort of, I don't know what to call it. It's not even machine... I guess you could call it machine learning.
I'm not saying they shouldn't do this; I'm just pointing out that this is an application which probably has more negative applications than positive ones, and it's also confusing in terms of what to call it. I'm trying to write about this right now, and I don't want to piss off people, and you know people complain. I need some language to describe these things.
There's been some interesting work on applications of adversarial networks in neuroscience that just came out, I think, a couple of weeks ago. The idea is: how do you know what inputs will make a neuron respond? It's a very hard problem, particularly if you're much later in the system, say in the visual system, in V4 and IT. They use adversarial networks to sort of probe this thing and figure out exactly what a neuron responds to, from images.
Okay, this is kind of a weird one, sim versus real. If you train a robot in a simulated world, you find that it doesn't transfer well to the real world, so there are various techniques for making that better. But what they did here is they take the real world and convert it to a simulated image of that real world. So they take, like, images of, whatever this is, I can't see, but they would convert it into, yeah.
The pruning one? Yeah, I don't know, there are many pruning ones, but that was one. This is a benchmark that Dan Hendrycks created for robustness. So we were adding kind of shot noise to our images, but he created a benchmark where you have all of these other types of noise, for MNIST and CIFAR.
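A corruption benchmark of that kind boils down to applying parameterized noise functions to clean test images. Here's a hedged sketch with just two corruption types; the names, severity scalings, and value ranges are my own choices, not the benchmark's.

```python
import numpy as np

def corrupt(img, kind, severity, rng):
    """Apply one corruption type at a given severity to an image in [0, 1].

    Two illustrative corruption types only; the real benchmark covers many
    more (blurs, weather, digital artifacts, ...) at several severities,
    and its exact noise parameters differ from the made-up scalings here.
    """
    if kind == "gaussian_noise":
        out = img + rng.normal(0.0, 0.08 * severity, img.shape)
    elif kind == "salt_pixels":
        out = img.copy()
        out[rng.random(img.shape) < 0.02 * severity] = 1.0  # stuck-on pixels
    else:
        raise ValueError(f"unknown corruption: {kind}")
    return np.clip(out, 0.0, 1.0)
```

You would then report a trained model's accuracy on each corruption type and severity, averaged into a single robustness score.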
We should recognize how people learn gradually, through moving around in the world and learning about the world and stuff, and so our robots should learn gradually too, and then eventually learn the tasks. And this is a chart of, not all the different stages, but a whole bunch of stages of development and what is done at each of them.
He's trying to formalize that, and he showed a bunch of examples of robots learning about their world and how they sort of learn gradually and then become more sophisticated, rather than the current approach, which is to just train it on a particular task and that's all you worry about. He had some really nice videos. This requires a kind of meta-model where you have to figure out, okay, is this task something I should try? And he's very much into the sensorimotor model.
One task is one that's really hard: no matter how much time you spend on it, in the near term you're not going to get better. And these two tasks are ones that you could learn. So he looks at kind of how fast this performance gradient is changing, and uses that as a way to switch to another task.
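The task-switching rule he describes, picking whichever task's performance is still improving fastest, can be sketched in a few lines. The window size and the infinite score for unexplored tasks are my own assumptions.

```python
def learning_progress(losses, window=5):
    """Recent drop in loss: mean over the older window minus the newer one."""
    if len(losses) < 2 * window:
        return float("inf")  # too little data: treat the task as unexplored
    older = sum(losses[-2 * window:-window]) / window
    recent = sum(losses[-window:]) / window
    return older - recent  # positive while the learner is still improving

def pick_task(history, window=5):
    """Choose the task whose loss is still dropping fastest."""
    return max(history, key=lambda t: learning_progress(history[t], window))
```

A task that is too hard (flat high loss) and a task already mastered (flat low loss) both score zero progress, so the learner's time goes to whatever it can currently improve at.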
What you'll see is a timeline: it does task three for a while, then when it gets pretty good at it, it'll switch automatically to task two, and it's not really going to spend time on task one. And what's nice is that he has actually applied this to educational things, creating automated ways of teaching kids math and stuff like that. So they would watch a kid going through exercises and then suggest new ones.
A
Will
be
different
and
then
you
don't
get
bored
of
math
or
you
get
I
thought
that
was
really
nice
and
said
that
doing
that
is
better
than
an
expert
teacher
teaching
the
case,
because
there's
no
way
the
teachers
want
to
know
all
the
details
about
this
person,
so
I
thought
that
was
that
was
pretty
cool,
he's
a
little
bit
of
a
flashy
speaker,
but
it
was.
It
was
a
good
talk.
I
just
took
a
picture.
This
is
a
really.
A
D
B
A
B
A
This again goes back to the openness of everything: so many people had QR codes on their posters.
This is one I need to spend a little bit more time on. This is by Thomas Miconi. Remember, he was a visiting scientist here for a few months; he's now at Uber AI Research. And this was maybe the only example of using neuroscience to improve deep learning that I saw, maybe one other, but this was his, and it's a very simple technique of applying a kind of dynamic plasticity to weights, motivated by dopamine or some modulation.
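The basic mechanics of plastic weights, as I understand the idea: each connection has a fixed weight plus a Hebbian trace scaled by a plasticity coefficient. This sketch uses random, untrained parameters just to show the update; in the real method the fixed weights and plasticity coefficients are trained by backpropagation, possibly with a modulatory signal gating the trace, and the sizes and rates below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = rng.standard_normal((n, n)) * 0.1      # ordinary fixed weights
alpha = rng.standard_normal((n, n)) * 0.1  # per-connection plasticity gain
eta = 0.1                                  # trace update rate (assumed value)

def plastic_step(x, hebb):
    """One forward pass where the effective weight is fixed part + plastic part."""
    y = np.tanh(x @ (w + alpha * hebb))
    hebb = (1.0 - eta) * hebb + eta * np.outer(x, y)  # Hebbian trace update
    return y, hebb

hebb = np.zeros((n, n))
x = rng.standard_normal(n)
for _ in range(3):
    y, hebb = plastic_step(x, hebb)
```

The trace changes within a single episode, so the same trained network can adapt its effective weights on the fly, which is what makes the idea feel neuroscience-flavored.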