From YouTube: Numenta Research Meeting, Nov 20, 2019
Description
Topic is "Does sparsity help Continual Learning?"
Hosted by Vincenzo Lomonaco.
Alright, we are all here, so today we're talking about sparsity and how sparsity can help continual learning. So the question is: does sparsity help continual learning? We know a bit already; there are a couple of papers pointing in that direction. Intuitively it seems right, in the sense that you have fewer patterns and representations colliding with one another, and probably that also has other implications.
Certainly, especially if we talk about sparsity in the weights, we expect less interference among the weights. So I guess it's easier to store, in a single model, different concepts, skills, and knowledge, and that's something we want to do in continual learning. I don't know if that automatically leads to, you know, generalization; that's an interesting topic we could investigate.
Essentially, in this brief presentation, I was more focused on a simple application of sparsity on the representations, using the k-winners implementation that you guys proposed in your paper. So actually it's just a very simple start: just a few experiments, with very few hyperparameters involved and dimensions considered, but I guess it's a start, and, you know, we'd like your feedback.
We can build on top of this. So basically the benchmark we are considering is permuted MNIST, which is not, from my perspective, a great benchmark for continual learning in general, because it's very small, it's easy to overfit, and for many different reasons it may be difficult to scale these results to bigger networks and other datasets. But it is widely used by the continual learning research community.
So that is my caveat here: the results I present are just a preliminary evaluation, where I haven't varied everything I care about or taken into account every hyperparameter. It's just a first analysis of this problem, and maybe it can trigger a few interesting questions and discussion.
And so, for those of you... I guess all of you know what MNIST is. For those who don't: MNIST is essentially a classification dataset, a very small dataset of handwritten digits, and this is the base on which we are going to build our permuted MNIST tasks. Each task, essentially, is a fixed permutation applied to the basic dataset.
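As a concrete illustration of "a fixed permutation applied to the basic dataset", here is a minimal sketch in PyTorch; `make_permuted_task` and the data path are my own illustrative names, assuming torchvision is available, not the code shown in the video.

```python
import torch
from torchvision import datasets, transforms

def make_permuted_task(seed: int):
    """Return train/test MNIST with one fixed pixel permutation applied."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(28 * 28, generator=g)        # fixed for the whole task
    tfm = transforms.Compose([
        transforms.ToTensor(),                         # (1, 28, 28) in [0, 1]
        transforms.Lambda(lambda x: x.view(-1)[perm])  # flatten, then permute pixels
    ])
    train = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test = datasets.MNIST("data", train=False, download=True, transform=tfm)
    return train, test
```

Each seed gives a different fixed permutation, so task 0 is plain (but shuffled-pixel) MNIST under one permutation, task 1 under another, and so on.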
We learn on the first distribution to distinguish these ten digits, then we move to the second one. With the same model, we would like to keep our ability to discriminate the digits of the tasks we have already seen, while still learning the new permutation, all in a single model; and we do that for ten different tasks. So we are subject to catastrophic forgetting, and we don't cheat: when we go to a new task, we don't maintain the training data of the previous tasks in memory, and we don't retrain on it.
As we try to learn the new data distribution, we erase part of our ability to recognize digits under the previous distribution of task one. So that's the setting, and the problem we are trying to tackle: trying not to forget.
[Audience question, mostly inaudible.] Yes, that's a possibility. And just to sidebar something: trying to do these experiments, it was very difficult to disentangle the several factors involved, and I guess in this case, with a very huge search space, it's just better to stick to a simple setup. I tried.
Ideally, we would also assess how this learning is affecting, you know, the future things the model is going to encounter, and whether it can exploit the new knowledge to solve new tasks. But in this case, what we learn about a specific permutation is not really transferable, so you cannot easily measure transfer; I'm going to show you the performance in that sense. But we can still assess, always looking at the accuracy on each task, how forgetting is happening in every different task's distribution.
That's the setting in which we are operating. What I'm doing here... I mean, I tried several different architectures, but in the end it's nicer to just pick a few of them, for the sake of clarity. The results I'm getting are very interesting, but it's difficult to generalize the findings to other settings. Essentially I consider three models. The first is an MLP with just one hidden layer: we start from the MNIST input, so essentially we are just flattening the image, 28 by 28.
In this case we consider four thousand hidden units in the linear layer here. I also tried different dimensions, say one thousand, and it works, but the results were really particular to these hyperparameters, as we're going to see next; it was really fun to look at very strange, weird results. And for each architecture I consider two versions, a dense one and a sparse one, just to compare whether we really need to sparsify the network.
OK, so the second model is just a deeper multilayer perceptron: I just stack these linear-plus-k-winners patterns again, and then we classify. And then for the CNN, I tried just a simple CNN with just one convolution here, 64 channels with a 5-by-5 kernel, and in this case I used the convolutional k-winners and then a linear hidden layer.
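Since the layer itself isn't shown here, this is my own rough sketch of the k-winners idea these models use: keep only the top-k activations per sample and zero the rest. It is a minimal stand-in, not Numenta's nupic.torch implementation, which adds boosting and duty-cycle tracking on top.

```python
import torch
import torch.nn as nn

class KWinners(nn.Module):
    """Keep only the top-k activations per sample; zero out the rest."""
    def __init__(self, percent_on: float = 0.025):
        super().__init__()
        self.percent_on = percent_on

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.percent_on * x.shape[1]))
        threshold = x.topk(k, dim=1).values[:, -1:]   # k-th largest value per row
        return x * (x >= threshold).float()

# e.g. the one-hidden-layer sparse MLP described: 28*28 -> 4000 -> 10
sparse_mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 4000),
    KWinners(percent_on=0.025),   # ~100 of 4000 units stay active
    nn.Linear(4000, 10),
)
```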
Generally, if you think of the k-winners, it's like saying: I don't want my activation to go above a certain sparsity; it's an upper threshold. So what you're saying is that units shouldn't be activated more than that, but if the network wants to be even sparser, that's fine; you just leave it free and let gradient descent do what it would normally do. So that's a plausible idea.
There are a couple of papers here, with the actual titles, so you can look them up; I think they are somehow very aligned with what we are doing here, relating sparsity and continual learning. This paper, essentially from 2012, was one of the first papers introducing the permuted MNIST tasks in a continual learning setting, and I guess it was also one of the first ideas about sparsity in this context: the idea that units need to compete among each other, and that in that way you can emerge with these sparse representations. That was the whole idea around the paper, and they were explaining that this competition among units is important so that you can learn things continually, which is interesting; I should read it again.
Okay, so this is the task we're seeing. We're going to just import some things here, just a few utilities. Here I just wanted to show you the actual images, but it's something we could also draw on a whiteboard. So I created, you know, a continual learning dataset class, so that we can handle, with a simple class, this continuous stream of data.
Okay, so essentially the dimensions we are talking about are these: we have 60,000 images for the train set, and then, of course, we also have the labels, the same number of labels, and ten different tasks, so for each task we have these dimensions. Okay, so again, these results should be taken with a grain of salt, in the sense that we should actually run them more carefully. First of all, they are not really deterministic.
When you run these experiments on GPUs and you accumulate, essentially, the errors across ten different tasks, you can end up with different results, a few percentage points of difference, like 1, 2, or 3. So it's something that you should in general do as multiple runs, you know, averaged; and we should also consider different permutations and different orders of these permutations. So that's just a caveat.
The green bars? Okay, great.
The way I work on these projects is to have the continual learning utilities, the basic utils stuff, you know, in a single class. In this case I have MNIST, and I have a continual version of it, in which I can, you know, create an iterator that can be used very easily to go through all the tasks, with some utility functions written specifically for it. So this class, for example, is already made so that it can handle both the split MNIST and the permuted MNIST variations.
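A hypothetical sketch of the kind of task-stream wrapper just described, reusing `make_permuted_task` from the earlier sketch; the class name and interface are illustrative, not the actual code shown in the video.

```python
from torch.utils.data import DataLoader

class PermutedMNIST:
    """Yield the ten permuted-MNIST tasks as a continual stream."""
    def __init__(self, n_tasks: int = 10, batch_size: int = 128):
        self.tasks = [make_permuted_task(seed=t) for t in range(n_tasks)]
        self.batch_size = batch_size

    def __iter__(self):
        # one (task_id, train_loader, test_loader) triple per task
        for t, (train, test) in enumerate(self.tasks):
            yield (t,
                   DataLoader(train, batch_size=self.batch_size, shuffle=True),
                   DataLoader(test, batch_size=self.batch_size))
```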
So this iterates really directly over all the benchmarks I have. Then we have a directory with the experiment parameters, so in this case I use the standard config files, maybe the same style you used for your paper, so I guess this is great as well. I used Sacred, a tool that already keeps track of parameters, variations, and so on.
The only thing that I'm a bit sorry about, the only hideous thing I think, is that I have to use these functions to convert every possible, you know, boolean, and it's annoying. What I do here is, when I choose the configuration, if I want a bool, I have to use this function. I don't know, maybe there's a better way.
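For reference, the kind of conversion helper being lamented usually looks like this; purely illustrative, not the speaker's actual function.

```python
def str2bool(v: str) -> bool:
    """Parse config values that arrive as strings into real booleans."""
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise ValueError(f"Cannot interpret {v!r} as a boolean")
```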
Again, what I do here is to keep track of all the experiments, all the different config files, so that they can be in a single place. And then I have the classes, like a single MNIST class, based on PyTorch, just PyTorch and a little bit of nupic.torch, your PyTorch approach. So those are the main dependencies, I would say, and I have just two different classes for the models, the plain CNN and the MLP, and then I have a directory with the results.
You know, all the stats I collected about this. I mean, the nice thing about Sacred is that all these things you can store in MongoDB, and then you can just retrieve, from the MongoDB record of the experiment, all the things that you logged about it. Well, that's my advice.
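A minimal sketch of the Sacred-plus-MongoDB setup being described, assuming the `sacred` package and a local MongoDB instance; the experiment name, config values, and metric names are all illustrative.

```python
from sacred import Experiment
from sacred.observers import MongoObserver

ex = Experiment("permuted_mnist_kwinners")
# older Sacred versions use MongoObserver.create(...) instead
ex.observers.append(MongoObserver(url="localhost:27017", db_name="cl_experiments"))

@ex.config
def config():
    hidden_units = 4000   # every variable here is captured and stored per run
    percent_on = 0.025
    lr = 0.1

@ex.automain
def main(hidden_units, percent_on, lr, _run):
    # ...build the model and train across the ten tasks, then log stats:
    _run.log_scalar("task0_accuracy", 0.97)   # queryable later from MongoDB
```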
Okay, my code is versioned, you know, stored somewhere, so I know how to reproduce it; but it also depends on external files, and I want to keep track of that as well, if I want to reproduce it or not. Then I have a utils directory here with a few PyTorch and common utilities. And then, I mean, I tried to integrate Ray, but it was a bit tricky for continual learning in general, so I put that on hold, because I don't really need it now.
But yeah, for now I don't need it. So the main is very simple: a run-MNIST file, in which I can choose my configuration. It essentially loads all these configurations, then defines my model and then the optimizer, and here it uses my dataset. It is essentially all the code that matters for the model learning.
Actually, you know, Sacred is used for the stats, but essentially it is this part: with the iterator, as it's done here, I can go through all the training tasks very easily, and then I have a train function that is like a PyTorch utility, something I've written so that it's very easy to have a single main file that is very easy to comprehend. Okay, so this was it; then we can go on.
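Putting the pieces together, a sketch of the kind of single main loop being described, reusing `PermutedMNIST` and `sparse_mlp` from the earlier sketches; `train_task` is an illustrative stand-in for the PyTorch training utility mentioned.

```python
import torch
import torch.nn.functional as F

def train_task(model, loader, optimizer, epochs=2, device="cpu"):
    """Standard supervised training on one task's train set."""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()

stream = PermutedMNIST(n_tasks=10)                       # task iterator from above
optimizer = torch.optim.SGD(sparse_mlp.parameters(), lr=0.1,
                            momentum=0.9, nesterov=True)
for task_id, train_loader, test_loader in stream:
    train_task(sparse_mlp, train_loader, optimizer)
    # ...then evaluate on the test sets of all tasks seen so far
```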
Okay: what are the parameters I'm considering here? These parameters cover essentially the sparsification part, the model part, and the optimizer part. For the sparsification it's very easy: there's a parameter here which is going to tell how to create or construct the model, so it selects which architecture we are using and, essentially, whether we are going to flatten the images or not; and then we have a few parameters that you know well, which are the parameters for the k-winners.
And then for the model part, you know, I just have the parameters for the number of layers, the hidden units, and the dropout. For the optimizer we have a learning rate, whether we use Nesterov momentum as a kind of optimization, and we have the weight decay. The mini-batch size is actually always fixed, and then the number of epochs in this case is 2.
You can see that the sparsity is at the level that I asked for. But the thing is, it's not the average across all the possible units in the network; it's the average just on that volume, at the features level. So even though, for example, in this case there is a k-winners here and a k-winners there, I don't keep track of sparsity everywhere; I just track it at the features level.
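A sketch of how that "percentage on" could be measured at the features level: the mean fraction of nonzero units in the k-winners output over a dataset. Illustrative, building on the earlier model sketch.

```python
import torch

@torch.no_grad()
def percent_on(features, loader, device="cpu"):
    """Mean fraction of active (nonzero) units in the given feature module."""
    total, frac_sum = 0, 0.0
    for x, _ in loader:
        z = features(x.to(device))                  # activations after k-winners
        frac_sum += (z > 0).float().mean(dim=1).sum().item()
        total += z.shape[0]
    return frac_sum / total

# e.g. for the one-hidden-layer MLP above, the features are the first three
# stages (flatten, linear, k-winners):  percent_on(sparse_mlp[:3], test_loader)
```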
So for the first model we consider, these are the parameters I use: dropout is 0.5; just one hidden layer with 4K units; a learning rate of 0.1; momentum 0.9 with Nesterov; and the percentage of active units in this case is 2.5 percent.
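For reference, those hyperparameters gathered into one illustrative config, grouped as described earlier (the batch size is fixed in the talk but not clearly stated, so it is omitted here):

```python
config = {
    # sparsification
    "use_kwinners": True,
    "percent_on": 0.025,      # ~100 of the 4000 hidden units active
    # model
    "n_hidden_layers": 1,
    "hidden_units": 4000,
    "dropout": 0.5,
    # optimizer
    "lr": 0.1,
    "momentum": 0.9,
    "nesterov": True,
    "epochs": 2,
}
```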
So here you can see in this plot the accuracy over time. I did ten different tasks, so here on the x-axis are the tasks, and then the accuracy for each of those; the different lines with different colors identify different tasks. You can see here, for example, that after training on the train set of task zero, we have a very good performance on task zero. Sorry, the color is not great; that's task zero here, and you can see that this is going down
as we move through the training sequence. You can see that at first you cannot transfer anything to the next task. But when you move to the second task, task one goes up, as is normal, and task zero here goes a bit down; and this is going to be the same for all of them. So with this plot you can easily see the forgetting problem.
So here is the per-task accuracy at the end of training; maybe it's easier to see in this graph than in the previous one. At the end of the training, you can see that you have forgotten mostly about the first tasks in the sequence: you have more forgetting on task zero, a little less on task one, less on task two, and so on.
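One common way to quantify the forgetting visible in these bars is an accuracy matrix: acc[i][j] is the accuracy on task j measured after training on task i, and forgetting on a task is the drop from its best accuracy to its final one. A sketch, not necessarily the speaker's exact metric.

```python
def forgetting(acc):
    """acc[i][j] = accuracy on task j after training on task i (full matrix).
    Returns the per-task drop from best accuracy to final accuracy."""
    final = acc[-1]
    best = [max(row[j] for row in acc) for j in range(len(final))]
    return [b - f for b, f in zip(best, final)]

# e.g. two tasks: task 0 drops from 0.98 to 0.85 -> forgetting ~[0.13, 0.0]
print(forgetting([[0.98, 0.10], [0.85, 0.97]]))
```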
What we would like is for all these bars to be at the same level as the last one; that would mean not forgetting at all. I didn't put another baseline here, but you can show that this seems possible with these kinds of models: if you train on the union of all the possible training sets, you can show that you can reach that level, so it's not a capacity problem.
You can do that. And then here is something kind of nice: looking, at that particular level, at how the sparsity of the representation drops over time. Here you can see twenty different points, because we are looking at the epochs, and we have two epochs for each task. So you can see, for example, even in the first task, the sparsity level at the end... this starts, again, from the end of the first epoch.
So it means that when training started, the density was even higher: you start with a very high activation density, and this goes down as you move forward in training. And that actually happens also when you train on multiple tasks, so you can see this trend is common across multiple tasks, which is not crazy, I don't think; it's like a normal behavior.
This is related to the fact that some units are used very few times; that's maybe a note about that. Yeah, okay. So the threshold I'm using here for the k-winners is 2.5 percent, and here is the actual percentage-on at the end of the whole training sequence, 2.5 percent, which is actually about 100 units.
Okay then, moving faster to the second model. In this case I won't blow up everything again. For the MLP it's kind of the same here: we have two hidden layers, and then we have the first version, which is dense, and the second one with k-winners.
Okay, the CNN now. Here it is somehow different: we have a lower number of units; it just worked better this way than with 4K units here. So, without sparsification, of course, we don't have a very low percentage of active units: it's 52 percent, and sorry, this number is high here because we're looking at the volume after the convolutions in this case.
Yeah, this is the non-sparse one. And then for the sparsified version: yeah, we are enforcing its sparsity, and it's a little better in terms of accuracy, a few percentage points better, but you know, it's not just that it's a little sparser. What I noticed is that it's very important to enforce the sparsity level from the start, and that's really different from what we saw before, these kinds of sparsity levels that decay across time.
So I tried moving, little by little, the threshold for the k-winners from 2.5 percent up to, I don't know, fifteen, and it was still hurting performance; I had to go up to around sixty to get the same behavior. It means that this percentage is important somehow; it isn't a minor detail.
Yeah, and these are, I mean, again to be taken with a grain of salt, etc. But these are the results when you just remove that feature; and then, when you move down the table, we don't remove dropout plus momentum together: from the previous configuration we remove just momentum. So I didn't try all the possible configurations, just one change at a time. So dropout seems to me to
impact the performance a lot. This is just for the sparse version, but it would be also nice to add the results for the dense counterpart, because, you know, when we remove dropout, how does it affect the dense network? Is it as important there, or does it only affect the sparse network? In this case I just reported the sparsified version here, and it has a very important, I would say, negative difference in terms of accuracy. But dropout was already shown to be important for continual learning, so I don't know if it is really related to sparsity or just to continual learning in general: dropout can help, you know, in general not to overfit, and it can help to avoid forgetting in continual learning, as we've seen in the past. So yeah.
We should look into the dense counterpart to actually understand this.
Is it just helping the learning in general, or is it something more, maybe specific to this sparsified version or not? Then, for the weight decay here, we also have an impact on the accuracy, which is very interesting, actually, because what I noticed was that if I remove momentum and L2 regularization from the dense counterpart, I can get even better results; so we lose this advantage of sparsifying, essentially, with k-winners.
How do I explain this? I think, I mean, it's very difficult to understand why this is happening, but my idea is that weight decay actually decays the weights that are not really important for the current task. When a weight is not used by the current task, weight decay tends to, you know, erase it: it erases or changes weights that were very important for what you learned before, but are not important in the actual optimization of the current task.
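That intuition can be seen directly in the update rule for SGD with weight decay; a toy sketch with illustrative values:

```python
lr, wd = 0.1, 1e-4   # illustrative learning rate and decay coefficient

def sgd_weight_decay_step(w, grad):
    # even when grad == 0 (the weight is unused by the current task),
    # the decay term lr * wd * w keeps shrinking what past tasks stored there
    return w - lr * (grad + wd * w)
```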
And then for the momentum explanation: in this case, I think momentum is pushing hard on the current, you know, optimization of the specific task. So maybe that's not healthy in continual learning: we are moving even faster in the direction of the current gradient. So removing it maybe helps; you don't want to erase things related to data you don't see anymore.
Like, when you think of it, you have all these weights that are kind of each dedicated to something; like, this weight is dedicated to task number two, yeah, it's only there for task number two. When you're doing all these other tasks, training is just going to keep pummeling it, and momentum is going to compound that pummeling, and it just can't preserve those weights.
Powers combined, yeah. So that's another piece of future work, especially in that direction, but it is something we already considered. And then, something that I wanted to point out is that, in this case, you know, momentum and regularization hurt essentially when you're using k-winners, but they are needed most of the time when you tackle a lot more complex tasks. So the idea of just not using momentum and regularization at all, maybe it's not really realistic.
Okay, even in this configuration there were problems. And then I also thought that maybe the boosting and stuff, you know, could help if we scale these experiments. And then some open questions: can we scale these results to harder and ever more complex tasks? Another question: do we really need to sparsify the network from the start, or can we just, you know, care about the percentage-on at the end of the task?
Yeah, no, well, what I was thinking is that maybe we should allow, for example, in a very small time window, the network to be more free to explore things, to get the best representation via gradient descent, and then to reduce, you know, after a while, the activation density down to the sparsity level we care about, something that can be classified.
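That idea could look something like the following schedule, annealing the k-winners density over a small window; entirely hypothetical, using the KWinners sketch from earlier.

```python
def percent_on_schedule(step, warmup_steps=500, start=0.5, target=0.025):
    """Linearly anneal the k-winners density from `start` down to `target`."""
    if step >= warmup_steps:
        return target
    return start + (step / warmup_steps) * (target - start)

# e.g. before each batch:  kwinners_layer.percent_on = percent_on_schedule(step)
```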
This sparsity thing, where you mention it likes to start dense and become sparser: one thing we could connect it to is that if something is kind of a more novel stimulus, if it's unlike what you've seen before, then it might somehow be allowed to be more dense, and then later it becomes sparse, once you already know it.