Description
We have a visiting scholar Theivendiram Pranavan from National University of Singapore who'll be talking about his work on unsupervised continuous machine learning.
A
So for the moment, we have read through the literature and finalized a few principles. Visual learning is continuous, so I feel there is no separate training or testing phase; we have to learn continuously, and we have to deal with both the labeled data and the unlabeled data. So how are we going to deal with that? And it favors recent information: suppose you read something yesterday versus something three years back, there is a difference in recency.
A
Okay, so you can see these pictures. The first one is an illusion: there is an old lady and a young lady, but at any one time you can see only one of them, either the old lady or the young lady. If you take these two figures, we are giving equal semantics, equal preference, to the two interpretations. But if you look at the second diagram, it is kind of some scratches, but if you look closely you can see a dog in the middle. So when...
A
We should be learning instant recognition, and we usually jump to conclusions; we don't do the whole forward-propagation thing in the brain. For example, if I have a dog in my house and it has a bell on its neck, and I hear a sound in the corridor, I'll be expecting the dog. If it's not coming...
A
In a machine learning problem, what we usually do is take some data and divide it into training and testing. In some cases we divide the training data further into a validation set and the actual training data. Then we train a classifier and evaluate on the testing set, but that is not going to work all the time. We have to do some learning with all the data that we have: we are not going to get annotated data all the time, so there is a lot of unlabeled data in there.
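The standard split the speaker contrasts against can be sketched as follows; the function name and the 70/10/20 default ratios are illustrative assumptions, not from the talk.

```python
import random

def train_val_test_split(data, val_frac=0.1, test_frac=0.2, seed=0):
    """Shuffle a dataset and partition it into train/validation/test lists.

    The default 70/10/20 ratios are arbitrary illustrative choices.
    """
    items = list(data)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the example
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test
```

The point of the talk is precisely that this one-shot partition is unavailable in a continuous-learning setting, where data arrives as a stream with only occasional labels.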
A
So somebody says "this is a car", and from this information the learner is able to update its learning by labeling all the previous cars it has seen. Maybe it will keep these cars in memory for some time; if it doesn't get any feedback, it will just delete them from memory. So how are we going to handle this delayed feedback? In a neural network setting we tried one implementation, which I am going to explain.
C
What I was talking about is the nature of specializations. What allows you to get specializations is that your area, your space of objects, is naturally separated, not overlapping. And so it seems to me the way we do this is that even if no one gives me a label, I am actually able to do this self-classification, and very, very well. I don't need a label to do that; not all the time, but most of the time, before someone will say...
A
So for the evaluation, what we are going to do is keep a bounded queue with a fixed size, and we are going to store all these inputs and outputs as pairs in the bounded queue. So you put in an image and you get the probability vector, and we are going to store these (x_i, y_i) pairs in the queue.
A
Not the actual label: these are model outputs. You just pass x and you get a probability vector, and we are going to store it in the bounded queue. If the queue is full, we are going to update the model even without the actual label. If the queue is not full, we just get the next x and add it to the queue. So if the queue is full, how are we going to update the model?
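A minimal sketch of the bounded queue just described, assuming a hypothetical `model` callable that maps an input to a probability vector and a stand-in `update_fn` for the speaker's label-free update step; both names are illustrative assumptions.

```python
from collections import deque

class BoundedQueueLearner:
    """Store (x, y_hat) pairs; trigger a label-free update when the queue fills.

    `model` and `update_fn` are stand-ins for the speaker's CNN and
    self-supervised update rule, assumed here for illustration.
    """
    def __init__(self, model, update_fn, maxsize=4):
        self.model = model
        self.update_fn = update_fn
        self.queue = deque()
        self.maxsize = maxsize

    def observe(self, x):
        y_hat = self.model(x)            # probability vector, no true label
        self.queue.append((x, y_hat))
        if len(self.queue) >= self.maxsize:
            self.update_fn(list(self.queue))  # update even without labels
            self.queue.clear()                # samples are then discarded
        return y_hat
```

Clearing the queue after the update matches the later exchange in the talk, where matched samples are discarded once they have been used for learning.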
A
So from this x_0, using some content-based image retrieval, we can create some features: from x_0 we create x_0'. And you have a queue of images, so for all these images you can do the same thing. Now you have all these features, so you can find the similarity between x_0' and all the images in the queue.
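One way to realize this x_0 → x_0' feature step, using the color histogram the speaker mentions later; the bin count and the choice of cosine similarity are my assumptions, not details given in the talk.

```python
def color_histogram(pixels, bins=4):
    """Flattened per-channel histogram of an iterable of (r, g, b) pixels,
    channel values in 0..255. A crude stand-in for a CBIR feature vector."""
    hist = [0] * (3 * bins)
    for px in pixels:
        for c, v in enumerate(px):
            hist[c * bins + min(v * bins // 256, bins - 1)] += 1
    total = max(sum(hist), 1)
    return [h / total for h in hist]   # normalize so image size cancels out

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def rank_queue(x0_prime, queue_features):
    """Similarity of x_0' against every feature already in the queue."""
    return [cosine_sim(x0_prime, f) for f in queue_features]
```

With these similarities in hand, queue entries whose features are close to x_0' can be treated as candidate matches for the update step described next.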
C
Seems like the goal, in the end, is that the system is given an image and you cluster it correctly: what does this belong to? But here you are doing a very crude method of it, so it's likely to be wrong. There's no training involved yet, right? If I show you two pictures of cars, unless they're really similar in some basic ways, it's not going to work, right? Yeah.
A
And for the distance in the input space, since we are using smaller images, we are using a color histogram; we could go for richer features as well. So what we are doing is: if the distance between y_0 and y_i is close, as well as the distance between x_0' and x_i' being close, we are going to update with a very small learning rate, scaled down by a, which is an integer greater than 1. If it does not classify correctly, in that case we are penalizing with a higher learning rate, scaled up by b; so we just multiply everything by that factor. Actually, these a and b can be learned. I just used some arbitrary values like 5 or 4, but a and b can be learnt. For the other two cases we did not update.
A
Actually, even for the auxiliary information, we don't have to give only the name. Maybe we can give a batch of images, so there is additional information that we are giving as feedback. The problem will occur if you are using a single image; if you can use a batch of images, somehow we can...
A
Rather than learning everything, actually, it's learning some aspects of color and shape, and the...
A
On CIFAR-10, actually, we used a very small CNN with about 60,000 parameters, and the baseline accuracy was around 65%. With various queue sizes we evaluated the results. It is always possible that the accuracy may go down, moving in the wrong direction, but in the cases we tested, the accuracy is increasing with the queue size. So if you are...
A
So our goal is this: if you take the current neural network models, the memory size and number of parameters are very large, and they are getting high accuracy. Maybe down the timeline we can create models with fewer parameters and accuracy that is close, even if it might be slightly less...
F
I still have a question on the queue. Okay, so say I see a blue car with a spot, and the first one I see is a blue car with a spot but I don't have the label, and the fourth one I see is a blue car with a spot. So it's going to match, right? So I'm going to use all those three samples, unlabeled, to learn, and then I'm going to discard those three samples, and my queue is going to keep updating, right? Yes.
A
In your book, you always talk about how, without dealing with time, in all these computational machine learning problems we are just dealing with images, right? I read your book, On Intelligence. You always say that without dealing with the temporal thing, we are not going to build intelligent machines, right? Yes.
C
Yeah, that work in sort of image classification is very, very recent; it's always been that this is a subset of inference. So my first comment here is: where we've started, thinking about how the brain builds its models, Marcus is talking about redoing it, yeah. That's what Marcus talked about, where spatial inference is sort of a corner case of the general perception-motor problem.