From YouTube: DevoWorm Summer of Code weekly meeting, 6-28
Description
Meeting for Week 8. Vinay Varma presents the paper "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation".
B: So today I'm going to be presenting a paper called "Super SloMo". This paper was published at CVPR 2018; CVPR is among the most prestigious and biggest conferences for computer vision and pattern recognition related tasks. It was presented by NVIDIA; the research was done by NVIDIA along with people from some universities, and these are the people who have written it. Let's go to the next slide. So these are the contents I'll be discussing: firstly, I'll be talking about the aim of the paper; then their results, how the model performs compared to previous existing models; and finally, most importantly, how this can be applied and how it will be useful. So let's jump into the aim of the paper. First, if you can see the demo, a few notes.
B: What we can do is take a standard, already recorded video and feed that video into this model. The model splits the video into frames and can generate intermediate frames between the original frames. A video is nothing but a collection of frames, which are nothing but images. So how can we increase the number of frames in the video? If we increase the number of frames, we can see the video in slow motion; that's the idea of this paper. In the paper they describe the input and output like this: if two input images are given, the model should be able to generate multiple intermediate frames between those two input images. I think we can understand this better visually.
B: I'll be presenting some pictures for that, but before that I want to discuss the previous existing work on this problem. There has been some research put into this problem, but most of it is only for single-frame interpolation. That means that between two key frames of a video, we can generate only one intermediate frame; most of the previous approaches focused on that.
B: But with the approach that this paper discusses, we can generate any number of frames between any two successive frames of the original video. There are actually a few methods capable of multi-frame interpolation, but they are computationally very expensive; they involve deep mathematics and higher-level integrations. That's a drawback. And there's a possibility you might think that, since single-frame interpolation already exists,
B: why can't we apply it recursively? If we input the original video and run single-frame interpolation on it, we get a slower video; if we plug that again into the model, we get an even slower video, and if we keep repeating the process, we get a much slower video. You can do that, but there are some limitations, which inspired these researchers to pursue research on this.
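The recursive doubling idea can be sketched as follows. This is a hypothetical illustration, not code from the paper: `midpoint` stands in for any single-frame interpolation model (here it is faked with a simple average of the two frames).

```python
# Sketch of recursively applying a single-frame interpolator to slow a video down.
# `midpoint` is a hypothetical stand-in for a model that predicts one frame
# between two given frames; here it is faked with a simple per-pixel average.

def midpoint(frame_a, frame_b):
    """Placeholder single-frame interpolator: blend of the two inputs."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def interpolate_once(frames):
    """Insert one predicted frame between every pair of consecutive frames."""
    out = [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        out.append(midpoint(prev, nxt))
        out.append(nxt)
    return out

def slow_down(frames, passes):
    """Apply the interpolator recursively; each pass roughly doubles frame count."""
    for _ in range(passes):
        frames = interpolate_once(frames)
    return frames

video = [[0.0], [8.0]]           # two 1-pixel "frames"
print(len(slow_down(video, 3)))  # 2 frames -> 3 -> 5 -> 9
```

Note that this scheme can only place new frames at halfway points (t = 1/2, then 1/4 and 3/4, and so on), so it cannot hit an arbitrary time t directly, which is part of what limits the recursive approach.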
B: So, the problems with recursively applying single-frame interpolation: the results suffer from artifacts at the motion boundaries; details slightly fade away or get stuck at a point. I'll be showing you all these things visually. And there's one paper that I think needs a special mention, which tried to generate intermediate frames using light-field video. This Super SloMo paper is actually an unsupervised approach, because there are no labels; the model has nothing except the input frames.
B: What I mean is that some people captured a process with a normal camera and with a high-frame-rate camera, and they tried to build a model which could learn from that high-frame-rate camera, but that also didn't go well. So we arrive at this paper now. The approach of the paper is that they have two convolutional neural networks, both of them having a U-Net architecture, and one of them computes the bidirectional optical flow.
B: So suppose this is how it looks: if we have images like this, this one is at the 0th second and this one is at the first second. The model tries to generate an intermediate image here, which is between this and this. How it does that is the main thing here.
B: It's nothing but this: for each pixel of this image, it will be calculating the corresponding value of the pixel in those two input images. Actually, optical flow is defined in a way that, for a pixel in an image, as the video keeps going, there will be a vector. Suppose in this image there is a pixel here; after some time, that pixel is now somewhere here.
B
From
here
to
here
like
that,
for
each
and
every
pixel,
very
there
will
be
some
flow.
So
when
the
video
keeps
on
keeps
on
moving,
all
these
pixels
will
be
moving
to
a
new
place
of
some
pixels
will
be
added
in
sub.
Pixels
would
be
remade
like
that,
so
so
the
first
to
CNN
calculates
that
accurately
and
the
reason
why
they
called
binational
is
that
because
it
takes
this
is
a
pixel.
It
takes
the
flow
so
difference
between
the
corresponding
pixel
of
this
input
image
or
in
that
input
image.
B
That's
why
it's
called
bi-directional
flow
computation
so
that
the
first
say
I
mean
actually
does
that,
and
why
is
that
useful?
Are
they
telling
you
so,
let's
get
into
some
mathematics
behind
it,
get
the
basic
things
like
how
this
is
working,
if
you
think,
if
you
can
denote
f
of
T
from
0
P
flow
from
PP
0,
like
from
this
image
to
this
image
and
ft2
1
from
this
image
to
be
so
much
like
the
flow
from
this
image
to
this
image.
So
this
is
the
formula
for
getting
the
hospital.
B
Here
gee,
they
have
defined
G
as
a
warping
function,
which
is
basically
used
for
used
for
fitting
fitting
this
thing
that,
in
the
input
image
is
so
so
nd
nearly
fusing
them
so
that
there
will
be
not
the
only
match
difference
from
the
input
images
will
be
exporting
explaining
about
that
mapping
function
also,
and
then
also
introduced.
The
new
where
we
called
alpha
so
the
intuition
behind
alpha
is
that
for
suppose
it
controls
the
continuity
of
the
two
input
images
into
the
intermediate
village.
I
suppose
I
want
to
predict
an
image
at
time.
B
T
is
equal
to
0.1.
Second,
then,
the
common
sense.
We
can
say
that
all
this
pixels
from
equal
to
0.1,
second,
will
be
will
be
from
this
image
like,
like
from
a
is
equal
to
zero.
If
we
went
credit
from
0.9
at
time,
step
T
is
equal
to
0.9,
then
most
of
the
photo
most
of
the
pixels
will
be
cultivating
from
this
image.
So
alpha
alpha
takes
care
of
that
intuition
and
then
doubts
that.
B: So we got the output from the first CNN; now we need to apply some visibility maps so that this output is refined, and we can make sure that it's not blurry or repeated or anything of that kind. Basically, the visibility map V_t→0(p) is defined in a way that, for each pixel, there will be a probability.
B
Yeah
for
it
except
P,
this
function
defines
the
probability
in
which
lies
between
0
to
1,
and
it
denotes
whether
the
pixel
is
occluded
or
not
like
the
model
tries
to
predict
a
pixel
in
the
current
image.
Will
that
be
accorded
in
the
next
image
or
not
so
as
a
complete?
Zero
tells
us
that
the
model
is
predicting
that
the
pixel
will
be
uploaded
and
that
cannot
be
contributing
to
the
computation
of
intermediate
image,
so
that
is
how
the
visibility
maps
are
created,
and
this
is
the
function
that
they
have
defined
here.
B: There is some deep mathematics here, to be honest, and I forgot to mention this: this symbol is nothing but an element-wise multiplication between two matrices. All of these terms, the input images, the visibility maps, and the flows, are matrices of the same size as the input image itself, so they use element-wise multiplication to get the output image at this step, where they get the final output. So this is the mathematics behind that.
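Under my reading of the description above, the element-wise fusion step can be sketched with NumPy as follows; the variable names and the normalization are assumptions based on the talk (warped images, visibility maps, time weights), not the paper's verbatim code.

```python
import numpy as np

# Sketch of fusing two warped input images with visibility maps.
# g0, g1 : the two input images warped toward time t (same shape as the inputs)
# v0, v1 : per-pixel visibility maps in [0, 1] (1 = visible, 0 = occluded)
# Every operation is element-wise, since all arrays share the image's shape.

def fuse(g0, g1, v0, v1, t):
    w0 = (1.0 - t) * v0          # contribution of the frame at time 0
    w1 = t * v1                  # contribution of the frame at time 1
    z = w0 + w1 + 1e-8           # normalizer so weights sum to 1 per pixel
    return (w0 * g0 + w1 * g1) / z

g0 = np.full((2, 2), 10.0)       # toy 2x2 "warped" images
g1 = np.full((2, 2), 20.0)
v0 = np.ones((2, 2))
v1 = np.ones((2, 2))
v1[0, 0] = 0.0                   # pretend this pixel is occluded in frame 1

out = fuse(g0, g1, v0, v1, t=0.5)
print(out[0, 0])                 # occluded pixel comes entirely from g0 (~10.0)
print(out[1, 1])                 # fully visible pixel is the blend (~15.0)
```

The occluded pixel illustrates the point made above: a visibility of zero stops that frame from contributing to the intermediate image at that location.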
B: The main thing is that, as I said for the first figure, we will be taking bidirectional flow from the intermediate image to both of the input images. But the question is, how do we get this intermediate image in the first place? Once we get an intermediate image, we can keep refining it in the training process, but how do we actually settle on an initial intermediate image? Because we don't know what the image will look like at t equal to some small t.
B: So this is the pipeline of the model. These are the two convolutional U-Net neural networks that I spoke about; this is the U-shaped architecture here. First we get the input images, and we push them into the first convolutional neural network. Based on the first two images it will compute this value and also this value, which is the flow from the 0th
B
The
image
to
first
image
will
be
getting
this
value,
and
likewise,
for
this,
so
will
be
getting
the
blue
elephant
teeth,
image
to
0th
image
and
also
the
tape
image
to
first
image.
This
is
this
is
what
the
combination
here
mainly
does.
So
after
this,
after
the
input
images,
past
Venus
will
be
getting
bi-directional
flow
for
the
for
the
2
input
images
and
then
here
the
visibility
maps
to
come
into
play
here
here
after
applying
this
formula.
B
This
formula
on
the
on
the
input,
image
and
sending
it
to
this
combination
is
very
at
work
in
the
gate
and
refined
flow
and
also
the
visibility
maps.
They
combine
these
two
to
get
the
actual
output
prediction
and
also
all
the
parameters
that
are
included
in
these
in
these
models
are
time
independent.
So
we
can
generate
any
number
of
intermediate
frames
that
that
we
will
be
in
between
these
two
images
in
between
lane
of
images.
B
So
this
is
the
illustration
that
they
have
given
I
think
all
these
things
starts
making
sense.
No,
these
are
the
team
put
in
a
this
and
they're
predicting
intermediate
image.
This
is
the
this
is
the
alleged
to
zero
the
second-
and
this
is
the
immediate
first,
a
second,
and
they
took
small
T
at
0.5,
which
is
the
which
is
so
the
water
has
to
predict
an
intermediate
image
at
0.5.
Second,
so
this
is
how
the
optical
flow
has
been
derived
from
the
first
combination
and
these
two
output
input.
B: What this provides is the visibility maps here. Whenever there is motion between the two input images, the visibility maps highlight it like this; that indicates motion, and the network knows whether or not to worry about that region, so we don't get any distortions or occlusion artifacts there. The model learns all of that from the training process. So let's look at each of these images in a detailed way.
B
So
there's
a
these
are
the
two
input
images.
This
is
I
0
and
this
is
I
1
and
we
took
we
took
a
intermediate
second
at
0.5,
and
this
is
the
flow
from
0.5
to
second
image
to
this
or
this
0th
image,
and
this
is
the
flow
from
0.5
to
second
image
at
the
first
a
second.
So
these
are
the
outputs
from
the
first
combination
here
that
we
get
and
after
that
I
and
also
unanswered,
is
also.
These
wanting
also
happens
happens
in
the
first
continuation
area.
B
Focuses
so
if
you
are
understood,
if
you're
not
able
to
understand
the
term
Bach
Bach
is
nothing,
but
it's
like
distorting
distorting
an
image
into
a
shape
that
we
need.
So
that's
miss
the
definition
of
a
thing,
and
this
is
the
visibility
maps
that
you
can
see
yeah
and
they
applied
this
mystery
of
is
between
a
maps
to
the
intermediate
flow
images
and
then
gotten
the
results
like
this
yeah.
B: The player is moving his hands like that, so the motion got stuck over there in this intermediate image; but after applying these visibility maps, the network got to understand that this should not be stuck there and should be moved like this, so it adjusts the pixel flow in such a way that the loss will be minimal. I'll be discussing the losses as well. This is how they trained the model: the final loss function is a combination of four losses.
B: The reconstruction loss measures how close the prediction is to the input images. The perceptual loss is used to avoid blurry images, because you might end up with blurry images even with the visibility maps; so when we apply the second convolutional neural network, this function helps us get rid of blurry images and other distortions. There's also a warping loss function they use to compute the loss.
B
It's
actually
it's
basically
to
model
the
quality
of
the
optical
flow,
which
is
computed
in
the
first
formulation
neural
network,
and
this
is
nothing
less-
that
they've
added.
It
is
to
encourage
the
neighboring
pixels
to
have
a
similar
flow
values
like
if
the
megapixels
of
a
moving
object
are
also
having
similar
values
say
like
similar
moments,
then
there'd
be
less
chance
of
having
discussions
in
the
third
eye,
constructed
intermediate
image
between
two
versions
of
images.
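The four-term objective described above can be sketched as a weighted sum. The term names follow the talk (reconstruction, perceptual, warping, smoothness), but the individual loss bodies and weights below are simplified placeholders for illustration, not the paper's implementation.

```python
import numpy as np

# Sketch of combining four loss terms into one training objective.
# pred, truth: predicted and ground-truth intermediate frames
# flow: a toy flow field used by the smoothness term

def l1(a, b):
    return float(np.mean(np.abs(a - b)))

def total_loss(pred, truth, flow, w=(1.0, 1.0, 1.0, 1.0)):
    reconstruction = l1(pred, truth)                    # closeness to ground truth
    perceptual = l1(pred ** 2, truth ** 2)              # placeholder for a feature-space loss
    warping = l1(pred, truth)                           # placeholder: quality of warped frames
    smoothness = float(np.mean(np.abs(np.diff(flow))))  # neighbors should have similar flow
    terms = (reconstruction, perceptual, warping, smoothness)
    return sum(wi * ti for wi, ti in zip(w, terms))

pred = np.zeros((2, 2))
truth = np.ones((2, 2))
flow = np.array([[1.0, 1.0], [1.0, 3.0]])
print(total_loss(pred, truth, flow))
```

The point of the sketch is only the structure: each term penalizes a different failure mode, and the weights trade them off during training.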
B: These are the results from the experiments that they have conducted. Here you can see the two input images, and this is the actual ground truth, the actual intermediate image; these other images are the outputs of previous existing methods, and this is the output of the current research paper. You can see that none of the previous papers has the concept of visibility maps, so sometimes,
B: when the brush is in motion, it tends to get blurred even in the reconstructed image, and sometimes it gets chipped off like this, a small part chipping out. But the results produced by this paper are pretty accurate. They've given an example comparing all the outputs of the previous models. And regarding the training process,
B: they evaluated the approaches at each time step of the video; there are some approaches which come close, but by far this is the best in terms of generating high-quality super slow motion videos. So, to give a small recap, these are the key points in the paper: they use two U-Net architectures, two convolutional neural networks, and one of them computes
B: the bidirectional optical flow between the two input images, and the other U-Net is used to refine the approximated flow, the flow that we get from the first network, and the visibility maps are generated. These visibility maps are combined with the output of the first convolutional U-Net, and then basically we get the output. The data that they've used is about 1,130 videos at 240 frames per second. This is an unsupervised approach, so there is no requirement for any labels.
B: You don't need a high-speed camera to capture it; we can take a video of a process with our normal camera, plug that into this model, and it produces a super slow motion version of that original video. Or suppose you want to generate data from a video, like the images we worked with earlier; then you can generate more intermediate frames between the frames of the video.
B: Actually, F_t→0 is basically the flow from the image at time step t to the image at time 0. So it's kind of a linear interpolation. To my understanding, it's not only linear, actually; it can be polynomial as well, because we can generate so many intermediate images in between.
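To my recollection of the Super SloMo paper, the intermediate flows at time t are approximated from the two bidirectional flows with quadratic-in-t weights; the sketch below uses that approximation, and the exact weights should be treated as an assumption rather than something stated in this talk.

```python
import numpy as np

# Sketch of approximating the intermediate flows at time t from the
# bidirectional flows F_0to1 and F_1to0, assuming locally smooth motion.
# The quadratic-in-t weights follow my reading of the Super SloMo paper.

def approx_intermediate_flows(f_01, f_10, t):
    f_t0 = -(1 - t) * t * f_01 + t * t * f_10        # flow from time t back to frame 0
    f_t1 = (1 - t) ** 2 * f_01 - t * (1 - t) * f_10  # flow from time t forward to frame 1
    return f_t0, f_t1

# Toy example: a pixel moving +10 px from frame 0 to frame 1 (and -10 back).
f_01 = np.array([10.0])
f_10 = np.array([-10.0])

f_t0, f_t1 = approx_intermediate_flows(f_01, f_10, t=0.5)
print(f_t0)  # midpoint looks back half the displacement: [-5.]
print(f_t1)  # and forward the other half: [5.]
```

For constant-velocity motion this reduces to exactly the linear interpolation mentioned above; the second network then refines this initial guess.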
A: Yeah, I guess I had a question about that too. I mean, if you had a very simple type of motion in the image, like a motorcycle moving against a background at a constant speed, you could probably do a linear interpolation for that; but what if it were a very complex form of motion, for example a chemical flow, where you would have...
A: Well, you know, they have super high frame rate cameras now, where you can sample at like a million frames per second, and they're expensive. But if you were in a situation where you wanted to, I mean, could you train it on something that was sampled at a higher rate, so you'd have basically the same video but sampled at a lower rate and at a higher rate, and then compare the two?
B: It gives some satisfactory results, but the drawback that they mention is that sometimes it gets too smooth. When an object is moving extremely fast and you use this method, they observed that it gets a bit blurry at that moment on that object.
B: Yeah, I think we should be somewhat cautious about the mathematics behind it, because in the paper they discuss some high-level mathematics which might be intimidating for us, or not so much intimidating, but it might take us some time to implement it. Actually, the good thing is that some people have already implemented this and posted it on GitHub. So maybe we can take that, try to understand the code and the paper side by side, and see what they're doing.
A: I mean, just implementing mathematics in code is always a challenge too. Yes, okay, so I had a question. I think this paper focuses on interpolation of the frames; a camera which has a very high resolution works on capturing more and more frames, and in some quick videos it becomes blurry because of loss of information in RGB space, yeah.
B: Yeah, like last week I mentioned the issue with Deeplearning4j, right? About using some custom layers inside DL4J; I mentioned that I'd gotten past that error. So what happened this week is that, after getting past that error, I fell into another error which seemed like a dead end to me.
B: After trying to figure it out myself, I found there's actually a community for Deeplearning4j, so I went there, explained my code, implemented a toy example for them to reproduce that error, and asked my question. They said that there's currently no workaround for my case. So probably that is something I have to discuss with you.
B: What actually happened is that in my model, the DL4J version of what I implemented in Python, I used a custom convolutional layer, which is like the heart of that model and which generated the output that I showed last week. In the documentation they mention that Lambda layers are possible. Let me summarize what they do: whenever we implement a custom layer, it can be wrapped.
B: So whenever we have to implement some custom layers, if we wrap them with a Lambda layer, then that behaves like an inbuilt layer; in their documentation they mention that this is possible. So I went ahead with that method, but in their examples they only mention very simple computations, like subtraction or addition operations.
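For reference, wrapping a custom computation in a Keras `Lambda` layer looks roughly like this. The doubling function is a hypothetical stand-in for the custom computation, and whether a given `Lambda` body survives import into DL4J is exactly the open question discussed above.

```python
# Minimal sketch of wrapping a custom computation in a Keras Lambda layer,
# so it behaves like a built-in layer inside a model.
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(3,))
# Hypothetical custom computation wrapped as a Lambda layer:
outputs = keras.layers.Lambda(lambda x: x * 2.0)(inputs)
model = keras.Model(inputs, outputs)

out = model.predict(np.array([[1.0, 2.0, 3.0]]), verbose=0)
print(out)  # [[2. 4. 6.]]
```

Simple element-wise expressions like this are the kind of `Lambda` body the examples show; anything more involved, like a custom convolution, is where the import trouble starts.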
B: I have a few ideas for that. Basically, I have the whole model in Keras, and I implemented the custom layer in TensorFlow, because I needed some low-level computations for it, so I went for TensorFlow. So I'll try to see if I can convert this Keras model into a TensorFlow model and then import that TensorFlow model into Deeplearning4j, because they have a pretty big pipeline for importing TensorFlow models.
B
So
that
is
one
one
option
that
I
had
now
and
the
other
option
is
to
change
my
implementation
itself,
because
I
need
to
remove
that
compare
custom
layer
and
try
to
try
to
play
with
the
parameters
that
come
that
come
with
the
inbuilt
layers
of
cara's
and
basically
I
have
to
see
if
I
can
get
any
results
from
that.
Those
are
the
two
options
that
I
can
see
for
the
next
week.
Okay,.
A: Approach-wise, I think that's what we're talking about with phase one. So you submitted a pull request for your phase one code, and I accepted that right before the meeting. So that's in progress; maybe that's done, number ten, and we can revisit that. You said you were going to revisit it after you finished phases two and three. So, yes.
B: For the data augmentation studies, I also pushed the code for that last week. These are basically the standard augmentation things: I did resizing of images, shifting left, shifting right, shifting up, shifting down, and histogram equalization. So basically I applied all the basic augmentation steps, and then I'll apply the model. I think there is no issue with that strategy. Okay.
B: You can go through that once. So I'm just waiting for this dependency issue to be solved; once I get it solved, I can just copy the dependencies and port this into the pipeline. I think the phase one item will depend on that. Alright, it's kind of an end-to-end implementation for Deeplearning4j.
B: Like the question about the conference, right, yeah, in Slack. Actually, I thought of applying to that, but I mentioned that I had some issues with it, and also this Deeplearning4j issue, so I thought of not going for it, because I may not have the time. There will be other conferences later, right, so I can search for a conference where I can speak about the project once I get some free time, and present it.
B: Only the limitation thing: if I can get past the implementation issue, then I think I'd be able to complete this unsupervised approach, and the semi-supervised approach will be much faster once I get this pipeline working, because it will be much the same for the semi-supervised approach as well. Of course, the implementation in Deeplearning4j and in Python will be different, but converting it into Deeplearning4j and into the Java application will be the same.
A: So let me put the link to the repo in the chat once again. If you follow that link, it'll bring you to the repo, and we'll go to the Projects tab; in Projects we will click on the board. There's one link in that area, and here's our board, and we'll just go through the in-progress and to-do issues again.
A: We'll actually put that in progress, since it sounds like you're going to do that this week and the coming week, so we'll put that there. Now, model evaluation: it sounded like that was linked to some other things, so we'll leave that aside for now. Schedule and present papers: one paper was presented today and another will be presented next week, so we'll put that in done for now. And then the mathematical model development, number 27.
D: So, yeah, because obviously the model will not perform perfectly; we have different cells that behave differently. There will be different types of segmented cell images, and we will get different numbers of cells for different frames. So it's not possible to track this movement simply, just after getting a segmented image, in a way that will track the movement of each cell. So I guess...
A: But it wasn't like... it's something that doesn't seem to be central to what we're doing. So, welcome to the process again. One of the things we do here is that everyone at the beginning did a presentation based on their GSoC proposals; everyone here at least applied to a GSoC project, and they presented their proposal to the group. You didn't
A: do a GSoC proposal, so that doesn't make sense in your case; but what you could do, as others have done, is pick a paper, you know, a paper from, I guess in your case, the machine learning literature, from arXiv or somewhere, or from a conference, and do a presentation like what we saw today from Vinay; kind of review something that really interests you.
A: And then the reviews of that paper; I just got that paper yesterday. It was sent to me, so I can share it with you guys on Slack, as long as you keep it confidential; I don't think you're going to have too much of a problem there, as I'll be sharing it with other scientists as well. Just look at it on Slack; I'll private-message you the paper, and it might help with items 31 and 36.
A: So we'll put that in progress as well. Template methods: again, that's kind of linked to that, so we'll put that in progress. And then "model features within in-between frames": that's sort of related to the mathematical modeling. Actually, I might move mathematical model development back into to-do, so that we don't... you know, we have enough stuff going on that we don't want to have something in there
A: that we don't know what it is or aren't working on right now. And then I think the last one is video resources, and that's something where I actually did get an email from Thomas Harbach; if you look at the readme in the repo, you'll see there's a link to a lot of the stuff he's done with the diatoms. So this isn't just limited to one area.