From YouTube: IETF110-NWCRG-20210311-1430
Description
NWCRG meeting session at IETF110
2021/03/11 1430
https://datatracker.ietf.org/meeting/110/proceedings/
B: Okay, I guess we can start. Yes, hello! Welcome everyone. This is the Coding for Efficient Network Communication Research Group. Next slide.
B: Oh, and well, I'm Marie-José Montpetit, and my co-chair Vincent is there too. So our goals, which have not changed in a long time, are to foster research in network and application-layer coding, with the goal of improving network performance.
B: So through the years we looked at codes, coding libraries, the protocols to use these codes, and a lot of real-world use cases, and there's work in progress, especially in congestion control, for example. Next slide.
B: So IPR has always been a problem with this group. We remind everybody that for everything that is presented here, if you have IPR, it should be disclosed, and the group should be aware of it. Next slide.
B: There's also a code of conduct. Actually, there were issues this week at the IETF about people who had not read it and instead were bullying people, and that's not acceptable. So, next slide. The goal of the IRTF, I think, is also extremely important, and this is not a warning but advice to everyone presenting.
B: We are not a standardization body; we're not there to standardize anything. We are there to foster research, to present research, to essentially create research communities around network coding, and not just to present a bunch of solutions for standardization, which is not the role. So please, when you present, highlight what the role of the research is and what the community will gain from it. Next slide.
B: So, great, we're online and it's recorded, so we can do the minutes later. We didn't have anyone to take minutes, but I guess we can do it from the recording.
B: It's also great that the blue sheet is now automatically generated. So don't worry, we know you're there. Well, we know you're there; Big Brother is watching you, I guess. And you can actually press the raise-hand button if you want to talk and if you want to ask questions. I would say another great way of asking questions is putting the question in the chat, and both my co-chair and I will monitor the chat to make sure that we know you're in line for a question. Next slide.
B: So, our agenda: there's me now, and hopefully it won't be ten minutes. Then we're going to have the presentation on BATS, which is a coding scheme; then, obviously, the ongoing research on coding and congestion control, which is also done in close collaboration with our friends in the congestion control research group; and more from Morten about latency and reliability for block and sliding window codes.
B: So we have a new RFC since last time; congratulations to every author. There's the one that I should have been more forceful on, but I've been busy with other things: we are actually pushing the network coding for CCN/ICN document. We have BATS, which is kind of ready to move on to becoming more than an RG document and going to last call, and the coding for congestion control draft is also very much progressing; there are going to be presentations on those two drafts today.
B: We had decided to put it on hold a little bit while QUIC was deciding what they were going to do, and we have to decide if we want to push that to be a QUIC draft or what we want to do with it. And then there's Tetrys.
B: The Tetrys group has actually told us that they want to continue the document and bring it to a level where it could be pushed forward to the ISE. Next slide.
B: So our plan is to meet at IETF 111, and we will probably also meet at an interim to make sure we do that. We want to close the group.
B: I think we've achieved everything we wanted to do, and since the research community is not as large as it was, and I think in terms of our milestones we met everything, we plan to close the group at IETF 111.
A: Yeah, I will share your slides now. Yes, that's the right button.
D: Hello everybody, and good morning, good afternoon; it's nighttime in Hong Kong. So I'm going to give you an update on the deployment of BATS codes. Next slide, please.
D: Okay, so just a very quick, brief introduction: why do we care about BATS? Because multi-hop wireless communication is a long-standing problem. If data packets are treated as commodities, then as you go through different hops the packet loss keeps accumulating, and that's why, usually after a few hops, it's very hard to transmit, in particular video, very smoothly.
D: In the industry it is sometimes referred to as the multi-hop curse. By doing recoding at the intermediate nodes, BATS, which is an efficient implementation of network coding, can sustain transmission over tens or even hundreds of hops without relying on link-by-link retransmission, which, by the way, is very bad for video transmission, as many of us know. Next slide, please. Okay, here's a monograph that I published with Shenghao back in 2017.
D: Next slide, please. Okay, here's a quick performance comparison. Assuming that there's a 20% packet loss on each link, the red curve is the performance, I mean the throughput, of any routing-based protocol; it could be TCP or an end-to-end fountain code. The x-axis is the number of hops and the y-axis is the throughput, and we can see that as the number of hops increases, the throughput decreases exponentially fast, because at every hop you multiply by 0.8. Whereas for the BATS code, the throughput sustains very well: after 50 hops the throughput can still be around 0.7. Next slide, please. And even after a thousand hops, the throughput can still be around 0.66. Next one, please.
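[Editor's sketch, not part of the talk: a quick check of the routing-based decay quoted above, assuming a 0.8 per-link delivery rate.]

    # With 20% loss per link and hop-by-hop forwarding of whatever
    # survives, end-to-end throughput decays as 0.8 ** hops.
    per_link = 0.8
    for hops in (1, 5, 10, 50):
        print(hops, per_link ** hops)
    # 1 -> 0.8, 5 -> ~0.33, 10 -> ~0.11, 50 -> ~1.4e-05: essentially
    # zero, whereas the talk quotes about 0.7 for BATS after 50 hops.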
D: Okay, our first deployment of the BATS code was actually in the Hong Kong government's smart lamppost project.
D: Next, please. Okay, so this is an illustration of the system. Suppose on the same street we have a number of lampposts; the leftmost one is connected to an optical fiber, and the rest are connected through a wireless multi-hop network enabled by the BATS code. Next, please. Okay, so in 2019 we successfully deployed BATS in 36 smart lampposts.
D: However, due to the social unrest, the general public was concerned about the installation of video cameras on the lampposts, due to possible infringement of privacy. However, we learned that it is quite hopeful that the project can resume by the end of this year, with the video cameras replaced by lidars. Next one, please. Okay, there's a picture of the smart lamppost that we have deployed. Next, please. Okay, currently we are building a smart lamppost testbed at CUHK; that's my university!
D: Well, one thing I would like to bring up, which is something quite interesting, is that BATS is inherently a fog computing application, because the computation must be done at the edge; there's no choice. So we're going to install 24 computing-equipped smart lampposts on our campus, with BATS being provided as a service, and after the network is set up, different services will be provided, including Wi-Fi access, lamppost-assisted autonomous driving, and real-time traffic monitoring with AI applications.
D: Next, please. Okay, this is a picture of the smart lampposts we're building on campus; as you can see, they're not as fancy as the one we just saw a moment ago. Next, please. Okay, we have also started a new pilot trial at one of the country parks in Hong Kong. Next, please.
D: So in this picture, in this map, you see that we're going to build a four-node pilot trial in a country park which is not well covered by the cellular network, and that poses threats to hikers, in addition to being inconvenient. Okay, so what we're going to do is extend the network to the country park via the BATS technology.
D: Next, please. Okay, so this is a picture of one of the nodes, which is driven by solar power, because there's no electric power available over there. Beneath the solar panel is the other equipment, and you see that on the pole there are some antennas. Okay, next, please.
D: So during the past few years we have been exploring different opportunities, and we have identified quite a number of potential applications of BATS. These include satellite communications, rural communications, private networks, rapid-response networks, smart cities (smart lampposts and things like that), V2X, safety and surveillance, Internet of Things, confined spaces like mining tunnels, 5G access networks, power line communications, and also underwater acoustic communications.
D: Next, please. Okay, currently we are building an FPGA implementation of the BATS coding algorithm, and for this project we are working with Intel and Arrow Electronics. Okay, next, please. Okay, so we have submitted an Internet-Draft, as many of us know, and the latest update was on February 2nd this year.
D: So this draft consists of a couple of sections: one section on the basic data delivery procedures using a BATS code, one section on a baseline BATS code specification, and a new section on related research issues, including coding design issues, protocol design issues, and application-related issues. We also have a section discussing the security issues related to BATS. Next, please.
D: So I'd like to take this opportunity to thank the two chairs, Marie-José and Vincent, for the suggestion to add discussions on related research issues, and also Dave Oran for the pointer to the recent work by Byers and Luby on so-called Liquid Data Networking. Indeed, BATS is a suitable candidate for the erasure correction code in Liquid Data Networking in the presence of multiple hops in the network.
D: So this is a list of the IPRs that we have filed so far, and most of them have already been granted, except for one or two which are pending.
D: And next, please. Okay, so this is the PDF file; in the PowerPoint file this is actually a very interesting animation to entertain you: you keep seeing cars running around. And so this is the end of my talk. Thank you.
A: Okay, sorry, I was muted. Thank you very much, Raymond, for your presentation. Is there any question from the audience? I haven't seen any so far; otherwise I have one. Okay, there's somebody.
F: Watson Ladd, Cloudflare. This is really interesting work. If this research group winds down, sort of, six months or three months from now, how are we going to advance the draft?
D: I think it is pretty ready for publication.
A: I haven't re-read the specification recently, so I will do that, but I have the feeling that, except for a few minor modifications, we are very close to the end with this document. So I'm very happy with the way it moves forward, and especially with the addition of this new research section; that's great, that's great.
A: If I can, I would just make a general comment on section four. It's very well written, no problem with that, in a very synthetic manner, so that's great. I may just have one comment: it can be read as some advertisement, and I understand that it's in some way unavoidable.
A: That's not the problem, but if you can make it a bit more generic, it would be great; a bit more neutral, I would say. Okay, I don't think it's a big deal; it's no problem if there are many references to that, but if you can make it more generic anyway, that would be great.
C: We will reduce the number of references, or...
G: I'm not trying to speak for Colin here, but just reflecting on Watson's comment: what we've done in the IRTF in the past is put a research group to sleep while there are still documents going through the process, and not formally close it until the last one has been disposed of. So I think what will probably happen is, you know, if we're done with the active need to meet and exchange stuff...
A: Thank you so much. Yes, and I'm quite confident that we can start a working group (oh sorry, a research group) last call quite soon for this document. It shouldn't be a big deal. So thank you so much for your efforts and the quality of the doc.
E: Perfect. Hello everyone, so I'm presenting the update on the coding for congestion control draft. This is the sixth version of the document. Some of the authors could not be here at the moment, sorry.
E: So, to the slides: there is not that much content on what is actually new, but I wanted to recall what the previous version of the document contained in its first two parts.
B: There is an incredible echo; somebody probably has a loop, so can everybody put themselves on mute, please.
E: Okay, I have to take this as a yes. I just have some issues following the screen. So basically we had some comments on the previous version, and this helped us to improve the document, so I will try to put an emphasis on that during the presentation.
E
So
next,
though,
is
something
that
was
only
that
we
presented
before
not
take
further
further
this
set
of
experiments.
We
have
run
over
different
different
fat
characteristics,
so
I
think
the
main
point
that
we
want
to
highlight
is
on
the
series
of
the
slide.
E
We
have
a
lossy
satellite
static
and
we
use
either
tcp
or
connection
controls,
of
course.
So
can
you
go
next
slide?
Please
bye.
A: We can hardly understand you. So, okay, there's a suggestion in the chat that you...
E: So on slide four we just show that having a FEC improves a lot the downloading time of a 20-megabyte transfer.
E: What we showed at IETF 109, and I want to focus on the graph at the bottom, is that basically, when we have coded TCP flows, they just take the whole capacity.
E
So
that's
a
there
can
be
lots
of
fairness
issues
when
you
use
coding
without
considering
congestion
control,
so
the
objective
of
the
slides
is
shown
outside
seven.
E
If
we
go
to
slide
eight,
this
basically
shows
the
main
changes
changes
as
the
last
ietf.
Basically,
we
have
realized
circle
and
document
how
the
figure
is
not
to
go
now
into
that.
Okay,
but
just
to
show
you
that
there
were
lots
of
changes,
basically
atf
109..
We
had
comments
or
comments
on
adding
an
art,
transport,
multipass
executions,
on
questions
on
readability
and
partial.
E: If you go to the next slide: this is also something that we have added in the document, what fairness is. There is this paper that was presented on what fairness is, and rather than redefining it in the draft, we just say that we measure fairness as the impact of coded flows on non-coded flows when they share a bottleneck. Next slide, please.
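[Editor's illustration, not the draft's definition: one common way to quantify the impact of coded flows on non-coded flows sharing a bottleneck is Jain's fairness index over the per-flow throughputs; the numbers below are hypothetical.]

    # Jain's fairness index: 1.0 when all flows get equal throughput,
    # approaching 1/n when one flow starves the others.
    def jain_index(throughputs):
        n = len(throughputs)
        return sum(throughputs) ** 2 / (n * sum(t * t for t in throughputs))

    # A coded flow taking most of the capacity vs. an even split.
    print(jain_index([8.0, 1.0, 1.0]))  # ~0.51, unfair
    print(jain_index([3.3, 3.3, 3.3]))  # 1.0, fair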
E: These are just some comments on what is and what is not in the scope of this document. I think this was not clear, and someone asked us to clarify it, so we may include the following. The picture that is here will be in the next version of the document; it's not in the current one that has been published for this IETF. Basically, what we want to say is that an application may be composed of several streams, and we do not consider the impact of multiplexing and how the streams interact with each other.
E: Next slide, please. This is just recalling what we have in this document: basically, you can have FEC that is above, within, or below the congestion control.
E
So
we
discovered
these
different
cases
in
this,
inter
and
and
in
version
in
the
previous
version,
with
fairness,
how
you
did
with
recovered
symbols,
how
you
adapt
coding
rates
and
what
you,
what
you
choose
useless
involved
in
the
current
version,
we
have
added
discussions
on
partial
ordering,
partially
ability
multiple
transports,
so
the
first
slide
slide
was
present
on
previous
previously,
and
so
we're
not
going
to
go
into
the
details.
Details.
E: Next slide, please. This is what is new in this version of the document. I don't think I have time to go into all of it; it's very wordy, and it was more for everyone to know through the mailing list, in case you have any comments on the different boxes. We tried to cover all the cases, and we hope that what we have done covers the different aspects.
E
There
are
some
options
on
availability
and
for
which
we
have
an
issue,
so
I
guess
we
can
move
to
the
next
slide
14..
Basically,
that
is
one
open
issue
we
have
at
the
moment
in
the
github.
Repo
ripple
is
the
for
the
moment.
Basically
vessel
had
a
comment
on
partial
reliability
and
saying
that
it
impacts
a
lot,
the
type
of
effects
that
you
are
using,
that's
right.
E
So
at
the
moment
we
have
just
covered
that
by
saying
that
partial
reliability
impacts
the
type
of
effects
and
the
type
of
codecs
that
can
be
used,
but
I'm
not
sure
to
what
extent
we
should
provide
more
details.
For
this
point.
A: So yes, it's pretty easy to address, I would say.
A: Can you hear me? Okay, you can hear me at least, so: the quality is bad, but let's do our best. So, yes, I think this issue is pretty easy to address if you just mention that, for instance, we can have block versus sliding window codes, and it can impact block and window sizes. It can also be a matter of having small versus medium or large block sizes, or also the possibility to solve a subset of the linear system. So if you add this kind of precision, I think it will solve this open issue.
E: So, next slide. It covers another issue that was more related to the appropriate parameterization techniques.
E: You pointed us to RFC 8681, and indeed, I think the same point applies: it depends a lot on the flow that we are considering, whether it is constant bit rate and real-time, or non-real-time.
E: That is what you have, at least, in the appendix of the RFC that you showed. So at the moment we just answered by saying that there's a general trade-off between the amount of redundancy to add, depending on the transport behavior and the requirements, and we did not go more into the details. I don't know if you think we should provide more to consider these issues covered, or go deeper.
A: One comment: for instance, you can achieve the same reliability, which is more or less what we try to achieve, while reducing the amount of redundant packets, just by changing the nature of the FEC scheme, or by changing the block size if you are using a block FEC scheme, things like that. So it's a bit more complex.
E: That is where the current research is going, and we agree, of course, that we need to do some consequent tests on that for the next version. That's really a missing point we have at the moment, and we need to work more on appropriate sections for the parameter values: we just list some of them from reference documents, but this needs to be more structured and made more consistent. So the next slide is on what we're going to do next.
A: I think that, in spite of the very bad audio quality, we managed to understand what you said, I think, but it was quite difficult. Is there any question or comment from the audience?
H: I had some issues. So, I'm one of the authors of the multipath DCCP draft and also the multipath reordering draft, and I have two questions. My first question is: do you run this over a particular transport protocol?
E: What we have been using for the tests that we showed earlier is more a gathering of different protocols, so it's not really specific to one protocol. François Michel had some implementations of FEC in QUIC, so it depends on the protocol you are using, but we have some cases running.
H: Okay, because I could imagine that this would be a great opportunity to bring this to multipath DCCP, and if you are interested in that, you are very much welcome here.
A: It seems that's not the case, so thank you, Nicolas; we are waiting for the next version. Yes, there is some work to be done for the comment that Marie-José made about the document, but otherwise, yes, the document seems to be progressing well. So, once again, I'm confident that we will manage to get something ready in the not too distant future.
A: The last presentation, sorry, yep, from Morten.
I: Okay, so hello everybody. I have a short presentation on, basically, a small comparison of latency and reliability for block and sliding window codes. So if you go to the next slide.
I: Certain applications have, you know, latency requirements on the order of hundreds of milliseconds, and so if the link latency grows too large, then we don't have time for enough retransmissions, and we need to come up with a different solution for obtaining the reliability while still meeting the latency requirements. So one of the things that is often suggested is using coding for that, basically as shown here in the diagram.
I: ...loss, and we can combine that with retransmissions, of course, but we can also run it in a unidirectional way, where we just have, you know, no feedback at all from the receivers. But there's still some latency involved here: although this figure basically makes it seem like there's almost no latency in the recovery, that's actually not the case. And so one of the things that I want to share a little bit about today is the latency involved when using an encoding approach.
I: So if you take the next slide: if we start by looking at something which is called block FEC, or block ECC, basically what we're doing in those cases is that we are taking our data packets, we are putting them into blocks, and then from those blocks we are generating repair. So in this figure here I have two cases. In case number one...
I: ...we have four blocks of four symbols, and in case number two we are gathering a larger block which contains 16 symbols. And so, if we look at what the repair pattern looks like for those two different cases, on the timeline that we have there at the bottom, you can see that in case number one we are taking our four data packets and then generating two repair packets.
I: The repair rate in both of these cases is the same: 33% of the traffic sent is actually repair data. But when we use the small blocks, we can minimize the distance to repair and therefore also minimize the latency of decoding or repairing packet loss. And the graph at the bottom basically just shows, with a five-megabit stream and a 1280-byte packet size, that the block size really matters.
I: When you look at something like latency, there are certain codes that require very large blocks, and what you can see there in that figure is that if you have a block size of more than 500, you have actually accumulated one second of latency inside that block. Of course, this also depends on the source, how it's transmitting data, how bursty it is, and all these kinds of things, but the block size can really have a big impact on the latency that we're going to observe.
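[Editor's arithmetic confirming the figure described above, assuming full 1280-byte packets on a constant 5 Mbit/s stream.]

    rate_bps = 5_000_000
    packet_bits = 1280 * 8
    packets_per_s = rate_bps / packet_bits          # ~488 packets/s
    for block_size in (4, 16, 500):
        fill_s = block_size / packets_per_s
        print(block_size, round(fill_s, 3), "s to fill the block")
    # 4 -> 0.008 s, 16 -> 0.033 s, 500 -> 1.024 s: a 500-symbol block
    # really does accumulate about one second before repair can be sent.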
I: So, on the next slide, we compared two different configurations, the case one and the case two that we had in the first slide, the one with the video camera there, and you can see that in the figure on the left we are comparing the residual packet loss.
I: That is the loss that is still unrecoverable, given a certain loss on the network on the x-axis; we have the residual packet loss on the y-axis for the two different codes. And what you can see is that the larger block code, the one that accumulates more symbols before it generates repair, is capable of handling a higher amount of packet loss before it actually has to give up and expose packet loss to the application.
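[Editor's illustration of the shape of that left-hand figure, assuming ideal MDS block codes such as Reed-Solomon, i.i.d. packet loss, and the 4+2 and 16+8 configurations above.]

    from math import comb

    def residual_loss(k, r, p):
        """Expected fraction of data packets still lost after decoding.

        A block of k data + r repair decodes fully iff at most r of the
        n = k + r packets are lost; otherwise the lost data stays lost.
        """
        n = k + r
        lost = 0.0
        for losses in range(r + 1, n + 1):
            prob = comb(n, losses) * p**losses * (1 - p) ** (n - losses)
            lost += prob * losses * k / n   # losses fall uniformly on data
        return lost / k

    for p in (0.05, 0.10, 0.20):
        print(p, round(residual_loss(4, 2, p), 4), round(residual_loss(16, 8, p), 4))
    # Same 33% overhead, but the 16+8 block rides out more channel loss
    # before exposing residual loss to the application.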
I: ...loss in the channel, basically. But the good thing about the small block code is that if you then look at the latency characteristics, you basically see that for the larger block, the larger Reed-Solomon code, you have the blue histogram there, and you can see that the per-packet delay that we're going to experience with that block code is actually much more smeared out, and in the worst case we have a relatively high per-packet delay.
I: And if you take the next slide: one of the ways that we can address that is by using the sliding window codes that Vincent has also done a lot of work on. Just to give you a little bit of insight into how those perform, and a quick overview of how they work: a block code, basically, as I said, does what is shown on the right...
I: ...oh sorry, on the left, where we are collecting six symbols and then generating two repair packets from those six symbols. And then, in the next block, we have the next six symbols, and then we generate two repair symbols for those packets, and there's no overlap between the repair and the symbols: one block is only protected by the repair symbols of that one block.
I: If we look at something like a sliding window code, it works quite differently, but one of the key things that we can observe is that every symbol is actually protected by the same amount of repair. So if you look at the columns, you will see that every packet is included in two repair packets, which is exactly the same as with the block code.
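[Editor's sketch of the structural difference described above, not any particular codec: both schedules below send one repair packet for every two source symbols, but the block code concentrates repair at block boundaries while the sliding window spreads it through the stream; all names are made up.]

    def block_schedule(n_source, k=4, r=2):
        """Block code: repair only covers its own block, sent at block end."""
        out = []
        for i in range(1, n_source + 1):
            out.append(f"s{i}")
            if i % k == 0:
                out += [f"R({i - k + 1}-{i})"] * r
        return out

    def sliding_schedule(n_source, spacing=2, window=4):
        """Sliding window: each repair covers the trailing window of symbols."""
        out = []
        for i in range(1, n_source + 1):
            out.append(f"s{i}")
            if i % spacing == 0:
                out.append(f"R({max(1, i - window + 1)}-{i})")
        return out

    print(" ".join(block_schedule(8)))    # repair bunched after s4 and s8
    print(" ".join(sliding_schedule(8)))  # repair every two symbols, overlapping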
I: But what we're able to do with the sliding window code is to distribute the repair packets much more granularly inside the stream of packets that are flowing through the network. And so, if we look at the results for a sliding window implementation versus the Reed-Solomon implementation on the next slide, you can see that actually, using the sliding window, where, to make it fair, we made the sliding window code only code over 24 symbols...
I: ...with the sliding window code it's actually the opposite: even with the more granular, or the more evenly spread out, repair, we can actually handle more packet loss before we start to expose packet loss to the application, and at the same time we are able to get the same latency profile as we had with the small Reed-Solomon.
I: So you can see again the results from the larger Reed-Solomon code, where we have a per-packet latency profile which is basically spread out, and the worst case is pretty bad. So it means that sometimes, you know, packets experience quite a bit of latency, which causes jitter in the application and all sorts of unwanted effects. But with the sliding window code, we can actually narrow in and cut off that tail, because we're generating the repair needed to deal with losses...
I: ...much more often. So we can basically get sort of the best of the two worlds with sliding window codes. And I think the final slide is just a quick conclusion on our side.
I: ...it will just perform as a block code. And if you want to read some more about sliding window coding, we have a lot of documentation on how they work and different ways to configure them at that link there, in our own implementation called Rely, and if you want to ask me questions, either go to the list or you can also reach out directly.
A: Thank you so much, Morten. I have some comments and questions, but is there anybody else?
B: Yeah, so I will sound like the usual broken record: is there more research being done on this? I went through an NSF group on the future of broadband, and, you know, delay and bandwidth are not the only metrics anymore.
B: So I would like to know about the research. Also, this seems to be, I would say, it's nice that you checked it, but I think we kind of knew that for quite a long time, right? Anybody like me who has done sliding-window anything in their career knows that sliding will be better than block, because you can start decoding as the window arrives and not wait for the whole block. So I'm actually wondering what else is in that research.
I: Yeah, so that's a super good question, and there's actually a lot of interesting research still to be done in this area. So one of the things that we are looking at right now is that it's actually not true that sliding window is always better than block: when the repair rate that you have for a sliding window code approaches the losses on the network...
I: ...basically, one of the things that we are looking at right now, which is sort of open research, is how you shape the window depending on how close your repair rate is to the packet loss rate that you're observing. And the reason why that is important is that if you look at something like a video streaming application, you can't always increase the repair rate; then you run into congestion, right?
I
You
there's
some
times
where
you
have
like
a
maximum
amount
of
repair
that
you
can
put
on
top
and
there.
Actually,
when
you
reach
that
ceiling,
you
need
to
start
to
manage
your
coding
window
x.
You
can't
just
include
as
much
as
possible.
You
need
to
actually
swing
it
down
and
so
that
you
get
more.
Basically,
you
become
like
a
block
code
and
then
you
can
open
it
up
again,
as
your
repair
rate
actually
drops
away
from
the
from
the
packet
loss
rate
that
you're
observing.
So
that's
one
open
thing.
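[Editor's toy policy, not Rely's algorithm: one way to act on the observation above, shrinking the coding window toward block-like behavior as the repair budget approaches the observed loss rate.]

    def coding_window(base_window, repair_rate, loss_rate, min_window=4):
        # No headroom left: behave like a small block code.
        if repair_rate <= loss_rate:
            return min_window
        # Otherwise widen the window in proportion to the headroom.
        headroom = (repair_rate - loss_rate) / repair_rate
        return max(min_window, int(base_window * headroom))

    print(coding_window(64, repair_rate=0.33, loss_rate=0.05))  # 54: wide window
    print(coding_window(64, repair_rate=0.33, loss_rate=0.30))  # 5: nearly a block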
I: Another thing which is currently being looked at is how you manage the coding window when you have heterogeneous links with different latencies. So basically, you need to be able to control the coding window in time. You can imagine you have one link which has 100 milliseconds latency and another link which has 150 milliseconds latency; the packets that you're sending over the slower link...
I
They
can't
include
as
many
of
the
source
symbols
as
the
other
link,
because
when
they
arrive,
they
are
50
milliseconds
behind
the
other
link
and-
and
that
becomes
important
when
you
have
an
overall
latency
criteria
that
you're
trying
to
meet,
and
so
I
think,
there's
a
lot
of
work
that
hasn't
really
been
done,
especially
when
we
look
at
the
most
bringing
it
sort
of
closer
to
the
applications.
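[Editor's arithmetic, assuming the 5 Mbit/s, 1280-byte example from earlier (~488 packets/s) and a hypothetical 300 ms end-to-end budget: the slower link leaves less slack, so its repair packets can only usefully cover a smaller trailing window.]

    budget_s = 0.300
    packets_per_s = 5_000_000 / (1280 * 8)   # ~488 packets/s
    for link_latency_s in (0.100, 0.150):
        slack_s = budget_s - link_latency_s
        print(link_latency_s, int(slack_s * packets_per_s), "symbols coverable")
    # 100 ms link -> ~97 symbols, 150 ms link -> ~73 symbols: the later
    # a repair arrives, the fewer recent symbols it can still fix in time.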
I: Last time I talked a little bit about content-aware coding, where basically you have to shape the window according to the video frames.
I: So if you just have a coding window where you're blindly saying "every four packets, I create two repair", that's not a good approach if you have a video frame that's, like, 15 symbols, because you might have a case where some parts of your video frame are not getting repaired until the next video frame arrives, basically, and so you get sort of a latency penalty...
I
If
you
don't
make
your
window
adjust
to
the
content,
so
you
need
basically
your
repair
to
follow
the
the
content
that
you
are
so
this
is
again
very
application,
specific
of
course,
but
I
think
you
know
so
that's
some
of
the
directions
that
that
we
are
looking
at
improving
this
or
working
on
the
sliding
window,
codes
and
and
sort
of
optimizing
them.
But
I
think
that
that's
also-
and
I
can
mention
also
a
third
sorry.
I: ...a fourth thing, which is basically that there's been a lot of work where you either split the solution into ARQ or FEC.
I: So either you build a transport just on ARQ, or you build a transport just on FEC; there are very few actual transport protocols that try to combine the two, where you actually use ARQ when link latency is low, because then you have time to do ARQ, but when link latency grows, you basically need to switch out of that mode and start to use more of the FEC to get some of the repair up front. Yeah, so those are some of the areas. Okay.
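[Editor's toy decision rule, not an existing transport: the hybrid described above boils down to checking whether the latency budget leaves room for a retransmission round before falling back to proactive FEC.]

    def repair_strategy(rtt_s, latency_budget_s):
        # Delivery takes about one-way latency; a retransmission adds
        # roughly one more RTT (loss report back, resend forward).
        one_way_s = rtt_s / 2
        if one_way_s + rtt_s <= latency_budget_s:
            return "ARQ"   # cheap: only retransmit what was actually lost
        return "FEC"       # latency too tight: send proactive repair up front

    print(repair_strategy(rtt_s=0.020, latency_budget_s=0.150))  # ARQ
    print(repair_strategy(rtt_s=0.120, latency_budget_s=0.150))  # FEC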
B: I think maybe you should mention that, you know, in the future. And also, you know, we're also looking at research communities here. So you said a lot of "we", but is "we" just you, or is "we" many more universities? And I will give the floor to Watson after.
F: This is really interesting. My question...
I: So, well, there's a bunch of trade-offs involved here, right? If you use FEC underneath the transport protocol, you can effectively mask all packet loss, but it comes at a cost of increased bandwidth, right? And when you say goodput, do you mean: what is the balance between the amount of repair that I'm putting on my network compared to the actual data that I'm putting in?
I: Yeah, so I think, looking at the graphs, you can see that the sliding window code is the one that gives you the best sort of reliability versus overhead trade-off. In all of these cases they are using 33% repair; that means that 33% of the packets sent are repair, and there you are able to handle...
I: I don't know what the exact number here is, but, like, the sliding window code can handle 20% packet loss before it starts to sort of expose some packet loss to the application, whereas the Reed-Solomon code is down to 10% or something, and then it starts to expose some packet loss. So clearly you can use that overhead more effectively with certain types of codes than with others.
I: But if you rely only on FEC, you have to have a certain amount of repair in order to deal with a given amount of packet loss on the network, and how that packet loss is distributed also has an impact.
A: Okay, I think we need to stop here. Thank you very much, Morten. What you said while answering the questions was extremely interesting, and I would be interested in having more detail on this.
A: So, yes, that's very important, and I hope that next time you will have time to share more on this. We tried on our side a few years ago, but we reached a situation that was not perfect; I was not totally satisfied with the way we initialize such parameters. So if you have more insight on this, it would be great. We now need to close the session, the meeting. So thank you very much to all of you. Marie-José, do you want to add...