From YouTube: IETF100-NWCRG-20171116-1550
Description: NWCRG meeting session at IETF 100, 2017/11/16 15:50
https://datatracker.ietf.org/meeting/100/proceedings/
A
Okay, so welcome everybody. This is the IETF, and it's my first one as a co-chair, so my co-chair and I are very happy to have you here. We have a very, very full agenda. One very important thing first: you probably do not know, but there was a meeting yesterday of all the chairs of the research groups, and the new rules for IPR were discussed. I think that's very important.
A
Oh, I missed one thing: we have updated the charter of the group, and I'm going to mention what we did. The wiki is available for uploading documents and things that are valuable for the group. I intend also to use it to upload documentation and background papers that have been published, both in industry and in academia, in this field, so that people have a good picture of what is happening.
A
These are the slides; these are the ones that have already been uploaded. If there are presenters who did not send us their slides, please do it now. I don't intend to go on with this for long: the agenda is very, very full. We were very happy that we could get these presentations, so we're going to go through coding, use cases where coding can be beneficial, and updates on research projects and things that are ongoing.
A
Very quickly, then: what's new? It's new because we changed the name, and the reason we changed the name has nothing to do with fashion. We were talking about network coding, but a lot of people didn't really know what network coding was used for. So we changed the name so that people would know that when we do network coding, it's actually to have more efficient network communication, and that it is part of a toolkit for improving the performance of communication. This is the goal of this group: to look at coding as part of this toolkit for improvement. We have a new charter. Why do we have a new charter?
A
Because the old charter was about five years old, and since then there has been a lot of development. It was also to reflect the fact that we wanted to move a little bit away from just codes, and things that bear directly on codes, more into how codes are being used or could be used — since we are a research group — and what research areas should be investigated in order for codes to be better used or better applied, again to improve the efficiency of networks. We also have new milestones.
A
What's on the next page: what we did in the meantime is we had an interim meeting in Boston on September 19th, which happened to be my daughter's birthday, so that was fantastic. We had way more participants than we thought we would have; we were extremely pleased by the numbers. There were 10 to 12 people locally, depending on the time, and there were more than 10 online — apologies to the online participants.
A
We had started working on the charter at that time, and we wanted to get the feeling of the group to define the major goals that we want to achieve: again this focus on research into how to use codes as part of the toolkit for improving efficiency, and also to look at common issues. What are the big elephants in the room?
A
Why aren't coding solutions more used or more applied, in industry and also in research, and what are the challenges that we need to address if we want these things to go better? So essentially it was a great outcome: we had a proposal for a new charter and new milestones that we sent to the list for approval, we had some discussion and add-ons, and we could actually identify a number of activities and the first volunteers for doing these activities, some of whom I think are here.
A
So it was a very successful meeting, and it made us think that we're going to redo it. By the way, we felt that Boston was a good place to do it: it seems that quite a good proportion of researchers in the field are in the Boston area. The milestones that we identified are now uploaded. We want to work on a document describing the existing solutions.
A
This is something that, although this group has been chartered for quite a long time, has never been written: a document saying what these things are, what they are used for, and why we would want to look at them. We are also going to look again at this idea of network coding and QUIC; QUIC had FEC from the beginning.
A
Can we make it better? There is a network coding and satellite use case: to look at an overview and the research challenges when using network coding over satellite networks. We have colleagues here — Nicolas Kuhn, Emmanuel and others — who have all looked into these things. We also want to look at some kind of common coding API. The reason for this is that, you know, there are the current codes, the codes that people are using, and the codes that people may use.
A
Why isn't this being used? The big elephant in the room, which has been there since the beginning of network coding in the IRTF, is coding and congestion control; there have been more emails and threads on this than I can even count. Is there a way to finally get this behind us? Michael Welzl is not here right now, but this is something that could also be done in collaboration with the ICCRG.
A
The
ICN
news
case
again
we're
going
to
have
a
draft
on
this.
This
is
something
that
has
been
going
on
for
quite
a
while,
and
we
have
also
the
people
from
the
Xen
orgy
and
we
had
talked
about
doing.
Maybe
common
meetings
at
one
point,
and
there
is
the
last
one
for
the
moment
doesn't
have
a
draft.
But
there
was
this
idea
of
having
NC
and
robust
tunneling
and
I
can
tell
you
that,
after
attending
the
tax
meeting
yesterday,
they
could
be
some
reuse
of
some
of
the
work.
F
Allison Mankin, the IRTF chair. I took the words "use case" out of all your milestones, because I think it's important not to do a "here's an exemplary thing", but rather a "here's a hard problem that we solve for these people". It might be that in the end you'll consolidate these documents, because you've solved some hard problems for multiple types of use. It's quite important not to publish little ephemeral use cases just because they are exemplary.
F
They don't necessarily last for very long. Also, I now have some fuel for this: we're going to work on this across the whole IRSG. Documents are quite expensive in the RFC Editor queue, so they really want us to use our RFCs well. So that's why I'd taken "use case" out of all the milestones.
D
So this is an extract from the current taxonomy document, which explains what we mean by source coding, network coding, channel coding — what we're talking about in this research group. It's quite usual: network coding is the thing in the middle. On the top you potentially have source coding; source coding typically means multimedia encoding and decoding, all this source-specific stuff, which of course depends on the application. And at the bottom you of course have the physical-layer FEC codes.
D
That's
one
way
to
have
network
currying
in
this
protocol
stack.
The
other
way
is
putting
this
networking
stuff
bill
within
the
communication
layers,
so
below
UDP
below
IP
below
TCP
depends
on
what
who
we
want
to
address
to
us?
They've
always
clue
that,
but
just
to
make
it
clear,
then
there
are
a
few
things
we
want
to
do
and
a
few
things
we
do
not
want
to
do
more,
particularly,
we
never
will
never
consider
a
physical
layer
of
physical
layer
codes
bit
error
correction,
bit
error
detection.
D
All
of
this
is
physical
layer
specific
and
not
and
will
not
be
addressed
within
this
research
group.
On
the
opposite,
we
want
to
deal
with
packet
losses,
so
packet
means
many
different
things
independently
context.
It
can
be
a
UDP
Datagram.
It
can
be
a
UDP
Datagram
payload.
It
can
be
a
unique
IP,
Datagram
itself,
TCP
segment,
whatever
you
want
an
application
message
and
so
and
so
forth.
So
it
really
depends
on
the
context.
Yeah.
G
Just a quick clarification: is that just another way of saying you only do erasure coding? I'm trying to figure out: if you get a packet and the packet CRC is wrong, is it removed and treated as an erasure, or do you actually try to correct that packet?

D
We don't try to correct it.
D
So the next slide is about coding basics, very simply. You have source packets and repair packets, and the typical way of doing encoding consists in computing a linear combination of, for instance, source packets to produce repair packets. In the first case, for the first repair packet, you just compute the XOR sum of these source packets. That is one way to do it. Another example, just below, consists in multiplying each source packet by a certain coefficient and then XOR-summing all of them. It's pretty simple.
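The first, XOR-only example above can be sketched in a few lines of Python (an illustrative sketch; the packet contents are invented):

```python
# Illustrative sketch of the XOR example above: a repair packet is the
# byte-wise XOR sum of a set of equal-size source packets. If exactly one
# source packet is lost, XOR-ing the repair packet with the surviving
# sources cancels them out and recovers the missing one.
def xor_repair(packets):
    """Return the byte-wise XOR of all packets (equal sizes assumed)."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            out[i] ^= byte
    return bytes(out)

s1, s2, s3 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
repair = xor_repair([s1, s2, s3])

# Suppose s2 is lost in transit: recover it from s1, s3 and the repair packet.
recovered = xor_repair([s1, s3, repair])
assert recovered == s2
```

The same function serves for encoding and for recovery, because XOR is its own inverse.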
D
Of course, in some use cases you will also compute repair packets from other repair packets that you have received; that is another way to do it. In this bottom example, you can see that we multiply the first repair packet by the coefficient 57 and the second repair packet by another coefficient, you compute the XOR sum of the two results, and you produce an additional repair packet that is then sent.
D
It's very basic — it's not rocket science, for sure. You basically have two mathematical operations. XOR: you have two data chunks and you want to XOR them. And you have this multiplication by a coefficient: you multiply a certain data chunk by a coefficient over a certain finite field. There's nothing complex in these operations either. That's almost all you need to know in order to be able to understand at least the main FEC techniques used in this domain.
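Both operations fit in a short sketch. GF(2^8) with the reduction polynomial 0x11B is assumed here purely for illustration; real codecs choose their own field and usually use precomputed tables:

```python
# The two operations just mentioned, sketched over GF(2^8) with the common
# reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf256_mul(a, b):
    """Carry-less ("Russian peasant") multiplication in GF(2^8)."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a          # operation 1: XOR (addition in GF(2^8))
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B             # reduce modulo the field polynomial
    return product

def linear_combination(chunks, coeffs):
    """coeff1*chunk1 XOR coeff2*chunk2 XOR ...: the core encoding step."""
    out = bytearray(len(chunks[0]))
    for chunk, c in zip(chunks, coeffs):
        for i, byte in enumerate(chunk):
            out[i] ^= gf256_mul(byte, c)   # operation 2: multiply, then XOR
    return bytes(out)
```

With all coefficients set to 1 this degenerates to the plain XOR repair packet from the previous slide.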
D
Now, roughly speaking, there are two kinds of FEC codes or network codes: one of them being block codes, the other being window-based codes. Block codes are the traditional codes, in some way; you probably know some of them by name — Raptor, RaptorQ, Reed-Solomon, LDPC-Staircase — all of those are block codes.
D
That's the classic solution, the one that was addressed by the IETF FECFRAME working group, for instance, in the past. And then you have the second approach, which consists in considering a sliding encoding window — a window that slides over the set of source packets, or repair packets, depending on the example; here I'm considering source packets only. You have this encoding window that slides over the continuous stream of source packets, and whenever you need to produce one or more repair packets, it's pretty simple.
D
You consider all the packets in the encoding window, you compute a linear combination, and you produce one repair packet. If you want more, you consider another linear combination and produce an additional repair packet — as many as you want, all of them from the current encoding window. That's pretty simple!
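A minimal sketch of that procedure (hypothetical, not any specific codec; XOR-only, i.e. every packet in the window gets coefficient 1, whereas a real codec would draw coefficients from a finite field):

```python
from collections import deque

class SlidingWindowEncoder:
    """Toy encoder: the encoding window slides over the source packet
    stream, and a repair packet is a combination of everything currently
    in the window (plain XOR here; real codecs use field coefficients)."""
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)  # old packets fall out

    def push_source(self, seq, packet):
        self.window.append((seq, packet))

    def make_repair(self):
        covered = [seq for seq, _ in self.window]
        payload = bytearray(len(self.window[0][1]))
        for _, pkt in self.window:
            for i, byte in enumerate(pkt):
                payload[i] ^= byte
        # a repair packet must identify which source packets it protects
        return covered, bytes(payload)

enc = SlidingWindowEncoder(window_size=3)
for seq in range(5):
    enc.push_source(seq, bytes([seq * 7]))
covered, repair = enc.make_repair()
assert covered == [2, 3, 4]               # the window slid past 0 and 1
assert repair == bytes([14 ^ 21 ^ 28])
```

The `maxlen` deque is the fixed-size window; an elastic window, mentioned next in the talk, would adjust that size based on feedback.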
D
There are additional names for those window-based codes. "Sliding window codes" is more or less the generic term. In the case where the sliding window is not of fixed size but can evolve over time — depending, for instance, on feedback from the destination (this is not always the case, but if you have some feedback you can adjust the window size) — then you talk about an elastic window.
D
Elastic-window codes and on-the-fly codes are more or less the same thing. The main benefits compared to block codes: first of all it's much more flexible, and it's also very beneficial in terms of reduced latency. So when you have very time-constrained flows, real-time flows, it makes sense to use this kind of code. And that's more or less all I wanted to say for this tutorial introduction. We have this taxonomy document that explains and defines additional terms — single-flow encoding, multi-flow encoding; all of those are defined in that document.
D
You can have a look at it if you need; we can also talk after the meeting, no problem. Thanks. So now I have two additional slides to give you a rough panorama of what has been discussed and presented in the past within this research group. The first slide is about the codes themselves. We talked a lot about RLNC, which is the oldest story; there are no specified codecs for it.
D
I forgot to mention: RLNC is random linear network codes, so the coefficients are produced randomly in that case, and they are also transmitted within the packets themselves, in order to be able to do what I mentioned before — re-encoding within the network. That's more or less what is needed to do that; I'm simplifying a little bit, but there is no specification for the moment for those codes, and this is probably something that is missing.
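The RLNC idea just described — each coded packet carries its own coefficient vector, so an intermediate node can re-encode without decoding — can be sketched as follows. This is a toy over GF(2) (0/1 coefficients, combining by XOR); practical RLNC typically uses GF(2^8), and all names here are illustrative:

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def rlnc_encode(sources, rng):
    """Source node: emit one coded packet carrying its coefficient vector."""
    coeffs = [rng.randrange(2) for _ in sources]
    payload = bytes(len(sources[0]))
    for c, s in zip(coeffs, sources):
        if c:
            payload = xor_bytes(payload, s)
    return coeffs, payload

def rlnc_recode(coded, rng):
    """Intermediate node: re-encode received coded packets WITHOUT decoding,
    combining the carried coefficient vectors the same way as the payloads."""
    coeffs = [0] * len(coded[0][0])
    payload = bytes(len(coded[0][1]))
    for vec, data in coded:
        if rng.randrange(2):
            coeffs = [a ^ b for a, b in zip(coeffs, vec)]
            payload = xor_bytes(payload, data)
    return coeffs, payload

rng = random.Random(7)
sources = [b"\x01", b"\x02", b"\x04"]
packets = [rlnc_encode(sources, rng) for _ in range(3)]
vec, data = rlnc_recode(packets, rng)

# The carried vector still describes the payload exactly, which is what
# lets the destination decode without knowing what the relays did.
check = bytes(1)
for c, s in zip(vec, sources):
    if c:
        check = xor_bytes(check, s)
assert check == data
```

The invariant checked at the end is the whole point of carrying the vector in the packet: any relay can mix packets, and the header always stays an accurate description of the payload.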
D
Next are the Fulcrum network codes. Fulcrum network codes were proposed and introduced by Steinwurf a few IETF meetings ago, in Hawaii; you have the slides at this URL. BATS codes were also introduced and proposed some time ago — it was in London, if I remember correctly — by Professor Raymond Yeung; you also have slides you can refer to. There is no specification for those codes either. And then you have this work in progress about RLC codes, random linear codes. It's in fact more or less the same name as RLNC, but it's different.
D
The main difference with respect to RLNC is that they are only for end-to-end communications, so you do not carry the coding coefficients, for instance, within the packet itself; you just carry a seed for the PRNG. It's work in progress, and it is progressing well. So, next slide: this one is about the protocols. The codes themselves are not sufficient; you need to have a full solution, that is to say, you also have to specify the mechanisms that will be used to make all of this work together.
D
So most probably it will be updated soon. Then we have DragonCast; Cédric, at the end of this meeting, will say a few words about it. It's another way to apply network coding, this time with re-encoding within the network. Cédric will talk about it with a specific use case, and there is also an expired Internet-Draft that describes what it is. And finally there is this FECFRAME extension. FECFRAME is already specified and standardised — it's RFC 6363; that was four or five years ago now.
D
The idea is to extend it in order to be able to use sliding window codes: it was restricted to block codes, so we extended it for sliding window codes. The specification is more or less ready for working group last call, so it will soon move forward. I think this is all I wanted to say; just to conclude.
D
Well, there are many research outcomes in the field, very interesting outcomes. Here in this group it's time to work on transitioning them to application and protocol research; this is the main goal of this group, doing that of course in close collaboration with other research groups or IETF groups whenever it makes sense. We'll see examples of such collaborations this afternoon.
D
So, the second presentation: this generic API. This is joint work with many people — with Jonathan Detchart from ISAE-Supaéro, with Cédric Adjih from INRIA (like me, though we are not in the same team), and with Morten Pedersen from Steinwurf, among others. It is work in progress. The idea of this work is to specify a common API for the coding part, and only the coding part — this bottom-right box.
D
So typically this generic API will make it possible to interact with the codec itself: doing session management — initialization and shutdown of a codec instance, for instance. You will also be able to interact with the codec in order to specify and manage the encoding and decoding windows, and in order to specify and manage the coding coefficients. It depends on what code you are considering: sometimes the coefficients are generated by the codec itself, sometimes they are generated by the application on top of the codec; there are several ways to do that.
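As a rough illustration only — the class and method names below are invented, not taken from the draft — such a low-level codec API could look like this:

```python
from abc import ABC, abstractmethod

class GenericFecCodec(ABC):
    """Hypothetical skeleton of a generic sliding-window codec API.
    All names are illustrative; the actual draft may differ."""

    # --- session management ---
    @abstractmethod
    def create_session(self, params): ...
    @abstractmethod
    def close_session(self): ...

    # --- encoding/decoding window management ---
    @abstractmethod
    def add_source_to_window(self, packet): ...
    @abstractmethod
    def remove_oldest_from_window(self): ...

    # --- coding coefficients: drawn by the codec or supplied by the caller ---
    @abstractmethod
    def get_coefficients(self, repair_id): ...

    # --- the coding itself ---
    @abstractmethod
    def build_repair_packet(self): ...
    @abstractmethod
    def submit_received_packet(self, packet): ...  # may trigger decoding
```

Protocol-side functions (congestion control, memory management, packet transmission and header processing) would deliberately sit outside such an interface.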
D
You also typically have functions to do the actual encoding, and functions to do the decoding by submitting new repair packets or new source packets that you may receive from the network. That's basically the goal of this API — and, I insist, it is really a low-level API for a low-level component of a much larger piece of software. Not in this API are all those functions that are typically on the protocol side: congestion control, memory management, transmission and reception of packets, header creation and processing.
D
So we are talking about a generic API — why is it generic? What is generic in this API? Well, we want to have something that can be used with many different codecs: not just one code, not specific to one code, but generic and usable with different codes. Initially I also had in mind an API that would be common to block codes and sliding-window codes.
D
It turns out that this would make the API a bit awkward, so I think it's preferable to remove the block code support and only focus on sliding window codes — but this is still something we can discuss; if you have an opinion on this, we can talk about it. In this context, we want to be able to produce and manage as many repair packets as we want: this is what we mean by "rateless". We have this feature.
D
The goal is to simplify codec development on one side, and also to simplify protocol and application development on the other side, by having this already-standardized and specified API. It makes codec development simpler because designing an API — you can believe me — is something complex; it's not obvious at first. That's one part. And of course, with this commonality, if you want to test another type of codec, it will make things much simpler from a software development point of view.
D
That's very important. We also want to facilitate benchmarking: with this API you can easily swap between several codecs and test which one is most appropriate for your use case. And we want to simplify the development of future open-source codecs — I will talk about that at the end of this presentation — which is also a good reason to do this.
G
Maybe a little too detailed here, but because I did some work in this area, I'm just curious what your thoughts are: what if the core of the codec is in an FPGA?
D
So once we have the API, we can continue with a codec; that will be the next step — typically in C or C++; we'll see for the FPGA, that's a good point. We are looking for candidate codecs that may already exist. We know that Cédric has one proposal, the GardiNet project, which already provides the functionalities that we are looking for. It is a little bit specific to embedded platforms, but it can nonetheless be a good starting point for this development.
H
Ian, at Google. A suggestion for your next steps: it would certainly be very useful to actually try using multiple of these in a given environment. I mean, given that you have three already that appear to have the same API, actually try them in a real-world use case and make sure that the API is not overly cumbersome to use before you, you know, implement it everywhere.
D
Yeah — we typically need experimental developers to do this work. But if I mention those three APIs, it is because behind them there is actually running code, and we all have some experience. We know that some features could maybe be improved within those APIs, so we want to benefit from this common experience in order to make something that is useful and efficient.
I
We have had lots of research activities in network coding for satellite over the past years, and the main idea is to try to identify what is actually deployed at the moment and whether there are research axes worth pursuing further, but also having in mind that we have to turn that into products. So, basically: identify challenges where we can actually have a market, where it makes sense to deploy these techniques, and what the challenges are in deploying them.
I
So, as opposed to what we had in the first draft, we have improved the presentation of the notation and of a generic multi-gateway satellite network, because one problem we have in the satellite industry is that we don't have a reference architecture for multi-gateway systems like the ones that may be provided by the 3GPP or the Broadband Forum. There are some standards, but it's not enough to make things interoperable.
I
The idea is, for each example of what is deployed today, to see if there are any gaps and opportunities for more network coding; that table would be further filled in by the use cases. The idea is also to see if we can use the taxonomy as the basis for the organization of our use cases and the identification of the need for further use cases. So, for example — and maybe it's because I didn't understand it correctly —
I
When we have, for example, video streaming like YouTube over QUIC: I thought the coding was at the source nodes, but now I understand that the source node is actually the video encoding side, not the transport layer — though in QUIC they are somehow mixed at the moment. Basically, we have schemes that are done end to end, per flow, and single path. And what is done at the moment in the satellite industry is to introduce coding at the physical layer.
I
These are the ones some of us believe in most for the moment; we just list them to see if there is interest from other people — we have our own preference, which I will hide for now. We have one scheme that has already been demonstrated: the two-way relay channel, where we have two satellite terminals wanting to speak to each other.
I
We have A wanting to speak with B, and the satellite gateway would compute the XOR of A and B, so we only have to send one packet, one flow, to the two terminals; this saves satellite bandwidth. The other use case: one of the big upcoming issues we have in the satellite industry at the moment is the convergence between the broadcast and the broadband networks, which are totally separate today.
I
We could imagine very interesting things being deployed there, because sometimes there is free capacity that could be used by sending redundancy packets. And another one — I didn't know you would speak about BATS, but this is not the same BATS: this BATS is an FP7 project where, basically, the idea was to try to bridge the different technologies. For example, today, when an ISP deploys a satellite service, it may use a 45 or 56 kilobit return link.
I
You have to send some acknowledgments and other things on that link, so you don't have a bidirectional access on the satellite link itself, but you actually use the satellite downlink together with another network. So there is an opportunity to use network coding to add some more reliability to the service. This is a follow-up on that, because they have done some load-sharing schemes — different from what is done by MPTCP, but close. Next slide, please: what we need to do next.
I
There are still more use cases, and maybe some of them are actually interesting. We really want to focus on making it happen in the networks; we don't want to just open research axes, because there has already been lots of activity on that topic. We want to be sure about what can be deployed. And speaking about deployment, there is a link with what's happening in the network function virtualization work, because the trend at the moment is that we have all these cloud-RAN activities where data centers host virtualized functions. This is a trend we see, and we believe it is a huge opportunity for the deployment of network coding schemes, both in telco domains and in data centers.
I
I think that's very important, because for the moment we have tried to start a description of what is there, because nowadays, for broadband access, we have multiple gateways in the network. Basically, we try to match what the Broadband Forum is doing in terms of architecture, where you will have a BNG and then the more access-related parts in the gateway. So we don't have one access gateway that is both the satellite access and the door to the Internet.
I
We have some network-independent parts and then, depending on what the gateway manufacturers are doing, loss may happen in the gateway as well; we want to describe that too. And that's why we want to detail the mobility use case, because we think those are the cases where we will have physical-layer losses, because the physical-layer codes will not be able to recover from these huge variations — in a fixed broadband network there is more loss on Wi-Fi than on satellite.
I
We are not doing NFV or 5G here; it is just that we think that, depending on the use case, there is the question of where we put the coding: do we want to put it in the part of the gateway that is actually managing the QoS and the IP packets, or in the part formatting the physical-layer packets? Schematically, we have different parts to which we can apply network coding and, depending on the architecture, we have lots of functions in these two components.
J
Simon Romano, University of Napoli. First of all, thank you for this document, because I think this is something that is really needed: as far as I know, there are a number of current projects that are working on this, and trying to find a taxonomy and putting the use cases together is a very welcome effort. I also think that someone sitting here knows about the NFV effort and should have a look at the draft.
I
I'm very interested in that, because there is lots of caching as well, and I think it's part of the architecture and of where you put your data. But for the NFV link, our point is not to explain how you get your resources or orchestrate the functions — I think that belongs more with the relevant working groups. Here we just want to focus on how we can actually deploy this in satellite networks today, exactly.
L
Sorry if I'm wrong, but at the last IETF I made the same presentation in the ICNRG meeting — thanks to the chairs. I'd like to introduce network coding for ICN, that is, for CCN and NDN, and at the end the use cases for network coding in ICN. We initiated and agreed to make a draft, so let me briefly introduce the context and the content of this draft. Okay — so, as you know, network coding has been an attractive research topic, and there are several interesting papers already.
L
We think we need to clarify and specify the requirements and the potential research items for this topic, and people agreed on that, so we initiated this draft. The document describes network-coding-related topics for ICN — simply put, the benefits.
L
It covers coding in the CCN/NDN architecture, and we specify the scope: this draft is not meant to provide a specific solution, but maybe we want to provide a rough direction. So, after the definitions — CCN, NDN and network coding terminology — we briefly introduce the background.
L
We describe the benefits brought by network coding when combined with CCN and NDN, and then, in the following section, we consider how network coding can be applied in CCN/NDN and its requirements. After clarifying the requirements, we want to present the research challenges and potential research items to make the communication better by using network coding technology, in terms of architecture, programs and protocols.
L
So, you know, network coding brings some benefits, such as throughput and capacity improvement and robustness enhancement. These benefits are not radically different from the CCN and NDN benefits, since network coding focuses on what data is to be encoded rather than on data properties such as where it is generated — so it is very much in line with ICN.
L
We have two types of naming: one where the coded data has a unique name, and one where the coded data does not have a unique name. In the first case, the coded data can get a unique name by adding some coding information to the name, such as an encoding vector and a generation ID.
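For illustration only — the name layout below is invented, not taken from the draft — the first option, making the coded data's name unique by embedding the generation ID and encoding vector, could look like this:

```python
# Hypothetical sketch of option 1: a coded data object gets a unique ICN
# name by appending coding information (generation ID and encoding vector)
# as extra name components. The exact layout is an assumption.
def coded_name(prefix, generation, coeff_vector):
    vec = "".join(f"{c:02x}" for c in coeff_vector)
    return f"{prefix}/gen={generation}/vec={vec}"

def parse_coded_name(name):
    prefix, gen_part, vec_part = name.rsplit("/", 2)
    generation = int(gen_part.split("=")[1])
    vec = bytes.fromhex(vec_part.split("=")[1])
    return prefix, generation, list(vec)

name = coded_name("/video/seg7", 3, [0x01, 0x5a, 0xff])
assert name == "/video/seg7/gen=3/vec=015aff"
assert parse_coded_name(name) == ("/video/seg7", 3, [0x01, 0x5a, 0xff])
```

As the talk notes, a consumer then has to know this exact naming structure (for instance via a name resolution system) to request a specific coded packet.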
L
In that case the consumer needs to know the exact naming structure to retrieve it, for instance by using a specific name resolution system; and in this case the content requester, instead of the content producer, determines the encoding vector — it indicates how the coded packet is to be created. The other case is where the coded data has no unique name.
L
In this case, the name of the coded data does not carry the coding information. The data may specify the coding information in a metadata field in the payload, or the coding information is carried in the Interest, without a unique name for the coded data. Nodes then need to perform coding on the fly for generating and providing innovative coded packets.
G
Isn't that kind of a separate design decision too? Because you could statically decide what the coding vector to be used is, by the producer, and then you only have one possible coding vector for all the possible consumers. The trade-off, however, is that if it's the content requester that determines the coding vector, the coded packets can't be produced ahead of time: you have to wait until the Interest message arrives from the consumer for the producer to actually generate the correct coded packet.
L
So, concerning transport requirements: we need to discuss the scope of network coding, that is, whether nodes can modify data packets that are received in transit. Because CCN and NDN have a mechanism to verify data integrity, in-network re-encoding would require some integrity mechanism.
L
Another option is to execute network coding only when the receiver's Interest for the coded data can be satisfied — an end-to-end manner. In this case, it would require a mechanism to ensure where exactly network coding is executed. We also clarify the basic operations at the consumer, router and publisher, considering how nodes can provide innovative data packets, especially in the case where the coded data has a unique name.
L
that does have a unique name. In this case, the consumer needs to issue interests with some coding information to get exactly innovative coded packets, and the router would need to maintain the state of pending interests per generation; interest aggregation should be avoided when fetching multiple coded data packets.
L
Okay, so, in addition to the naming and transport requirements, we describe requirements regarding in-network caching, security, policy, routing and forwarding, and how to signal seamless mobility. As I mentioned, network coding definitely impacts CCN and NDN security.
L
So, after clarifying the requirements, we qualify the challenges, and in my opinion I am very interested in designing how we can adopt a sliding, elastic encoding window into CCN and NDN, because the current research papers adopt only a block coding manner. So it's a very interesting topic.
L
F
So, it seemed like it might be that you'd develop a draft here — the draft in ICNRG — that covers the two kind of interlocking parts of this. I was thinking about the WebRTC/RTCWeb case, but it doesn't have to be quite as complex as that — I mean, it could certainly be complex — but I think that getting enough people to look at both things and discuss both things may require splitting the content a little bit. I just throw that out.
A
F
G
K
H
So my name is Ian, I'm one of the QUIC editors, and I'm going to be talking about forward error correction, or network coding, in QUIC, and what's coming. Last time I talked a little bit about some previous experiments and a little bit of data we had; this time it's going to be more forward-looking: what might be experiments that are worth trying now, and what approaches, architecturally, might make sense in QUIC. So, next slide.
H
Here's an intro to the QUIC header, in case some of you are not that familiar. The QUIC header is pretty small, but it does include a single byte to indicate what type of packet it is, an eight-byte connection ID, and a one-, two-, or four-byte packet number. The packet numbers are actually truncated, so the packet numbers are technically 64-bit packet numbers in terms of space — or 62-bit now.
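The truncation mentioned here can be made concrete: the receiver rebuilds the full packet number by choosing the candidate closest to the next expected number whose low-order bits match the wire encoding. The sketch below mirrors the reconstruction later specified in RFC 9000; the function and parameter names are mine, and edge clamping at the ends of the 62-bit space is omitted for brevity.

```python
def decode_packet_number(truncated, truncated_bits, expected):
    """Recover the full packet number closest to `expected` whose low
    `truncated_bits` bits equal `truncated` (the value on the wire)."""
    window = 1 << truncated_bits
    half = window // 2
    # Start from the candidate in the same "window" as the expected number.
    candidate = (expected & ~(window - 1)) | truncated
    if candidate <= expected - half:
        candidate += window   # wrapped forward past a window boundary
    elif candidate > expected + half:
        candidate -= window   # overshot; step back one window
    return candidate
```

For example, with 16 truncated bits on the wire and an expected next packet number of 0xa82f30eb, the encoding 0x9b32 decodes back to 0xa82f9b32.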
H
And the rest of the payload is encrypted — that's all you really need to know. So from the network's perspective, it's completely opaque. Fine. So why forward error correction in QUIC? Number one:
H
real-time communications; tunnels; maybe multicast someday. As I mentioned at a previous talk, there's still interest in a more efficient tail loss probe, because that's kind of a proactive loss recovery scheme where you don't necessarily have much information and just have to send something, and you're pretty much always wrong when you send something random. That sounds damning, so next slide.
H
Some QUIC features may make this mapping a little bit easier and a little bit more tractable from an implementation perspective, at least I hope. One is QUIC's monotonically increasing packet number, which increases by one every time. I'm hoping that somehow we can utilize this in how we map the coding and describe which packets are protected by the coding.
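The hope expressed above — using the monotonic packet number to say which packets an FEC group protects — can be sketched very simply: describe a group as a contiguous packet-number range and ship one XOR repair symbol for it. This is my own illustrative scheme (single-loss XOR repair), not anything the slides specify.

```python
def recover_missing(group_pns, received, repair):
    """Given the packet numbers an FEC group protects, the packets we did
    receive (pn -> payload), and the XOR repair payload, rebuild the single
    missing packet. Returns (None, None) if zero or more than one packet
    is missing, since one XOR symbol can only repair a single loss."""
    missing = [pn for pn in group_pns if pn not in received]
    if len(missing) != 1:
        return None, None
    out = bytearray(repair)
    for pn in group_pns:
        if pn in received:
            # XOR out every received payload; what remains is the lost one.
            for i, b in enumerate(received[pn]):
                out[i] ^= b
    return missing[0], bytes(out)
```

Because packet numbers increase by exactly one, "packets 10 through 12" fully identifies the protected set without per-packet metadata.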
H
Multiple flows can share the same 5-tuple if they need to, or you can potentially route them over different 5-tuples but back to the same host using the connection ID. So there are a variety of ways: basically, take two flows, and potentially one of them could be a forward error correction flow and the other could be the actual base flow, as one example, and put them together. And QUIC provides multiple streams, which do not head-of-line block each other.
H
So a single stream is kind of an in-order sequence of bytes, but we can have up to 2^62 individual streams, so there's not a practical limit on the number of streams. So option number one is to put FEC kind of outside the crypto. I guess the way I'm thinking about this option is essentially as a separate flow,
H
probably, and it goes alongside, and possibly the packet number of the forward error correction scheme lines up, sort of, with the original flow, and then there's some metadata inside to try to figure out how to decode. But this would be somewhat visible from the network's perspective — presumably, for better and for worse, the network
H
could potentially decode without having any knowledge of the QUIC crypto — there would be no cryptographic knowledge needed; as I said, you can use packet numbers. Another pro, potentially, is that this forward error correction goes on the side, so the standard QUIC packetization process would not be modified. In some sense this is a bolt-on approach, as opposed to actually getting inside the core QUIC packetization code.
H
If you aren't doing this end-to-end, obviously you would need some middleboxes, for lack of a better word, to terminate it in the middle. In some environments that may be very compelling, particularly since it's not practical to terminate QUIC at a performance-enhancing proxy. So there might be cases where, instead of running a performance-enhancing proxy in a network, your performance-enhancing proxy for QUIC is one of these FEC tunnels, essentially, or FEC side flows.
H
I think there are use cases where it's practical. It is fairly difficult to integrate into QUIC's congestion control; in some cases it may actually be impossible, which is kind of unfortunate, because intellectually we'd really like to consider all the bytes we're sending as part of the congestion control — from a bandwidth-estimation perspective, loss detection, and all these things — and this is completely separate. Yeah, next slide.
H
H
One side says "I have this new QUIC option that you might know about", and the other side says "I know about that too", and you can negotiate, say, a new frame. So you could experiment with this without actually minting a whole new version of QUIC. There's some CPU cost on the encoding side for the coding as well as for doing encryption, because you'd be doing the coding inside the encryption, and it consumes an extra byte of payload, because you actually have to burn a QUIC frame type inside the payload. Yeah.
H
H
N
There are space agency projects doing network coding, and I fully agree with what he says. In the other draft we are dealing with network coding virtualization: we take network coding as a network function. Then you have to think in advance about all the objectives you want: per-flow, multi-flow, per-path, multi-path. So this is to define a network function for the coding, not just for one specific protocol. That's what we are doing. Speaking of the previous slide, I just wanted to point out that there is indeed an interaction between the FEC and the congestion control, and this is an open question that we mentioned before. But I think the link between QUIC and FEC, and the impact on the congestion control, is totally different from the considerations of multipath, for which — in multipath QUIC, let's say — you will have coupled congestion control and those issues, which are not the same scope. So it has an impact on the congestion control, but not at the same level.
H
G
I think we're agreeing, but let me check: if you do multipath, it makes large changes to the way QUIC does congestion control, independent of whether you're going to do coding as well — is that what you're saying? — Yeah, I agree entirely with that. — Yes, if you're going to do multipath, you need multipath congestion control. Yes.
G
G
Also true. I think the point I was making is: if you look at previous protocol integrations of coding, they provide modest improvements in the single-path environment, but dramatically better improvements in a multipath environment. So, in terms of barriers to adoption and how people will view the value of this kind of work, I think people will view the value as much higher — enough to bother doing all this work — if it's done multipath. So I think we basically agree. I actually
H
have a follow-up on that: do you have any references to literature on the kinds of considerations they went through, and other things that might be relevant? Because obviously, since we haven't done QUIC multipath yet, and we're just starting to talk about this, it might be a good time to have those concepts in the back of my mind, if folks would be willing to share. You can email me later if you want, as well.
N
There are several works on the TCP side, so you can see the differences between applying network coding to per-flow TCP and to multipath TCP, and the constraints and the trade-offs are different. So if you know that, you should consider it in advance. So yeah, I still agree with him; there is this prior work on multipath TCP that can give you what you're talking about, yeah.
H
It'd be interesting to read it, thanks. Next slide: another option. I'll go through the rest, and then whatever remaining time we have is for questions. A third option that's been suggested is to actually create one or more streams that are forward error correction themselves and have those protect other streams.
H
So this may work well for existing applications — I know some existing applications actually have mappings that are sort of of this form that already exist. You can implement this without any transport changes whatsoever, which is extremely nice; people have even talked about implementing this in JavaScript, which seems a little bit crazy, but it's possibly plausible. The cons are that it's fairly application-specific, and it may end up increasing the overhead versus doing it at the packet layer, just because you're doing it at the stream layer, and yeah,
H
there's no benefit from things like packet number sequencing, because you're now dealing with many streams, each of which has its own byte space. So, next slide. My opinion on this so far is that option two seems like the most promising and the most straightforward thing to experiment with. It's not clear it's ideal for all circumstances, but it seems fairly flexible, and it's relatively easy to negotiate a new frame type.
H
It should work well with a variety of codes; crypto is cheap, so that negative is really not very interesting. And from the network's perspective it looks exactly like any other QUIC flow, which has the nice benefit of making sure that middleboxes don't do terrible things to you. So, next slide.
H
So it's key to know how much overhead there is, so QUIC can leave that amount of extra room. QUIC does not fragment its UDP packets, so if it needs to leave 24 bytes, it needs to leave 24 bytes: you can't just increase the coded packets by 24 bytes and have them all dropped on the floor. So it's a practical consideration that definitely needs to be exposed.
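The budgeting concern above — no fragmentation, so FEC overhead must fit inside the fixed UDP payload budget — can be sketched as a simple size calculation. The overhead breakdown and the 1232-byte budget in the usage line are illustrative assumptions on my part, not numbers from the talk.

```python
# Illustrative per-packet overhead: type byte + connection ID +
# packet number + AEAD tag. Real values depend on the QUIC version
# and header form; these are assumptions for the sketch.
QUIC_OVERHEAD = 1 + 8 + 4 + 16

def max_symbol_size(udp_payload_budget, fec_metadata_bytes):
    """Largest source/repair symbol that still fits in one packet.

    QUIC does not fragment its UDP datagrams, so a coded packet that
    exceeds the budget is simply dropped on the floor."""
    room = udp_payload_budget - QUIC_OVERHEAD - fec_metadata_bytes
    if room <= 0:
        raise ValueError("no room left for FEC in this packet size")
    return room

# e.g. a hypothetical 1232-byte UDP budget with 24 bytes of FEC metadata:
symbol = max_symbol_size(1232, 24)
```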
H
Obviously we need to know if the coding rate is dynamic — can I change it? This is more of a non-sliding-window case, so number three is probably not that relevant, but we might be able to change the coding rate or the block length at runtime. Obviously you want to add data to be protected, and to request that coded bytes be sent or produced for transmission — pretty basic stuff. The challenges and questions are probably more interesting.
H
H
Oh, yes, and the other question is: does the API need to understand packet numbers, or is there a QUIC-specific shim? Can we use FECFRAME? How is this actually glued to the transport? So, given we have a forward error correction API, there's got to be some kind of shim here for how we're going to map this on, and everything.
H
I added one slide about implementations. There are now, I think, at least five open implementations and probably ten total implementations of QUIC that largely interoperate with each other, with TLS 1.3. So if folks ever want to play around with an implementation, they're certainly welcome to try Chromium, but honestly some of the other ones are a little bit easier to get up to speed with — particularly the Go implementations tend to be pretty easy to look at if you're familiar with Go. So I would encourage you to play around with things.
O
E
O
You can put it at different levels of the architecture — for instance above UDP, below UDP, even below IP — and that is not a problem. However, when you use it with TCP, you have no choice: you have to be below TCP, and in the meantime you have to do a kind of cross-layering with it; otherwise you have to replace the network coding layer.
O
P
O
E
O
O
All the losses are masked, and TCP will never get out of slow start, and you can have some problems because you become opportunistic, and there is no point in using this protocol anymore. The main advantage of ECN is that it is implemented in any kind of OS, and they all follow the RFC — I've checked. The main difference between them is that only on Windows do you have to activate it by hand; it's not activated by default. Next.
O
So, to represent how this principle is applied simply: consider a network coding layer below the IP layer, and suppose you want to strictly behave like TCP. Simply, each time you get a decoded packet, and this packet is a rebuilt lost packet, you just mark the IP ECN bit and transmit it to the upper layer. Next. But, for instance, if you don't want to be as bad as TCP over random losses, in that case you can do whatever you want.
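The cross-layer signal described above can be sketched in a few lines: when the coding layer rebuilds a lost packet, it sets the ECN Congestion Experienced mark before handing the packet up, so TCP still sees a congestion signal; an optional policy hook stands in for the loss-discrimination step mentioned next. The dict-based packet model and the `ecn_policy` hook are my own simplifications.

```python
ECN_CE = 0b11  # ECN "Congestion Experienced" codepoint (RFC 3168)

def deliver_up(packet, was_rebuilt, ecn_policy=lambda pkt: True):
    """Hand a packet to the upper layer, marking rebuilt packets with
    ECN-CE unless the loss-discrimination policy decides this particular
    loss should be hidden from TCP (e.g. a random wireless loss)."""
    if was_rebuilt and ecn_policy(packet):
        packet["ecn"] = ECN_CE
    return packet
```

With the default policy every recovered loss is surfaced as congestion, which is the "strictly behave like TCP" mode; passing a discriminating policy gives the "do whatever you want" mode.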
O
You can apply a loss discrimination algorithm, a machine learning algorithm, any kind — even one allowing you to choose to filter some ECN bits and to prevent TCP from reacting to all losses. Next. There is no problem above the IP layer either; the only difference is that in that case you have to mark another field, the corresponding TCP field. Next. So, just to illustrate what we obtained, I did a simple experiment, shown here: a link capacity of 10 megabits.
O
There is a latency of fourteen milliseconds and two percent random losses. So basically, if I apply the famous Mathis formula, I should obtain something like two megabits, and that's what I obtain. Next. If I use a network coding layer — in this example I use the Tetrys implementation — and I do not cross-layer any information to TCP, in that case TCP becomes opportunistic: all loss information is masked, so it is going to fetch all the available bandwidth, and this is a problem for fairness with other TCP flows sharing the same link. Next.
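The Mathis et al. estimate invoked above can be written out directly. Note the MSS and RTT below are illustrative assumptions of mine, not the talk's actual testbed values; they are chosen so the formula lands near the ~2 Mbit/s figure quoted, and a different RTT would shift the result.

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Steady-state TCP throughput, Mathis et al.:
    rate ~ MSS * C / (RTT * sqrt(p)), returned in bits per second."""
    return mss_bytes * 8 * c / (rtt_s * sqrt(loss_rate))

# Hypothetical parameters: 1460-byte MSS, 50 ms RTT, 2% loss.
rate = mathis_throughput_bps(mss_bytes=1460, rtt_s=0.050, loss_rate=0.02)
```

With these assumed numbers the formula gives roughly 2 Mbit/s on a 10 Mbit/s link, which is the kind of collapse the coding layer is trying to avoid.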
O
O
However, for my TCP stack — the one implementing the ECN bit — using the ECN bit, the goodput is enhanced, as there is no retransmission of packets. All packets are delivered, and there is a delivery ratio superior to standard TCP. Next. And then, if I use a loss discrimination algorithm or a machine learning algorithm —
E
O
A
A
Okay, so yes, we didn't really know at the time what Emmanuel was going to present, but I guess we are not really in conflict, because what is in this presentation is basically some of the requirements. I mentioned the interim meeting that we had in September; one of the main topics identified in that meeting as something we needed to address was this interaction, or lack thereof, between network coding and congestion control.
A
So you have a series of lost packets, and hence you don't get any good use out of these codes. So this is actually almost a requirement for this whole research group: we need more structured codes, and we've done that, especially with long RTTs — which means the satellite people. Almost as a conclusion to this whole meeting: FEC has been shown to shorten the recovery time, and I think this is something that I will send to the satellite people.
A
Next. So what's the problem statement? We have to recognize that the use of FEC a lot of times hides the congestion information that TCP uses, because obviously we correct things: the packet was lost, we correct it, we send it to the other end, and TCP has no idea what happened in the middle. And there are instances — and actually this is something that's going to be slightly different from what you have said.
A
There are instances where a reduction of bandwidth — the reduction in bandwidth, for the Akamai people — means that we added some overhead to protect against the losses, and there are times where this is not necessary. What do we mean by that? Very short-term or spiky events: by the time we get the feedback that this thing has happened, it's too late; in any case, there's nothing we can do. So you could actually say we're always going to protect against these things.
A
A
So maybe it's not worth it. But we all know — and this is actually coming back from your presentation — that there are instances where you have more chronic things that happen on an almost regular basis; they're very long-term, and in the measurements that Akamai has, they're very clustered together — not chronic and spiky. Obviously there are times where both are necessary, and this goes into the idea that maybe we need more than one approach to deal with these different elements in terms of congestion control. Next. So the potential
A
approaches: one that basically has been done already is, instead of just doing the standard loss-based congestion control, to actually move to something like BBR, which is delay- or RTT-based, and this has actually given some good results, at least in some of the work that's been done at Akamai, and it's under investigation. There was also the idea of sending the loss information from the FEC to the congestion control monitor — so here's Dave's question.
G
This may make no sense at all — it just occurred to me, so just tell me I'm crazy. Thank you. In a delay-based congestion control scheme, inaccuracies in measuring delay throw the algorithm off, as opposed to inaccuracies in detecting loss. Is it your assumption that the computational cost of reconstruction in a coded system does not increase delay to the point where you're actually biasing the delay seen by the layer above? Okay.
G
A
Okay, thank you. The assumption is — especially when you start using systematic codes — that the cost of reconstructing a packet is very, very low and will not greatly modify your estimation of the delay. Okay. So the other idea was sending loss information from the FEC to a congestion control algorithm — estimating the loss instead of estimating the delay — and that's been the subject of thousands of papers.
A
Well, do we then decide which one is the congestion loss, or everything? We think that this may add a lot of complexity and a potentially non-standard solution, but again, it's been tried. The other thing — well, now we know about ECN being informed; that has been looked at, and presented in the same segment was the work that was done trying to distinguish between congestion losses and others. That also has been tried.
A
This is actually background, and so is the work that Emmanuel presented, from MIT and the Hamilton Institute, on TCP/NC and coded TCP, which are the creation of a completely different way of doing TCP with coding. The point is: this is not really a solution presentation; it's basically establishing the problems and where some of the avenues could be. Obviously Emmanuel presented something that is probably more like a solution; this is more like an investigation of what could work.
A
I
A
I
A
A
We would like that to be reported to this group. The interaction of congestion control and network coding has been the elephant in the room ever since this research group was founded. There were early results presented in the past which had essentially ignored the issues of congestion control and were essentially biasing the results incredibly, so that is not what we would like to continue.
A
We would like to start looking at new ideas, and we would like a draft to be produced, either for London or Montreal, to start putting ideas, potential solutions, and potential architectures into a valid document — and, of course, collaborators are welcome. You guys are already working on this; I don't know if you want your own draft.
A
If you want to do something in collaboration, there are probably other people who may have ideas that we would welcome, and what I understand is that William will continue working on these things and will obviously be the main contributor to the draft. So this was just a small presentation; again, at the interim meeting, congestion control was identified as a major issue that the group had to address. More questions? Oh.
H
The goal should be that they have no actual interaction: what forward error correction, or network coding, buys you is faster loss recovery. It is not a substitute for congestion control — if it's used as a substitute for congestion control... I don't know; I have a fairly strong opinion that that should be a principle of whatever document we write, but that's just a personal opinion. So, someone.
A
H
O
I definitely agree with your point here; I think this is something very important. And I believe we also have to consider, if one day or another we are going to deploy a lot of TCP/NC or anything of that kind, whether or not the redundancy packets — the repair packets — must be congestion-controlled, because if we slow down the TCP flows without considering that we inject a lot of repair packets, it could be a problem. But I definitely agree with this point; I think this is something useful to discuss. Yeah.
M
My general view on this matter is kind of similar to Ian's, and at first I thought I'm not going to stand up, because I don't have anything interesting to say — I don't think this is an interesting mix — but I just had an idea, and I want to share it. One of the benefits of ECN — one of the benefits is that
M
you can be basically lossless, right, and you have this signal. So, in the absence of routers being able to do ECN, a possibility could be to do essentially normal loss-oriented congestion control and just use coding to mask the loss — not to hide the loss entirely: you would still figure out that it happened.
A
Actually, look — when the sliding windows were being discussed earlier, and the windows that can grow: the minute you start seeing your window growing, you know something's happening, and you could start signaling back. So there are a number of these signals that are intrinsically part of what's happening in the pipe.
M
A
So that's why we think it's important to have some kind of a document where we can actually capture all these ideas, and at least have an element of an answer to people who say, "okay, every time you do network coding you're killing the TCP congestion control, or you're doing things that are bad for the network" — we could say: well, by the way, look, here are a number of things that can be done, and will probably be done, to actually address that. Yes, so.
G
A plea: let's not go over again the 30 years of mistakes of not telling the difference between congestive loss and other loss processes. We have networks that have other loss processes, and if we simply build something that can't tell the difference between congestive loss and other types of loss — like errors on wireless links, rain fade, or any number of other things — we're going to wind up in the same bad place that TCP was in for 30 years.
G
A
Okay, and this has actually been my Holy Grail, and I think this is why it's important for this group to address that problem: so that we do not repeat the errors of the past, and so that we can actually come up with — I would say a solution, but maybe many solutions — things that will actually work for what we're doing.
E
Okay, so this presentation is about a use case of network coding, specifically on multi-hop wireless networks. I will present one use case; I will present some examples of what I think are key features or constraints that are maybe not found in other settings; and I will also present some examples of solutions. The goal of this presentation is maybe to give some feedback on what is developed in this research group, like the modules of a generic API. So one motivation — one really timely use case — is the update of IoT devices.
E
You have to do a multi-hop broadcast, and the thing is, your firmware image is likely to be rather big — it can be hundreds or thousands of packets — and if you want to do a multi-hop broadcast with lots of packets, then it's a very fitting use case for network coding. I also want to mention that there are many other use cases, in general, in multi-hop wireless networks.
E
E
It builds on the fact that when you transmit one packet, you have many receivers, and most of the protocols do an optimization, in one way or another, based on the fact that you can select some kind of subset of the nodes — as represented in the figure on the left, where only the black nodes retransmit the packet — and if you select each subset properly, and if everything goes well, you have an efficient wireless broadcast. Now, this works well when it runs in open loop. The problem comes now
E
if you want to have some feedback. For instance, you cannot send an acknowledgement for every received packet, because you would saturate the wireless medium — so you don't want to do that. And then you have a real problem: it's not straightforward to design an efficient control plane where you are able to say that your broadcast is working well. So, to address this problem of an efficient control plane,
E
some protocols have been designed. That's DragonCast; there is a draft, and there is an implementation, and this protocol has been proposed by the people listed here. It is based on two principles. The first one is that network coding is used while doing the broadcast: every node in the network sends coded packets and participates in the network coding process. And what goes with
E
this is the fact that the state of the node is piggybacked on each coded packet, and this is a way to have knowledge of the state of the neighbors — so it's a kind of local control plane, in a sense. And the second principle is that the protocols act locally, and by acting locally this is still sufficient, in most cases, to ensure that the broadcast is working well globally. Locally, the way it works is that each node tries to help its neighbors — it's an altruistic behavior — if a neighbor is falling behind.
E
E
It's a sliding encoding window model, and what a node does is take the neighbors' state information, look at how many packets each neighbor has decoded, and help the node which is the most behind in the decoding process. In the figure, you have node B, which has decoded only up to source packet 9, and so the node generates coded packets over source packets starting from 10. So that's the principle of the protocol. And now, if we want to connect this to the generic API discussion — that's on the next slide.
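The window-placement heuristic just described — start coding at the first packet the most-behind neighbor has not yet decoded — reduces to one line. This is my own sketch of the rule, assuming piggybacked state gives each neighbor's highest decoded source packet; the real protocol's state is richer than a single integer per neighbor.

```python
def window_start(neighbor_decoded_high):
    """DragonCast-style heuristic: slide the encoding window so it starts
    at the first source packet the most-behind neighbor still needs.

    `neighbor_decoded_high` maps neighbor id -> highest source packet
    that neighbor has reported decoding (piggybacked on coded packets)."""
    if not neighbor_decoded_high:
        return 0  # no neighbor state yet: start from the beginning
    return min(neighbor_decoded_high.values()) + 1
```

With the slide's example — neighbor B has decoded up to packet 9 while others are further along — the window starts at packet 10.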
E
If we want to see where the generic API module would be, it would be around the sliding encoding window. It's not entirely clear exactly where it would sit, but there is at least this: a decoding buffer, and the delivery as the window advances — a module which basically does the coding operations and maintains a decoding buffer, so that the decoding side is easy.
E
If you are doing network coding point-to-point, it would be easier to get the state, because it's the state of one side; but here this information is not necessarily even accurate, because you can lose packets — that's what happens. And then, on the next slide — yes, this is what I was discussing — maybe the generic API would be something around this module, if we use it here.
E
So if there is a generic module which is designed well, it would be powerful, and with this use case we would use it in this kind of way. The last slide is just to mention that if you change the setting slightly — for instance, if you want the nodes to be decentralized — then probably you need different signaling and a more general coding strategy for window management, so you need some flexibility. There is a discussion of this in an expired draft, but it's just to say that things are not always straightforward, and maybe do not always have an easy resolution.