From YouTube: IETF99-NWCRG-20170720-1330
Description
NWCRG meeting session at IETF99
2017/07/20 1330
https://datatracker.ietf.org/meeting/99/proceedings/
A: Okay, I think it's time to start, even if I don't see Allison, but I guess she will join. In fact we have quite a full agenda, so it's better not to waste much time at the beginning. So this is the Network Coding Research Group meeting. I am Vincent Roca, and my co-chair Victor will participate remotely.
A: This is the usual Note Well, so please have a look at it. Those are the rules that we all follow in the IETF as well as the IRTF. A few administrative points: nothing special to say, you have a few links if you need them. All the slides were uploaded between yesterday and this morning, so do not hesitate, they are all available.
A: Okay, so very briefly, a quick status on active Internet-Drafts. We have two Internet-Drafts at the moment. The research group item Internet-Draft on taxonomy: you saw on the list that this Internet-Draft passed the research group last call two months ago, and I submitted it to the next step, that is to say the IRSG last call, on Monday.
A: So this is the agenda. As I said, the agenda is quite full and the schedule is quite tight. So if you are a presenter, then please be careful not to take too much time. Each slot lasts approximately 10 minutes, and I will check that you do not extend too much beyond those 10 minutes.
A: The idea with this presentation on FEC in QUIC is to see whether it makes sense to use sliding window codes, I would say, for all the use cases within IETF protocols, not just FECFRAME as I did, but seeing if it makes sense for QUIC, and why not for other such working groups. And then, finally, we will talk about a topic that was brought to the list.
C: So the objectives of the draft are split into four slides. The first one: here we see a typical multi-gateway satellite ground segment, mostly just to introduce the vocabulary. Basically, we have multiple satellite gateways, each gateway covering a subset of satellite terminals, and the gateway is further split into three main blocks, which are more detailed in the following slides.
C: Basically, if you see the contributions that we have received, that is the kind of contribution that is very useful for this draft, because it briefly describes the use case and the network coding protocol for use in a satellite platform. I guess you can go directly to the next slide: I don't want to go into the details of that protocol, that is not the scope of this presentation, but if you have any question, I guess the other authors would be happy to comment on it.
C: What we are looking for is a global vision, a synthetic view of all the activities that happened in that context, and we believe that this kind of analysis would be very useful for a future architecture-oriented document. In other words, we want to have an identification of the best way to use network coding and where we can actually put it, more than only on the physical layer, which is what we have at the moment. We want the document to be quite generic.
C: Next slide, please. Practically, the document at the moment has a first section on the satellite topology, basically describing the reference architecture which I have just presented, but very quickly, mainly to give the vocabulary that is used in the satellite communication industry, and then making a state of the art of what is actually deployed at the moment. The fourth section identifies the opportunities for network coding in satellite systems, and we also have a discussion on specific use cases and deployability considerations.
C: So that is the draft structure at the moment. Basically, we can provide lots of work and initial contributions on the two first sections, section 2 and section 3. We also have some feedback from different projects. But if you want to contribute, to provide your contribution and to have your work noted and considered in this document, it would be very welcome, for sections 4 and 5 mostly. I guess you can go to the next slide.
C: Basically, that is more an illustration of what we want to do and how we want to have a high-level view of what is actually deployed at the moment. The two top figures have been taken from the taxonomy document: on the left side we have the different layers at which the network coding can happen, and on the right side we have more the nature of the coding, whether it is intra-flow, inter-flow, single path, multi-path, end-to-end and so on.
C: We immediately tried to come up with some sort of table to look at where it is actually deployed. Basically, we can have upper-layer source coding end-to-end; we also have, at the moment, proper network coding at the physical layer. Basically, we want to make the link with the taxonomy document in order to have an organized view of what is actually deployed, and then on to the next slide.
C: Basically, we have the state of the art based on all the feedback from European Union projects, national projects and other companies' projects, to basically identify where we can apply network coding and map it to the taxonomy document. That is what we want to do; we think it's a way to visualize what is actually deployed. So basically, what we need is to have some contributions, particularly for sections 4 and 5.
A: Okay, I think it is great to create this way of making progress: a kind of design team that will federate contributions from several people. Several people already sent to the list their intention to take part in this document writing, which is, I think, a great thing. So thank you very much Nicolas and Emmanuel for taking this initiative, and I hope that we can make progress on this document rapidly, with contributions from everybody interested. I think that we already entered step one. Yes, I have two people waiting.
G: Will you limit yourself to the satellite link, right, so gateway to VSAT, not the whole network infrastructure? Because, from my experience, sometimes a lot of the need for network coding may not be just over the satellite link; it could be, you know, the internet: the satellite link is connected to other networks, but your draft will just look at the satellite link. And when you talk about network coding at the physical layer, do you mean physical-layer network coding, or FEC at the physical layer?
G: Because when I hear network coding at the physical layer, there's actually a physical-layer network coding, which is waveform coding, so I think it should be clear that network coding is at the network layer, not FEC at the physical layer. It's just a comment, but anyway, I understand that for now you just want to look at the satellite link. Okay, I'm good, thanks.
C: Maybe a clarification: we want to look at satellite network coding, but also taking into account the constraints that we have in a satellite system, where it is not only the satellite channel, but also the fact that you can have some transmission of specific network functions, or specific functions that are deployed, for example when we split the TCP connections.
C: For example, as is done for network coding in the network virtualization working group: they have a document specific to open research challenges, and we can have this kind of thing. Basically, based on the feedback that we have received, after having summarized everything that has happened in the past, we can identify future areas of research.
A: No, nobody is waiting, so thank you very much, and let's switch to the second topic. The second topic is network coding for ICN/CCN, information-centric or content-centric networks, which is also one of the main fields where network coding makes sense. So we have two presentations, one from Cedric and one from Kazuhisa. So Cedric, if you want to come up.
I: Okay, well, thanks for inviting me to participate in this. At first, the original item, I think on the wiki, I don't know if it has been modified, was talking about some paper that we wrote about network coding and ICN, using a combination of those two things to do video delivery.
I: We can also talk about how those two research groups could synchronize on this. There are some papers on the topic that we did; I think we did the first one with Marie-Jose Montpetit and Dirk Trossen in 2012, and then there are some other papers. The papers at the bottom are the ones I'm going to talk about today.
I: So NetCodCCN, from the University of Bern: I'm not an author of that, but they kind of ran away with the idea of this first paper, and then in return we took their code and we did this implementation of adaptive video streaming on top of NetCodCCN, and had to solve some of the implementation issues to do that.
I: Okay, so I wanted to start with a little background on ICN, and ICN here in this context is going to be CCN. CCN stands for Content-Centric Networking; it has been proposed, I think in 2006-2007, by Van Jacobson when he was at PARC, and has been, you know, one of the NSF-funded future internet architectures, as CCN/NDN. NDN was the name of CCN for that NSF-funded project, and stands for Named Data Networking. So that's been around for a while, and its key principle is that you have different semantics than IP, because everything is a request-response exchange.
I: You have two types of messages, two types of packets: interest messages and data messages. The interest carries the name; the packet format is on this picture and contains also some options and some parameters. Then the data packet returns the content name as well, some signature and security information, and then the payload as the data. And so that is the way the network is architected.
I: In the data structure that I mentioned before, in the packet format, the interest had a name but didn't have a source: you didn't say where the source was. The intent here is to provide some level of privacy. However, because there's no indication of the source of the packet, you have to create state in the router to route the data back to whoever was requesting it. That's what the Pending Interest Table does: it's a transient state that says, well...
I: I received the request for this data from that interface, so when I receive that data, I have to forward it on this interface. And then you have the FIB, which is going to tell you, when you receive the interest, to look up the name; you're going to do some matching in the FIB, and there you have some divergences between the different protocols: some do an exact match, some do a longest-prefix match.
I: Yes, so basically, when you receive the interest, what's happening is that there's a lookup to say, well, for this name, is the data that is being requested in my cache? So you first do a lookup on the content store, and if that's negative, the data is not there. If the data is there, you return the data and you're done.
I: And once the client sends the request for the second segment, it's not going to go all the way to the server; it's just going to go to this router in the middle. And that's good. Next slide.
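The lookup pipeline just described (content store, then Pending Interest Table, then FIB) can be sketched in a few lines. This is a minimal illustration, not the CCN/NDN reference implementation: the dictionary-based tables, face identifiers, and return values are all assumptions made for the example.

```python
class Forwarder:
    """Toy CCN-style forwarder: CS lookup, then PIT aggregation, then FIB."""

    def __init__(self, fib):
        self.content_store = {}   # name -> data (the in-network cache)
        self.pit = {}             # name -> set of faces awaiting the data
        self.fib = fib            # name prefix -> upstream face

    def on_interest(self, name, in_face):
        # 1. Content store: if the data is cached, answer immediately.
        if name in self.content_store:
            return ("data", self.content_store[name], in_face)
        # 2. PIT: if an interest for this name is already outstanding,
        #    just record the extra face; do not forward a duplicate.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, None)
        # 3. FIB: longest-prefix match to pick the upstream face.
        self.pit[name] = {in_face}
        prefix = max((p for p in self.fib if name.startswith(p)),
                     key=len, default=None)
        return ("forwarded", None, self.fib.get(prefix))

    def on_data(self, name, payload):
        # Cache the data and return the set of faces recorded in the PIT
        # that the data must now be sent back on.
        self.content_store[name] = payload
        return self.pit.pop(name, set())
```

With this model, a second client asking for the same segment is served from the router's content store without the interest ever reaching the server, which is exactly the "router in the middle" behaviour described above.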
I: So the benefits, of course, are to combine the benefits of ICN, which is this sessionless transport that, you know, connects directly to the content, that supports some kind of mobility in a more graceful way than having to set up some kind of session, that has this native multicasting, and that kind of takes advantage of this ubiquitous caching.
I: But you add to this the benefits of network coding, which is this kind of asynchronous use of multiple interfaces: if you multicast an interest to multiple sources, then whatever data is returned to you is not lost; it can be cached closer to the client that makes the request, and you can populate the cache faster if you have multiple clients requesting the same data. So you have different scenarios: you can look at unicast scenarios, point-to-multipoint, multipoint-to-multipoint scenarios, where, you know, in every case there's some benefit from the network coding, and then, of course, you try to preserve the one-interest-one-data flow balance of CCN.
I: Our original paper was more of a position paper. So now you can request coded segments, and you have to specify whether you want specific segments or coded segments, so you have some way of indicating this in the naming. The name carries the kind of request: there's a bit that tells you, you know, I allow for network coding, and the name that is returned to you has the coding vector embedded, as well as, you know, the generation of the video.
I: But the key difference, and that's a picture from the paper, is how the interests are filtered out. So, for instance, with typical CCN: in this case you have four interests being received on three interfaces. The first interest is received on interface A. And actually, one of the idiosyncrasies of CCN is that they renamed stuff: instead of cache they call it content store, instead of interface it's face, which is a lot of renaming if you ask me, but anyway.
I: The first interest is received on interface A and is being propagated; the top case is regular CCN and the bottom case is coded CCN, and in both cases it's the same. Then you receive a new interest, and now it's on a new interface, interface B. This one is not propagated, because you already have one outstanding interest, and the data that's going to come back is going to respond to both those interests, right? The third one comes on interface C.
I: So then again, you're not going to forward that interest, because the data that's going to come back is also going to satisfy this one. And the fourth one, in the case of regular CCN, is also being ignored, because it's just a duplicate of a previous one, so no action is taken. I mean, for the second and third one, you put an entry in your PIT saying, you know, when I receive the data, I send it back to interface B and to interface C; but for the fourth one, you do nothing.
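The aggregation difference just described can be made concrete with a toy counting model. In regular CCN, a second interest for a name already in the PIT is never re-forwarded; with coded content, each interest effectively asks for one more innovative coded packet, so a router keeps forwarding as long as the clients collectively need more packets than it has outstanding upstream. The rule below is a deliberate simplification of the coded-CCN behaviour, assuming every coded packet returned is innovative; it is not the exact NetCodCCN algorithm.

```python
def process_interests(faces, coded):
    """Return how many interests get forwarded upstream for a burst of
    interests arriving on the given faces (one interest per entry)."""
    pending_faces = set()   # faces recorded in the PIT (data goes back here)
    outstanding = 0         # interests already sent upstream
    forwarded = 0
    for i, face in enumerate(faces):
        # Regular CCN: one data packet satisfies everyone, so only one
        # upstream interest is ever needed. Coded CCN: each received
        # interest raises the number of innovative packets wanted by one.
        wanted = i + 1 if coded else 1
        pending_faces.add(face)
        if wanted > outstanding:
            outstanding += 1
            forwarded += 1
    return forwarded

# The four-interest example from the talk, arriving on faces A, B, C, A:
# regular CCN forwards only the first interest; the coded variant keeps
# forwarding because four distinct coded packets are needed.
```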
I: Anyway, so we took that code, we merged it with a DASH-like implementation, and we demonstrated some of the benefits of this, particularly seamless mobility. Part of what we show with the implementation is that if you add more interfaces, and you send interests on those interfaces, then the rate adaptation can view the whole link between you and those multiple servers as one logical link to the copy of the content, and perform rate adaptation on that logical link.
I
But
you
have
to
be
cautious
because
you
would
have
to
make
sure
you
don't
request
the
same
thing
on
the
different
links
and
you
ever
have
to
make
sure
you
request
them
at
the
right
rate
and
the
difference
link
in
this
case
and
our
permutation.
You
don't
do
any
of
that.
You
just
do
the
control.
The
ready
addition
logic
is
that
of
and
it's
not
modified
and
everything
underneath
is
transparent
and
because
you
know
networking
is
inherently
asynchronous
you
you
you
can
just
you
don't
have
to
worry
about
anything.
I
So
there
is
some
little
okay,
that's
a
hint
to
move
on
it
anyway.
That's
it
for
my
talk.
You
know
thanks
for
hearing
me
as
far
as
what
the
next
steps
are.
There
has
been
some
interest
on
this
domain.
I
was
actually
surprised
by
how
much
there
has
been
over
the
last
years.
On
that
on
that
paper
we
wrote
couple
years
back,
and
so
definitely
there
is
some
kind
of
people
thing.
I
There's
another
presentation
on
the
same
topic
coming
up,
and
so
that's
good,
but
you
know,
and
if
this
research
group
would
like
to
take
on
this
work,
I
think
that's
a
great
idea,
but
you
know
it's
just
kind
of
offer
it.
You
know
to
you
guys
as
well
and
and
also
I'm
more
involved
in
the
ice
energy
group
than
in
the
NW
c
rrg,
but
you
know
says
that
would
be
nice
if
they
as
well
also
to
build
bridge
with
that
other
community
anyway.
So
you
have
questions
or
your
remarks
or
comments.
B: Okay, so I'm Kazuhisa from NICT, and this is work with my colleagues Hitoshi Asaeda and Thierry Turletti. This paper was accepted at INFOCOM and already presented at the last INFOCOM conference. The title is Low Latency Low Loss Streaming using In-Network Coding and Caching. Next, please. So, the motivation of this work is to fulfill the various requirements, especially for the target, which is UHD-level 4K or 8K delay-sensitive video streaming; delay-sensitive means really real-time streaming.
B: So the requirements we formulated: low latency, for example 150 milliseconds, for interactive communication; this 150 milliseconds is defined in the ITU-T recommendation. And we definitely need to achieve low packet loss to maintain high QoE. Of course, from the network perspective, we need efficient forwarding, able to support a large number of receivers, because with high-quality, high-bandwidth streaming, if a large number of receivers receive the same content, then the network will be heavily consumed.
B: Hop-by-hop forwarding is a feature of CCN/NDN, so it is natively supported. To fulfill these requirements, we use network coding, which is actually random linear coding for low latency, and also in-network caching, natively supported by CCN/NDN, for efficient data delivery and for data recovery within an estimated acceptable link delay. This acceptable link delay is not an end-to-end round trip; it is a requirement from the application.
B: The acceptable link delay is something like a requirement used in the calculation of the retransmission deadline for lost packets, and the data recovery is based on the measured data loss rate. To support these functions, we newly define the symbolic interest and the control interest, beside the CCN/NDN regular interest. For simplicity we call it the regular interest: a regular interest must be sent to get data, as Cedric mentioned, but that regular interest may cause some problematic situations.
B: I'll explain later. So we newly defined the SMI and the CNI: the SMI is used for streaming requests, including layered video information, and the control interest (CNI) is used for RTT measurement, for notifying the loss rate and the coding level for encoding, and for switching to regular interests. So this is the system architecture.
B: We make something like a separation between a control plane and a data plane. In the control plane, the consumer sends different kinds of interests: the symbolic interest, the regular interest, which is defined by the standard, and the control interest.
B: In the data plane, you can see the one video source, and you have a large number of consumers receiving the same content. The video source can be layered, and the layered encoded data, and also the network-coded data, is transmitted from the source; the intermediate routers can cache the content itself and also the coded data as well. Let me briefly explain the situation at the receiver: when the receiver retrieves the content and detects a missing packet, then it requests the retransmission of the missing data.
B: But if, luckily, the missing data is cached in the network, then that content can be transmitted from the cache. If, unluckily, you don't have the cached data, but luckily you have the coded data, then the coded data can be used to recover inside the network, and the user finally retrieves the complete packets. So the network coding is applied per coding block, which consists of a number of different original, non-coded data packets.
B: The encoding vectors are randomly selected from a Galois field of size 2^a, and K is set to a constant value, considering the waiting time to recover lost data.
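As a concrete illustration of the random linear coding step just described, here is a small GF(2^8) encoder. The talk only says the field is GF(2^a) with a constant block size K; the field size 2^8, the reduction polynomial 0x11B, and the packet sizes below are choices made for this sketch, not details from the paper. A receiver that collects K coded packets with linearly independent coding vectors can recover the K source packets by Gaussian elimination over the same field.

```python
import random

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with reduction polynomial 0x11B."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def encode(source_packets, coding_vector):
    """Combine K equal-length source packets into one coded packet by
    XOR-accumulating each packet scaled by its GF(2^8) coefficient."""
    length = len(source_packets[0])
    coded = [0] * length
    for coeff, pkt in zip(coding_vector, source_packets):
        for i, byte in enumerate(pkt):
            coded[i] ^= gf_mul(coeff, byte)
    return bytes(coded)

# One coding block: K source packets, coefficients drawn at random.
K = 4
source = [bytes(random.randrange(256) for _ in range(8)) for _ in range(K)]
vec = [random.randrange(1, 256) for _ in range(K)]
coded = encode(source, vec)
```

Note that with a unit coefficient vector the coded packet degenerates to a plain copy of one source packet, and with all-ones coefficients to a plain XOR, which is a quick sanity check on the arithmetic.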
B: So this is one of the core functions for estimating link conditions. The data loss rate can be easily calculated, because we detect the sequence number and the network coding parameters stated in the data header, so the receiver can easily calculate or recognize the data loss rate. But the acceptable link delay is something like a hop-by-hop delay.
B
This
is
a
little
bit
difficult
to
measure
the
hop-by-hop
delay,
but
basically
the
application
itself
is
transmitted
within
150
millisecond.
This
is
our
assumption,
but
if
you
receive
the
data,
so
let's
assume
that
the
source
already
encoded
absolute
time-
sorry
absolutely
time,
so
he
can
recognize
okay.
So
this
content
coming
from
a
source
who
is
the
civil
period-
let's
say
100
millisecond
or
50
millisecond?
So
he
has
something
Bastille
budget.
B: If he retrieved the content from the source and he calculated that this took 100 milliseconds, then you have a 50 millisecond budget, and so, within a router, this 50 millisecond budget can be used to encode and to decode the lost data. This is what we call the delay budget. So each router calculates and measures the RTT between them, and to make this kind of measurement, we defined the control interest.
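The delay-budget arithmetic in this example is simple enough to write down. The sketch below assumes, as in the talk, a 150 ms application deadline, a source that stamps an absolute send time, and per-hop link delays learned from control interests (CNIs); the function names and units are ours, not the paper's.

```python
# Application deadline: the ITU-T interactivity figure cited earlier.
APP_DEADLINE_MS = 150.0

def delay_budget(send_time_ms, now_ms, deadline_ms=APP_DEADLINE_MS):
    """Remaining time a router may spend on recovery (de/re-coding),
    given the absolute send time stamped by the source."""
    elapsed = now_ms - send_time_ms
    return max(0.0, deadline_ms - elapsed)

def accumulate_upstream_delay(link_delays_ms):
    """Total source-to-here delay a router reports downstream in a CNI,
    as the sum of the per-hop link delays learned from its upstream CNI."""
    return sum(link_delays_ms)

# Packet stamped at t = 0 arrives at t = 100 ms: 50 ms remain for
# in-network loss recovery, matching the example in the talk.
budget = delay_budget(0.0, 100.0)
```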
B: This control interest can measure, just like a ping. Each router has such kind of information: if router I injects a CNI towards router J, then router J responds with the link delay between the source and J, so router I can recognize: okay, from J to the source the delay must be, let's say, D_SJ. And router I, if it received a CNI from downstream, can then send the total delay from the source to router I to its downstream neighbors.
B: So I won't explain in much detail today, because it takes time; let me directly explain our simulation results. This is the simulation setup, done with ndnSIM, and we have layered video with a total rate of 35 Mbps, with rate reduction across the layers.
B: So this is the result; the green one is our proposal. Thanks to the simple mixture of the CNI, the regular interest and the symbolic interest, we can strongly reduce the number of transmitted interests, which are something like signaling, because CCN/NDN requires an exact match to retrieve contents. If we use something like a symbolic interest, then we can omit the bursty or high-rate interest transmission, so we can greatly reduce such kind of signaling.
B: The figure on the left side shows that we increase the normalized QoE compared with the previous approach. So that's all for our paper's explanation. This slide just shows something like my personal opinion that I already sent to the mailing list: we may be able to contribute to various work in this network coding research group, for example potential work on network coding research. I don't know whether all of you, or many of you, can agree to this thought, but I think that, for example, describing common research challenges would be interesting.
A: So first of all, I would like to say that Nikolaos sent me an email; he sent an email to the list explaining that there was his paper, that Cedric mentioned, with NetCodCCN available, and he sent me an email just to tell me that he couldn't join this afternoon, but he volunteers to take part in this activity and to contribute. So that's the first point.
A: The next point is that I think it would make sense, since there are at least three people interested, and probably a bit more who will look into this topic: it would perhaps be interesting to have the same approach as we had previously for the previous topic, satellite communications, that is to say, trying to set up a small design team, a few people interested and volunteering to contribute and work jointly. So I hope that you all would like and agree to work jointly on this topic. Well, that's the first question to you both.
K: This is Allison Mankin, IRTF chair. I wonder, if you're going to have some design teams, whether you might want to create a kind of template for what is in a document that takes a certain aspect of FEC, and then put some consistency across the documents, because there's probably quite a bit of common ground in terms of describing why you're doing this and what the components are.
B: This doesn't mean we should only concentrate there; for example, network coding for ICN. This is my personal opinion, but we can formulate several common research challenges, covering ICN/CCN, satellite, wireless and other areas. There are a lot of architectures, a lot of various or different link layers and different applications, so the research challenges could be different, but we can summarize them: for example, for this domain, over this kind of network condition, we should address this kind of issue.
B
Then
we
should
address
this,
so
the
drizzle
challenge
itself
can
be
us,
for
example.
This
is
my
personal
driven
opinion,
but
can
be
R,
something
like
a
single
document,
but
for
the
baseline
scenario,
it's
a
little
bit
difficult
to
concentrate
only
for
the
one
general
solution
or
baseline
scenario.
That
scheme
can
be
adapted
to
various
network
or
various
applications.
So
the
baseline
scenario,
for
example,
we
can
form
the
some
design
tip-
let's
say
design
team
for
ICN
and
design
t4
satellite
I
don't
know,
but
so
this
is
something
like
my
personal
opinion.
A: Perfect, perfect. There are many situations where it can make sense to have this kind of architecture or solution for distributing content. On the list, a few months or one month ago, we mentioned the possibility of applying this approach for pushing content within trains, for instance. So yeah, I think there is good potential. That's great. Okay, thank you very much. Let's switch to the next presentation.
A: I was not clear enough on what was expected for this afternoon, and in particular, I was not very clear on the fact that we were looking for existing solutions, in more or less different domains, for applying network coding and coding; therefore, what he sent me was more on potential use cases where it could make sense to have coding, I would say, instead of a presentation on existing solutions. I don't know if you see what I mean, but I will not go into the details of this presentation.
A: Well, the idea is that in sensor networks, let's say, either for some particle detector or experimental infrastructure, many thousands of sensors at the same place produce a very huge amount of data in a very short time frame, and it could make sense to have some kind of coding. But for the moment, we don't know if it is the case or not.
A: But I don't want to enter into details, because, as I said, the slides are not very well focused on what I had in mind, so I apologize; I was not very clear in what I asked you. I don't know if you want to add a few words. Anyway, this is work in progress: if you know that in some of the specific domains that were mentioned you are aware of network coding applications or ongoing experiments, please tell us.
A: He told me that his network connection would not be very good, so I don't know if he can join; anyway, let's switch, and if you want to say something, you can come back later on. Okay, let's switch to the next presentation, from Marie-Jose Montpetit. This is a joint presentation with Brandon Williams from Akamai on something that we mentioned on the list: this generic robust low-latency tunneling system. Thank you.
G: Hi everybody, I hope you can hear me. Thank you for allowing me to present; I'm sorry I couldn't be there, and I think Brandon also apologizes, because he couldn't be there either. So this is very recent work that we started doing, I would say the early part of it maybe last year, going all the way to yesterday afternoon, and there is a part of it which has use cases that relate a little bit to what was just presented, and also to the presentations on ICN/CCN.
G: But this is actually looking at it from a different angle: not network coding as a set of libraries to do coding, but how do we introduce network coding as an element in a toolkit to enhance performance in a network? I know I don't have the slides and cannot move the slides, so could you please get to the next slide?
G: Obviously, there is all the enterprise work that's been done on overlay networks, and cloud computing: a lot of people understand the computing part of cloud computing, but don't really get that there's a cloud and you have to get in and out of the cloud and keep performance there; and, I would say, from the previous presentation...
G
There
was
this
thing
about
the
mobile
fixed
network,
the
airplane,
to
peer
to
peer
to
multi,
satellite,
the
trains,
and
this
has
to
some
kind
of
a
rebirth
of
this
I
think
in
terms
of
some
kind
of
an
interrupt
the
internet.
So
what
we
feel
is
that
what
is
needed
is
some
research
and
implementation
of
some
flexible,
dynamic
application
and
policy
based
mechanism,
because
we
need
to
enhance
the
performance
of
these
rising
services
beyond
the
traditional
quality
of
service,
which
was
essentially
just
prior
to
icing
services
and
allowing
them
to
have
enough
bandwidth
next
slide.
G
So
what
we
suggest
is
to
look
at
some
kind
of
a
robust
low
latency
tunnel,
and
what
do
we
mean
by
that?
And
we
would
like
to
have
a
dynamic
negotiation
performance
inside
the
tunnel
when
the
tunnel
is
established
instead
of
just
being
the
fact,
oh
I'm
going
to
send
all
my
stuff
there?
What
is
the?
What
is
the
performance
that
this
tunnel
is
going
to
give
us,
and
should
we
use
this
one
or
another
one?
Should
we
use
some
FEC
or
not?
G
Should
we
use
all
kinds
of
other
performance
enhancement
through
that
tunnel
ever
to
come
back
to
this
in
the
next
slide,
and
to
essentially
define
my
end-to-end
semantics
from
where
I
am
at
my
application
going
to
where
I
want
to
go
and
the
other
applications
we
wanted
to
be
completely
in
user
space?
We
don't
want
to
be
impacting
things
that
are
below
us.
G
There's
a
lot
of
issues
now,
with
middleboxes
being
introduced
everywhere
on
the
network
and
I'm
sure
that
you
guys
some
of
you
went
to
the
TLS
meetings
where
there
is
issues
with
with
middleboxes.
We
don't
want
it
to
interact
with
crypto.
We
don't
know
where
the
VPNs
are,
and
we
don't
want
to
deal
with
that
either
and
we
do
not
want
to
have
network
termination
hop-by-hop
if
we
don't
have
to
do
it,
because
it's
so
complicated
to
restart
everything
and
we
redefined
what
is
needed
at
the
next
halt
next
line.
G
So
what
needs
to
be
negotiated?
Well,
we're
inside
this
network
coding,
research
groups.
Obviously
that's
the
use
of
the
FEC
algorithm
and
the
implementation
want
to
use
and
those
depend
not
only
on
technical
issues.
Do
I.
Have
you
know
what
is
my
error
rate
is
at
first
he
is
not
bursty,
but
there's.
Also
legal
constraints
and
requirements
is
a
lot
patents
in
this
field.
G
There are software licenses: do I have the legal rights to use this? Does my destination have the right to use this? So I think we want to be able to negotiate that. Obviously, there are the reliability requirements, which are central, based on packet error rates and profiles. I think it's not just the error rate but also the burstiness. When we look at losses that come from the physical layer, they have a tendency to be a little bit more random, because of the FEC implemented there.
G
At the lower layers, yes; but when you start looking at the network layer, there's actually a lot of burstiness, and very long burstiness, and your code has to be able to adapt to this. Do we need in-order delivery? For some applications yes, for some applications no, and it actually relates, this time, to maybe the difference between having video versus a page load. For example, I think there is a trade of delay tolerance against reliability: if we can tolerate delay, maybe we can have a more reliable system.
G
This has been essentially the main goal of forward error correction for about 17 years. Would we like to have packet pacing to maintain constant delivery rates? That actually also helps in the monitoring side of this work. Congestion control and fairness: actually, this is a question. For example, do we need the protocols to be TCP-aware? There is some network coding work that's been done that is TCP-aware, and there's some that is done
G
that is not. And a big one for us is this micro-flow support versus muxing: we would like the micro-flows to be independent from the tunnel protocol evolution. So if the tunnel itself evolves, the flows that are within it should be untouched by it. Obviously, some of these are implementation matters; some of them are still research, and I think the whole set of this is actually a nice network research project. Next.
G
There's TAPS, this concept of decoupling the application from the transport, which relates a lot to this. There's some new path-aware networking research that has been proposed at this IETF, and there's other work related to congestion control as well. And also, relating to what Cedric presented, we gave some thought to how this applies.
G
There are many ways of looking at architecture. I think in this case it's more like a network architecture draft, and I think, based on the numerous implementations of network coding inside networks, there could be an implementation draft on how to do this within what exists. And we would like to know if this is important work for the future of network coding, as part of the network coding performance toolkit.
G
We feel it's important, but again, we're suggesting it to the group, because I submit it is out of the current scope. Next. And again, I made this very generic, as it's a very recent idea, but I would like to thank Ian and others for some of the talks that led to this, and of course the people at Akamai who fed in some development that helped us focus our ideas on this. So that's about it. Okay.
A
Thank you. I have a technical question, and then we can continue with questions about what to do next. The technical question is about the potential use of an in-network recoding feature: do you believe that this tunnel will use only end-to-end encoding, or is it also necessary to have recoding within the network? Do you have an idea about it? Not yet, actually.
A
I understand now; okay, thank you. Okay, so there are many important research topics in this potential activity, very interesting questions. So the key, I think, is whether there is a critical mass to address this topic as well, in addition to the previous two, or not. That's one of the key points, but I see that there is support, strong support I would say, from Akamai, I guess.
H
Something interesting that I saw on viewing your brief presentation is the end-to-end, application-level component: the part where you express the various performance, or otherwise QoE, requirements. That's clearly something that we did not consider; as you said, it's a little bit out of what we initially thought in the charter and so on, but I don't think it would be bad to actually have it as a research topic. In short, I think it's interesting, and we haven't actually discussed or considered this topic yet.
H
I agree; I think this is interesting, and we haven't discussed that. Clearly it has implications: whatever the application wants from the network service that would be made possible by this tunnel, which has network coding or other machinery under the hood, that would be interesting, as well as the implications of what the application requests, because the application is normally agnostic about what it is. Is it an erasure coding? Is it some other mechanism under the hood?
F
So it's a question from us both, actually; not only me, but me and Nicolas. The question is: it sounds like more a problem of traffic engineering than a problem of network coding. You presented several solutions, but it was much more related to network engineering, network traffic engineering, rather than network coding itself.
G
And that's what I said: it's actually to put network coding inside a toolkit for traffic engineering, put it this way, or network coding inside a toolkit of, I would say, broader network engineering. I agree with you, this is not network coding research, and I actually said that: this is not looking at codes.
G
This is looking at codes inside an infrastructure for traffic engineering, which I think is networking research, maybe more than network coding research. But because it implies the use of network codes in maybe different use cases, I think it is related to this group. But I agree with you: this work will not lead to new network codes.
C
So maybe that's where the gap in the lingo is. When we wanted to speak, in our draft on the deployment of network coding techniques in satellite systems, for example, we had the question of the interaction with the virtualized work, and basically, I don't know: maybe if we speak about lots of subsets of functions that need to be interconnected, maybe it's more interesting to try to target more working groups that deal with these functions.
G
I think it's part of that; that's part of the work. I'm not saying that any of these things is independent on its own. But in any case, if the group thinks this is not interesting, you know, I don't mind. I don't know, I'll ask Brendan whether to support this; this is just a remark. But if this is not interesting for network research, we can actually take it to transport.
E
Can you hear me? The point I wanted to make about the network engineering aspects is that I think what we're really focused on are the ways in which applying the codes at the network level directly impacts the network engineering. There tend to be a lot of tunable parameters related to how the codes are applied that have a direct impact on the network quality and on the quality of experience for the application.
E
The tunings that are appropriate for real-time video will tend to be different from the tunings that are appropriate for data transfer, and that sort of thing. The idea is that, for really effective application of network coding, we somehow have to take into account the impact it has on the network engineering, and the interrelationship between the tunable parameters of the network codes themselves and the quality of the network experience.
C
If you want to add something quickly? Yes, just that it is much clearer now, and we understand the need at the application level: it's your application, the quality of experience, not the network. That's why we were a bit confused. Now the link with the TAPS working group is clearer.
C
And also, that's why we understand better the gap between the network coding solutions that you have and what is actually deployed at the moment at the application level, and how we could do more. That's also the link with the QUIC presentation that will come right after. Thank you.
M
So, just a brief reminder. Suppose you want to send some packets over a network to another node, and you want to protect them by adding some extra packets. So you generate some linear combinations, you send all these packets, and if you have losses, you reverse the operations to rebuild your missing source packets. That's why we need a finite field to perform these linear combinations. If you are interested, there is a really good technical report about finite field matrices applied to erasure codes; it's a good entry point.
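The operation just described, building repair packets as linear combinations of source packets over a finite field and then reversing the operations at the receiver, can be sketched as follows. This is a minimal illustration, not the codec from the talk; it assumes GF(2^8) with the common 0x11B reduction polynomial and fixed example coefficients.

```python
def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply in GF(2^8), reduced by x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def repair(sources, coeffs):
    """Repair packet = coefficient-weighted sum (XOR) of source packets, byte by byte."""
    out = bytearray(len(sources[0]))
    for pkt, c in zip(sources, coeffs):
        for i, byte in enumerate(pkt):
            out[i] ^= gf_mul(c, byte)
    return bytes(out)

# Two toy source packets protected by one repair packet with coefficients (1, 2).
s0, s1 = b"\x10\x20", b"\x03\x04"
r = repair([s0, s1], [1, 2])
# If s1 is lost, the receiver solves r = 1*s0 + 2*s1 for s1 over the field.
```

Decoding a missing packet amounts to solving these linear equations over the field (Gaussian elimination), which is why well-chosen, invertible coefficients matter.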
M
Now, the problem with finite field arithmetic is that the operations are complex, so the solution is to move away from this structure into another structure, called a ring, where operations like multiplication are much easier, so it's fast. We use fast transforms to move all our field elements into ring elements (a bigger ring), we perform the multiplications inside this ring structure, and then, once we get the results, we go back from this ring to the field. It's a known technique, based on ring decomposition.
M
So it's not new; what we did, actually, is define new transforms. Next slide: the following transforms. So basically you have these operations to do. The vector represents your data, so you have thousands of bytes of data, your packets, and you have a matrix. The first thing you need to do is transform your data, and this transform must be fast because you have a lot of data. The next transform (next slide) is about the generator matrix.
M
This transform doesn't have to be fast, because there are just a few elements, but the elements you get must be well chosen. When we move from the field to the ring, we actually move into a bigger ring, so we have a choice to make: one field element can have several representations in this ring, and by using this we can reduce the complexity; I will explain later. Next slide. So, you perform the multiplication in this ring.
M
Okay, so you get ring elements, and now we apply the reverse transform to go back from ring elements into field elements. Next slide. Okay, so the first transform you could use is called the embedding transform, described by Ito in 1989; it's very fast for transforming field elements into ring elements and back. Additionally, we added two other transforms. The first is called the parity transform; like the embedding transform, it's very fast for transforming field elements into ring elements and back. And the most important is this last transform.
M
So here is the final scheme: you apply a special transform to your data vector, and you apply another transform to the generator matrix. Just to give you an idea of how the transform can reduce the complexity, let me give you an example; next slide. When you have a generator matrix, it is composed of field elements. One known technique is to convert this matrix into a binary matrix, where each entry represents an XOR of a part of a symbol. So on the left you have this unstructured matrix.
M
It represents all the operations you need to perform inside the field. Now, if you move from this field to a ring, you get the matrix on the right. The first thing you see is that you have fewer ones, fewer entries, so you have fewer operations to do. And then, if you look closely, the elements are composed of cyclic diagonals; that's an important property for optimizing the coding operation. So, okay, that's all. Now, we implemented our method and compared our codec to the fastest implementation we know of, which is called ISA-L.
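The field-to-binary-matrix conversion mentioned above can be seen on a toy scale (my own example, not one from the slides): in GF(4) = GF(2)[x]/(x^2 + x + 1), multiplying an element (a1, a0) by x is exactly the binary matrix [[1,1],[1,0]] acting on the bits, turning a finite-field multiplication into pure XOR operations.

```python
# Multiplication by x in GF(4), written purely as XORs on the two bits:
# x * (a1*x + a0) = a1*x^2 + a0*x = a1*(x + 1) + a0*x = (a1 ^ a0)*x + a1
def mul_by_x(a1: int, a0: int) -> tuple:
    """Apply the binary matrix [[1,1],[1,0]] to the bit vector (a1, a0)."""
    return (a1 ^ a0, a1)

# Sanity checks against the GF(4) multiplication table:
assert mul_by_x(0, 1) == (1, 0)   # x * 1       = x
assert mul_by_x(1, 0) == (1, 1)   # x * x       = x + 1
assert mul_by_x(1, 1) == (0, 1)   # x * (x + 1) = 1
```

In the same way, each coefficient of a generator matrix over a larger field expands into a small binary sub-matrix; the structure of those sub-matrices (such as the cyclic diagonals mentioned in the talk) is what makes the XOR implementation fast.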
M
So it works on Intel, ARM, anything you want, and it allows the use of uncommon fields like GF(2^64), because that is not possible with other techniques like look-up tables. This field is very fast and also provides good correction capacity. We tested it inside our Tetrys codec and it has good performance. So it's cool, and I'm done. First, questions.
M
When you transform the data from a field into a ring, you actually have bigger data, because the ring is bigger, so you have to represent it with more bits; that's why you need to go back. And actually, the transform just adds some redundancy: the parity transform is just a sum, a parity, over the bits of your field elements. That's why you need to go back.
M
Actually, can you go to the transform slides, the previous one, where I describe them? Yes. So the embedding transform: you just embed your field element into a ring element, so that means you do nothing; okay, so it's fast. And for the reverse transform, you take the last part of your ring element and map it back into the field element. And the other transform is the parity transform.
M
So, when you go from the field to the ring, you just add a parity bit, actually, and the reverse transform is just removing this bit. Okay. Can you go to the other slide, the next slide? Yes, this one. So, depending on how many packets you have and want to generate, you will use the embedding or the parity transform, but they are both very fast and they are described in the paper; I can send it to you if you want.
N
I mean, it reminds me a little bit of some work some people did on something called optimal prime fields, where they also need to do a mapping before and after the encoding; I was wondering if that was included. Because I know this ISA-L implementation as well, and they are also really fast, so it's impressive that you are managing to outperform them by that much.
A
Looks like magic! Thank you. We're running a little bit late, three minutes, so I will ask that further questions be taken electronically, okay? Thank you. So, I forgot to introduce: we have moved to the second part of this afternoon's session, on end-to-end FEC coding techniques; that was for the implementation part. Now we will have a presentation on sliding window codes.
A
Each receiver will experience a different loss model; then you apply FEC coding to protect the flow, and the goal is to achieve a specific target quality. That's one of the key criteria: each receiver must achieve a certain target quality. So you have several parameters: the loss model, which will differ between the various receivers, and this very important parameter, the latency budget, the FEC latency budget that you can afford for FEC encoding and decoding operations.
A
A
So the intuition, the very basic intuition: with block codes you are constrained by the block creation time, so if you have an isolated packet loss, it can take time to recover from it. With sliding-window codes, with this continuously sliding encoding window that slides over the data set, you have more opportunities to repair lost packets, more rapidly. That's the key idea, the key intuition.
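A toy way to see this intuition (my own sketch, using positions in the packet stream rather than real random linear coefficients): a block code can only emit the repair protecting a symbol once its block is complete, while a sliding window encoder can emit a repair covering the most recent symbols at any point, so an isolated loss is repairable almost immediately.

```python
K = 4          # block size / encoding window size (example value)
LOST = 1       # index of the one symbol lost in transit

def block_repair_position(lost: int, k: int = K) -> int:
    """A block code sends the repair protecting `lost` only at the end of its block."""
    return (lost // k + 1) * k

def sliding_repair_position(lost: int, k: int = K) -> int:
    """A sliding window encoder can cover the loss with the very next repair symbol."""
    return lost + 1

# Symbol 1 becomes recoverable at position 4 with the block code,
# but already at position 2 with the sliding window code.
assert block_repair_position(LOST) == 4
assert sliding_repair_position(LOST) == 2
```

The gap grows with the block size, which is exactly the FEC latency budget trade-off the talk keeps coming back to.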
A
Now the question is: does it work or not with all types of packet losses? So, once again, what can you expect from the use of sliding window codes? We can expect lower FEC-related latency, for the reason I just mentioned, but we can also expect, and this is the intuition, improved robustness. Why? For two reasons, essentially: because the sliding windows overlap with one another, this will potentially be a good asset in case of long bursts of packet losses.
A
So those were the few introductory words; now the experimental setup. We compared RLC, which is the sliding window code that we are proposing to the TSVWG working group, a very basic solution, nothing new inside, with Reed-Solomon codes, which are ideal block codes, with small blocks at least, and we also included a Raptor code comparison.
A
Raptor codes, unlike Reed-Solomon, are far from ideal, especially when you are considering small blocks, which is the case here. So you need to set up a few tricks: we need to artificially divide packets into sub-packets, or symbols into sub-symbols, in order to artificially increase the number of symbols per block, in effect going to larger blocks. It's a trick; it is described in the RFC specifying Raptor, but you have to do it. So, the experimental setup is similar to what I presented just before.
A
So you have this CBR transmission channel, which makes sense for the use case that we are considering with 3GPP: we are considering constant bit rate transmissions, in this case a hundred packets per second. You have the target quality: we don't want to have more than 10^-3 packets missing after FEC decoding; that's the target quality. We have the loss model, which is also a very important parameter, and then we have the latency budget.
A
The latency budget is the amount of time you can afford for encoding and decoding operations; in that case we consider two values, a quarter of a second and one second. Then we want to measure, in the first set of tests, how much repair traffic we need to achieve the given target quality of 10^-3, given a loss model and given the FEC latency budget. So what is the additional repair traffic we need to inject? And when you inject that repair traffic, of course, you reduce the source application traffic.
A
I don't have time to go too much into details, but the way it's implemented must be considered, especially with block codes, because depending on whether you have a single output queue at the FEC encoder, or two queues, it will differ in practice. So, just a few words about it. Let's consider a block code.
A
So, since we are considering constant bit rate transmissions, it means that the traffic shaper behind this queue will generate, sorry, will send, a sequence of repair packets, followed by source packets, then repair packets again, followed by source packets, and some source packets will be delayed because of that. So if you have a single output queue, it will bias your results.
A
We are also focusing on 3GPP use cases, which means that mobility needs to be taken into account, and 3GPP has produced loss models for taking mobility into account. Basically, you have two types of receiver: one is a vehicle passenger and the other one is a pedestrian. If you are considering a pedestrian, this pedestrian will remain behind an obstacle for quite a long time.
A
It doesn't move so fast. So it ends up in two completely different types of loss models, as you can see in this figure: on the top you have the vehicle passenger, where you will see, well, isolated packet losses more or less, and in the second case you will have these long bursts of packet losses. So this is also something interesting: taking into account the official loss models.
A
So the lower the better, of course, and if you look carefully, you will see that sliding window codes are always much better, significantly better, than the other block codes; even Reed-Solomon block codes, and Raptor, are well behind all of that. So that's the main result: it really makes sense. Then, if you look more carefully, you will see that some of these scenarios cannot be addressed in a reasonable way, with a reasonable amount of repair packets. This is the case for the worst scenarios.
A
Since we are considering a single source, a single data flow, you need to make a choice. Now that you have the results, you can make this choice: you need to choose the target code rate, the amount of repair traffic you can accommodate. That's one way to say it; the other way is to select the channels that you want to support. Both questions are more or less equivalent, and then, from this, you can measure the actual latency that you experience at a receiver.
A
Very quickly, as I don't have too much time left: those figures are for the final situation, with a single data flow, with a specific latency budget (half a second in that case) and a specific amount of repair traffic (50%, so a code rate of 2/3 in that case), and then you measure the experienced latency at a receiver with this rule, depending on the mobility scenario. What you can see is that, with RLC, for the very good receivers, the curve on the left, the one percent and five percent ones...
A
Yes, it is fast; this is the next question. We did not implement our codes the way Jonathan did, so this is far from being the fastest codec you, or your colleagues, can imagine, but still it's significantly faster than Raptor codes, for instance, and more or less the same as Reed-Solomon codes.
A
So, to conclude: yes, it makes sense to use sliding window codes. We experimented with our RLC codes, very simple codes, but you can imagine other types of sliding window codes with more or less the same results. We focused on multicast/broadcast communications, but you can have the same or similar benefits with unicast communications as well. So, that's all; I'm open to questions.
A
It could be better compared to Raptor, but still it will be more or less similar to that of Reed-Solomon: just behind Reed-Solomon, perhaps, or almost the same. But the idea of comparing with Reed-Solomon codes, an ideal code, is really to have the best solution possible using block codes. In that case, and I do not have time to go into it, the blocks are typically around 10 to 20, or fewer than 10, packets per block.
G
The other question is, again going back to the word that we're not supposed to use, convolutional codes: have you thought of other types of convolutional codes, the ones that you puncture, you know, punctured codes that you can actually use to get different rates from the same code, or things like that? Or is it, right now, just a sliding window?
H
Obviously, with FEC you have a fixed coding scheme with a fixed coding rate, meaning the amount of redundancy is fixed. But with RLC I'm not sure; different protocols have different ways: was it dynamic, adapting to the loss on the channel, or not? I'm trying to see how you can actually compare the different adaptation mechanisms, or lack of mechanisms, inside each of the protocols. Obviously, the Raptor codes are rateless, so they keep sending until you get it right.
A
We are not considering dynamic adjustment of those parameters, the code rate for instance, during the session, but it is not a problem to change it. If you have unicast communications and feedback, a feedback flow that tells you what the network conditions are, then you can do that very easily with RLC codes. With those kinds of FEC codes, nothing is fixed in advance: you need a new repair symbol, you generate it, you send it; you don't need it, you don't generate it. And that's all.
O
Can you hear me? My name is Ian Swett; I'm going to talk about QUIC FEC. I want to call out one thing first. Actually, how many of you are familiar with the QUIC effort at the IETF? Okay, okay, so enough that I won't go into it too much. But QUIC is a transport built on top of UDP. It's designed to be always encrypted, and because it's built on top of UDP and not TCP,
O
the opportunity for FEC is more promising than for most TCP-based transports, and I think that's probably sufficient for what's in the slides. We did a v1 of FEC; it was in the original design, and we tried experimenting with it in various ways, but the core foundation was: it has a single XOR recovery packet. So you would have a block of a certain size.
O
You would rotate blocks, and you would either spit out an XOR recovery packet at the end of the block, or not; those are basically the two ways you could do it. And you could end a block prematurely: for some reason, you know, you entered quiescence and didn't have anything to send, so you could just end a block and have an XOR that only covered, say, two packets instead of ten.
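That v1 scheme, one XOR packet per (possibly shortened) block, recovering at most one loss inside the block, can be reconstructed in a few lines (a sketch from the description above, not Google's actual code):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def fec_packet(block):
    """The single XOR recovery packet covering every packet in the block."""
    return reduce(xor_bytes, block)

def recover_missing(received, fec):
    """With exactly one packet of the block missing, XORing everything yields it."""
    return reduce(xor_bytes, received, fec)

block = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
f = fec_packet(block)
# Lose block[1]; it comes back from the FEC packet plus the survivors.
assert recover_missing([block[0], block[2]], f) == block[1]
```

Two losses in the same block are unrecoverable with a single XOR packet, which is the limitation behind the problems described next in the talk.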
O
So, anyway, this particular flaw turned out to be a sufficiently large problem that, for HTTP traffic, it was actually better to just send a random packet, like what we do in a tail loss probe for TCP, than to send an FEC packet, because so often the thing you sent was just useless. And the fact that it was integrated in the core transport really caused a lot of problems.
O
I think we probably spent close to two developer-years trying to make it work with different experiments. We ran experiments where we just ran it at a fixed rate; we ran it at a rate that was half of the congestion window, under the theory that you would never want to bother with FEC and then wait an entire extra RTT; and we ran one which basically just replaced an aggressive tail loss probe with a forward error correction packet. And in all cases it was just strictly negative, or it turned out to be
O
slightly positive in a few cases; but in the cases where it was positive, you could just resend any random packet that hadn't been acknowledged and that was equally good. So that was kind of a bummer, but now we know; we learned a lot. So this turned out to be, well, we found this out after we ran all these FEC experiments; we should have gathered this data first. So we gathered a bunch of data early on.
O
That data was what QUIC packet loss would have looked like on the internet if we weren't actually sending any data, and it turns out that's not super representative of what it looks like when you're actually running a transport over the Internet. When we weren't sending any data, it did kind of look like IID losses, approximately; it wasn't too far off. But when we're actually using the network, at least to some extent, whether it be search traffic or otherwise,
O
this is the CDF you end up with. You can see that about 30% of the time the packet loss distribution... oh, I should say this is within an RTT. So, if you're familiar with TCP, think of the point when you go into recovery: this is how many packets were lost from the first packet that was lost to an RTT later. So it's not a true burst size, but commonly these did actually arrive in bursts.
O
So, the current things that we've been thinking about: we learned from the first time that this tail loss probe thing should work; it should be possible to send a packet and have it be useful no matter what. If it's an FEC packet and there was only one outstanding packet, it would immediately recover, and if that wasn't sufficient, we would at least (a) learn something, because it would generate an acknowledgement that would tell us what to send, and (b)
O
we wouldn't have wasted that packet. So it should still be possible to use something like a sliding window code to only generate a packet at the tail, when you enter quiescence, for web applications; that should still be a valuable concept. We haven't done the work to do it, but it could potentially also be useful for other applications that are sensitive to tail losses.
O
Real-time communications, like RTP for WebRTC, seem like an obvious use case. We've started to look into doing WebRTC on top of QUIC. I think currently the most advanced scheme that is standardized, or semi-standardized, is a matrix of XOR forward error correction packets; it's the latest draft, called FlexFEC.
O
Apparently it works reasonably well, though I haven't really probed it myself, and it uses this matrix to deal with the fact that losses are sometimes correlated: as long as you don't lose two packets in the same row or column, you're okay, essentially. And one thing about WebRTC in particular, and all of this: we really need something fast enough that the decode, especially, can happen on a relatively constrained device, a phone; I mean, it doesn't have to be the world's oldest phone.
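The row/column idea in FlexFEC can be sketched with 2-D XOR parity (a simplified illustration of the principle, not the actual RTP payload format): packets are arranged in a grid with one XOR parity per row and one per column, so correlated losses survive as long as the loss pattern leaves each parity with at most one hole.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# A 2x2 grid of toy one-byte "media packets".
grid = [[b"\x0a", b"\x0b"],
        [b"\x0c", b"\x0d"]]

row_par = [reduce(xor, row) for row in grid]        # one parity per row
col_par = [reduce(xor, col) for col in zip(*grid)]  # one parity per column

# Correlated double loss: grid[0][1] and grid[1][0]. Different rows and
# different columns, so each row parity has a single hole and both recover.
assert xor(row_par[0], grid[0][0]) == b"\x0b"
assert xor(row_par[1], grid[1][1]) == b"\x0c"
```

If instead two packets in the same row are lost, the row parity is useless and the column parities take over; loss patterns that hole both dimensions at once are what eventually defeat the scheme, matching the "same row or column" caveat above.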
O
You can't actually terminate the transport without terminating crypto, which is great on a variety of levels, but it means something like a performance-enhancing proxy is literally impossible at the transport layer. There might, though, be a possibility of using FEC tunnels as a replacement for a performance-enhancing proxy, in cases when you know that some sub-portion of a network has available bandwidth that you can use. So it doesn't have to be end-to-end; it could be
O
a tunnel in the middle, in the provider's network: maybe from where they normally would install a performance-enhancing proxy to the wireless head-end, or something, I don't know. So any of these use cases sound potentially interesting, and we have some active development in the latter two. And the first, obviously, is HTTP, where QUIC is at 7% of the internet right now, so there's a lot of deployment there.
N
On the performance thing, I think you will find there's been a lot of work on loss coverage, and with these newer platforms that has not been a problem; you can get reasonable network coding performance there. I know some people that are looking at QUIC and also putting in, for example, a sliding window code. The starting point is that there are multiple implementations out there; which one would you go for?
O
I'd say it depends on what you're attempting to accomplish. I think if you're attempting to run an experiment involving FEC, to show its end-to-end viability and maybe its performance in a controlled environment, the Go implementation is probably actually the easiest to iterate on, because it's Go, assuming you're familiar with Go. It seems like they were able to write their implementation with a lot less developer time than we were with the Chromium one.
O
The Chromium one is almost identical to what we run internally. It's all C++ and quite well optimized; not 100%, it still has work to do, but it's fairly fast. So if you're looking for something more production-like, that will give you realistic performance numbers, I think the Chromium one is what I'd recommend, and you can either pull that directly from the tree, or there's something called proto-quic. There's also something called libquic, but that is, I
O
think, now about a year behind, so I wouldn't grab that; it probably will not be interoperable with any of our servers, or at least not for long. And we're trying to move to the IETF specification of QUIC as quickly as we can, so hopefully they'll be almost compatible, or completely compatible. But yeah, it depends on your use case, I think.
A
Thank you very much. So, I see many opportunities for continuing this activity on FEC and how to apply it to other protocols within the IETF, and yeah, it's very important for the IRTF and for the IETF in general; I think Allison will agree. So let's continue on this topic on the list, and let's continue just after the meeting; we can take advantage of being all here to discuss this. So thank you, everybody, for this meeting. I would like to give special thanks, thank you for your help. Okay, see you next time, bye.