From YouTube: IETF104-NWCRG-20190328-1350
Description
NWCRG meeting session at IETF104
2019/03/28 1350
https://datatracker.ietf.org/meeting/104/proceedings/
A
Everybody, so this is the Network Coding, sorry, the Coding for Efficient Network Communications Research Group. We are on time, but maybe we can wait one more minute. This is the first afternoon session, so maybe it makes sense to wait just a little bit. The agenda is not totally full; we have 20 minutes of spare time at the end, so if we need to, we can discuss just a little bit more. It's feasible this time.
C
So good afternoon, everyone. This is, again, as I was saying, the Coding for Efficient Network Communications Research Group. If you are here for this, great; if you're not, you're going to have a great meeting anyway, and you're going to learn about something fantastic. The goal that we repeat at every meeting is that we would like to foster research in network and application layer coding to improve network performance, and I would say that, if I look at the people in this room, a lot of them have been very successful at this.
C
We want, obviously, to focus on the codes themselves and the libraries to be able to implement them. We want to look at the protocols beyond the codes, the protocols to facilitate the use of coding in existing systems and existing applications. And we are welcoming, all the time, and you're going to see some of this presented today, real-world use cases and work in progress in the field.
C
So you will see, if you go to the wiki by any chance, that essentially you are sent to the GitHub, with all the latest documents and the latest comments, including the code that we've been developing over the weekend. Okay, so the agenda is kind of very simple: after I'm done talking, we're going to talk about what we did on Saturday and Sunday, and we're going to have a quick update on the satellite draft document.
C
That is very much ready to publish now, frankly. We're going to talk about another update of something that is also very close to being able to publish, which is the network coding for CCN and NDN. We're going to have a fairly extensive presentation from Muriel and Kerim, who are on the Meetecho, on basically RLNC, the random linear network code.
C
We want to talk about what's happening with the RLC, as we now call it, which is related to the SWiF codec, and the standardization related to it in the transport group, and, you know, raise some of the issues. Obviously, future meetings: the hackathon, we will have another one, we had so much fun, we want to repeat the experience. And, by the way, we would like more women to come to the hackathon.
C
I was at the Systers lunch just now, and we said that the problem with the hackathon was that you could see huge amounts of men, but not many women. So I encourage more women, although I'm speaking to a room here also with, well, not a strong number of women. And we are going, obviously, to meet in Montreal, and if anybody wants any hints about Montreal, it's my hometown, so send an email. And without further delay, we're going to go to the first presentation, which is Nico's, right?
A
So, as I said, this is the second hackathon, the second time we meet for this project. The goal is to design an open-source, free reference codec, with, as a first goal, being able to do an RLC-like codec, and, as a second goal, an RLNC-like codec. So for the first one, we will only focus on end-to-end communications, one encoder, one decoder, and as a second goal we will add this RLNC network coding, with potentially the capability to do re-encoding within the network.
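The end-to-end versus in-network re-encoding distinction mentioned above can be sketched over GF(2) with a toy example (function names are invented for illustration; this is not the SWiF codec API): a recoding node combines already-coded packets and composes their coefficient vectors, without ever decoding.

```python
# Toy sketch: recoding over GF(2). An end-to-end encoder combines source
# symbols once; a recoder XORs coded packets AND their coefficient vectors.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def recode(pkts):
    """Combine coded packets: XOR payloads and their coefficient vectors."""
    coeffs, payload = pkts[0]
    for c, p in pkts[1:]:
        coeffs = [x ^ y for x, y in zip(coeffs, c)]
        payload = xor_bytes(payload, p)
    return coeffs, payload

s1, s2 = b"\xaa\x00", b"\x0f\xf0"        # two source symbols
pkt_a = ([1, 0], s1)                      # coded packet carrying s1
pkt_b = ([1, 1], xor_bytes(s1, s2))       # coded packet carrying s1 ^ s2
coeffs, payload = recode([pkt_a, pkt_b])  # recoded packet: [0, 1], i.e. s2
```

The composed coefficient vector [0, 1] tells the decoder exactly which combination the recoded packet carries, so decodability is preserved.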
A
So that's the goal, and of course the final objective is to make this technology easier to test, to integrate, or to benchmark, to facilitate adoption. Another goal is to challenge our generic API internet draft; I will come back on this later. So, during the first one, at IETF 103, I was almost alone. This time there was a great team; as you can see, it's almost balanced in terms of gender, we had two women on the team, so it was great.
A
So we made a lot of progress. I mean, the most important thing is that we managed to almost finish the encoder; we are not very far from the end on the encoding side. So we are capable of submitting new symbols, computing random linear combinations of the symbols, and closing the session. There are still some tests ongoing, but we are not very far from the end. So the encoder is the simple part; next time we will have to do the decoder.
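As a rough sketch of what the encoder side described above does, assuming a binary code for simplicity (the function name and the GF(2) choice are illustrative, not the SWiF codec API): each repair symbol is a random linear combination, here an XOR, of the submitted source symbols.

```python
# Sketch of one encoding step: draw random GF(2) coefficients, XOR together
# the selected source symbols, and emit (coefficients, coded_symbol).
import random

def encode(source_symbols, rng):
    """Return (coefficients, coded_symbol) for one repair symbol."""
    size = len(source_symbols[0])
    coeffs = [rng.randrange(2) for _ in source_symbols]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1  # avoid the all-zero combo
    coded = bytearray(size)
    for c, sym in zip(coeffs, source_symbols):
        if c:
            for i in range(size):
                coded[i] ^= sym[i]
    return coeffs, bytes(coded)

src = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]  # three 2-byte source symbols
coeffs, coded = encode(src, random.Random(7))
```

The coefficients travel with the coded symbol so the decoder knows which combination it received.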
A
In fact, we started the decoder program, but it's not yet very advanced, so we have this as a goal for the next IETF. We also wrote a client-server demo application; that's also another way to test the codec itself, with threads, and we have almost finished this part of the work. And we started a Python wrapper on top of this, on top of the API, to add the ability to integrate this C codec into Python applications. I won't come back on the design principles of this codec.
A
We already discussed this in the past. The idea is that, okay, this is just the core part of the FEC codec; many parts of it are left to the application, so the signaling, all this stuff, is managed within the application. So I won't go into these technical details. Something great with this hackathon project is that it enables us to also make progress on the generic API internet draft.
A
So, as we do the real technical work, we identify a few problems; we solve some of them. So we made a few fixes in this internet draft, four or five, and we also identified open points, that is to say, technical aspects that we need to figure out how to solve in the coming weeks. So all of this is documented in this GitHub document.
A
If we can achieve this, maybe more could be possible also; there is a lot of clean-up, all sorts of things to do on the side aspects of this codec, but that's the goal for the next IETF. And then, for IETF 106, we'll start working on the second capability that we want to add into this project, which is RLNC, to have an RLNC-like solution.
A
So at the end we will have both support for RLC, purely end-to-end coding, and RLNC, with the capability to have this re-encoding within the network. So that's all for this hackathon project. I just want to say something else: I sent, yesterday, an email asking for research group adoption of this document, I mean the generic API document. This is something that we already discussed at previous IETFs; there were two positive opinions on doing that, and no objection.
A
The fact that we have this document as a research group document does not mean that we necessarily need to publish it as an Informational RFC; that's a side question, we'll see that later on. It's not very usual, for the IETF, to have an API document, so we will see if it makes sense later on. But anyway, it's important, I think, to have both works in parallel, since there are major connections between the two. So that's it; if you have comments or questions, now is the time.
F
Hello, everyone. This is a quick update on what we have been doing in this draft since the last IETF. The next, and only, slide: basically, we had lots of comments at the last IETF; we presented this document in front of another working group as well, to see if anyone wanted to review it, so we had lots of comments and lots of email exchanges.
F
There were lots of good points, and I'll try to sum up here what we have done. Basically, at the beginning, we wanted to have lots of people on board, trying to speak about what was the actual deployment of network coding at higher layers in SATCOM systems. We failed to do that, so we have focused more on identifying opportunities; so we somehow had some words in the introduction that were not really accurate.
F
Also, we had normative references, which are not relevant for a research document; we had an acronym list that was not up to date; and we were not clear on whether physical-layer coding was involved or not, so basically we removed any mention of physical-layer coding. We haven't had much feedback since, and we believe that the draft, if the group still wants it to be published as an RFC, is ready for that.
A
So, if nobody objects, maybe we can start a research group last call on this. We'll confirm that on the list anyway; there is a three-week period for our research group last call, and hopefully people will come, and I will anyway. Does anybody here already know that they will read and comment on this document during this last call? How many people? Yes, great, okay, so at least one, or two with me. That's great, that's the minimum; I hope there will be more people, but anyway.
H
Hello. Let me talk about a quick update on network coding for CCN and NDN: requirements and challenges. So first, let me give a summary of this draft. The main objective is to introduce research results related to this topic.
H
By doing so, we hopefully provide useful insights for anyone who tries to implement network coding in CCN and NDN. So this draft now focuses on the requirements; specific solutions and mechanisms are out of scope of this draft. Actual protocol proposals will be done in another draft.
H
So, regarding the main updates, we addressed missing content. We added a backward compatibility section, because there was no description of these requirements; so we describe network coding compatibility with normal network operation, with consideration for seamless migration. And we also added content regarding security, privacy and routing in the challenges section: network coding operations and in-network caching relate to content poisoning and cache pollution attacks, and network coding may suffer encoding vector modification by attackers.
J
My suggestion is to just go ahead and do that now, and then we'll send that draft around. I'll make sure that it gets some review in the ICNRG, and see if there are any further things that the ICNRG thinks, and the NWCRG thinks, with the goal of turning it into a last call well before the next IETF, right? See if we can turn this around in a month or two. Makes sense? Thank you. Thank you.
K
This document contained a large background part and the symbol representation; it was focused on symbol representation, and still is. So we submitted two updates to that document: we separated out the background information, which is still pretty much about symbol representation, so it's not a complete one, into an RLNC background document, and then the second one, the -01 draft, is dedicated to the representation. This is collaborative work, and we are speaking in the name of all the authors.
K
Moving to the agenda slide: as I mentioned, we split the document into the background information and the symbol representation itself in the -01 updates. I'll just discuss the new definitions, a couple of important exceptions, and then I'll cover what we need to look towards doing for the next version: the modifications, mostly from comments from the list, and of course, if you guys have comments on this, we will add them and respond to them. So, going to slide 3.
K
So the first document, informational, is, as I said, a general background; it's supposed to be on RLNC, that's the goal. It's a document in development; for now it's focusing on really the background required to read the symbol representation. An important section, again, is really viewing symbol representation as an important standardization target; we actually argue for that in this document. And then, in the second document, which is the symbol representation specification itself, we added definitions and improved the figures; most of these comments come from the list.
K
So, on the definition of symbol representation: really, we spelled out the definition. You see there's a slight difference, but it's basically the same: we are talking about the units carrying data. That means this is what you're going to have on the wire; it's not, of course, what's going to be used in the encoder and the decoder, that's different. So if you guys have any comment on this definition,
K
this is a good time to say it. So, a disclaimer here: most of the points I'm going to make after this are really touching upon some of the comments on the background section, and I'm also going to cover what we added in the background section, in the part about symbol representation. So you see here the figures; this is really the fundamental concept that you have in the experimental draft, which is the second document, the -01 version.
K
I'm on slide 5 now. So the major argument for standardizing symbol representation, according to us, is that there is a lot of flexibility in RLNC, and so we really need a standard, I mean, that's for sure; but once we have it, it's going to be highly reusable. And so the point is, of course, that RLNC is very dynamic: the structure is dynamic, the symbol set is very reconfigurable. So examples of reconfigurability are given.
K
You can do a seed, which is something completely different, and you can also do an indexed representation where, if you have just a few coefficients, you can put them really juxtaposed one next to the other, each one of them having an index indicating which symbol they refer to, basically. So there's a lot of flexibility. The reason for this flexibility is that symbols, and, you know, RLNC is tailored for this, can be operated on.
K
You can use linear operations on them without necessarily knowing the location of the coefficients; once you put them there, the same operations can be tailored for the positions themselves, which is a really unique aspect that RLNC adds. That was point one. Point two is, of course, that the number of coefficients is quite dynamic, and also the number of symbols, so you can have dense codes, you can have sparse codes, and actually the sparsity can even be dynamic itself.
K
The third point of flexibility is the symbol size: clearly, you can have larger symbols or smaller symbols. I will define some of these concepts later, because we've had questions about this. So if you're expecting, in your network, to have anything like fragmentation, padding or even encapsulation, which is standard procedure everywhere, then, you know, this flexibility is very useful. And then, finally, the field: even the field, which you can think of as something fixed in most applications, can be flexible.
K
You can have flexible fields, and one of the interesting things about RLNC is that the same operations can hold for different fields; that's another discussion. So this is what I have on this slide. On slide six, I was going to just tackle one example: so, as we said, network operations will be affected by symbol representation.
K
So an example is fragmentation, and it's very simple: either you know where your coefficients are, or you don't. If you know them, then you can separate the coefficients and do the fragmentation; that's what you see in the figure. After fragmentation, you can actually run recoding and modify the code on, you know, the different fragments, and then you can do the decoding. And then there is the case where the coefficients are unknown.
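The visible-coefficients case above can be sketched in a few lines (the packet layout, with the coefficient vector at a fixed, known offset, is invented purely for illustration, not taken from the draft): a node that knows where the coefficients sit can peel them off, fragment the coded payload, and attach the vector to each fragment.

```python
# Sketch: fragmenting a coded packet when coefficient positions are known.
COEFF_LEN = 4  # four one-byte coefficients at the front (hypothetical layout)

def fragment(coded_packet: bytes, mtu: int):
    """Split the coded payload; copy the coefficient vector to each fragment."""
    coeffs, payload = coded_packet[:COEFF_LEN], coded_packet[COEFF_LEN:]
    chunk = mtu - COEFF_LEN
    return [coeffs + payload[i:i + chunk]
            for i in range(0, len(payload), chunk)]

pkt = bytes([1, 0, 3, 2]) + bytes(range(10))   # coefficients + coded payload
frags = fragment(pkt, mtu=8)                   # three fragments, each <= 8 bytes
```

If the coefficients were hidden, this separation would be impossible, which is exactly the point made above.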
K
I've also received comments on that on the list, and replied to some of them. So suppose you have a file which was precoded at a different layer, so you know it's coded, but you don't know where the coefficients are. Then, if you need to do coding on the different fragments, you basically have to add new coefficients, and that's what we mean by adding a new layer: you don't use the same coefficients, you're adding another layer of RLNC that you need to deal with, of course, at the decoder.
K
Okay. So if there are questions here, we're happy to respond. So I'm moving to slide seven. Here, slide seven is still, you know, arguing for the standardization of symbol representation, and the point here is, I mean, there was a figure similar to this in Yanis's presentation in London, and Yanis mentioned, and it's true, that, of course, symbol representation can be reused by multiple protocols. Symbol representation also affects the architecture, the linear network coding architecture; it affects, among other things, the topology that you can use.
K
Alright, we've received some questions on this too. So one is actually architecture: of course, related to architecture is where you put the coding; you can be using the coding for routing, or you can just do the encapsulation, as mentioned earlier. These aspects are affected by where the coefficients are, for example, among other aspects. On the topology side, we are talking about the logical coding topology, and a good example is whether you need to do recoding or not: if you need to do recoding, then the coefficients need to be apparent.
K
Now I'm moving to the next section, which is an overview of suggested changes; I'm on slide 8. So on this we have received comments from Dave and Salvatore, many thanks, and also Dave Oran, thank you very much, multiple comments. On the first one, the background draft, we are looking at substantial changes: there'll be a bunch of definitions that we'll add, there's a lot of networking terminology that needs clarification or correction, and they've mentioned trade-offs related to coding parameters.
K
We talked about that a little bit, and of course a security section; we'll discuss what we put right in the security section of the RLNC draft. For the second one, the symbol representation, it's really just maybe some clarification of definitions that is required there. So again, if you have more comments, send them to the lists. So I'm going to slide 9, just an overview of the kind of definitions we're going to be looking into. So, first of all, I mean, before the definitions, networking terminology.
K
So, yes, there is a little bit of loose use of the word connection, as I mentioned also in an email just earlier, link versus channel, and losses versus errors. Most of this stuff comes really from the different backgrounds of people: people working in communication theory, maybe information theory, versus networking, different terminology. So I'm going to say, from the get-go, that yes, we're going to look at the taxonomy document, RFC 8406, and maybe connect our document with that.
K
A little bit more on that: some of the definitions, some of the terms that I think we're going to be using. We are going to say field elements rather than symbols, since symbols is used in communication theory to mean, you know, something related to modulation. So it really means field elements here; we're going to use field elements. What we mean by symbol here is more the network coding implementation term, which is really the coded data unit. Then there is the element of raw data; yeah, I mean, Yanis mentioned it, yeah.
K
This is a good term, I think, raw data. It means anything like uncoded, or systematic, or input symbols, application data; all of that has been used, and you can check the taxonomy, but we'll probably be using raw data. So again, as I mentioned, the representation is what goes on the wire; it is different from the input, and this is not necessarily the data structures inside the nodes. It's really what's on the wire; we agree on that.
K
There are two other aspects we'll be mentioning on the next slide: the coding vector, which comes back to a point from Salvatore, and also the concept of hidden coefficients, which is related to security. So, slide 10, on the coding vector; I'm just going into a little bit of detail. So yes, I mean, I already explained all of this: the raw vector is the coding vector. The coding vector is really what is used in the encoder and the decoder.
K
So what we mean by coding vector is really what you have there on the left: it's the full vector, including as many zeros as needed. If you have sparse coding, it's definitely not going to be what you'll be sending on the wire; it doesn't make sense if it's a sparse code. There are different representations that you can be using, and that's actually an important aspect of the second draft that you can look at. So I think I mentioned all of that, yes.
K
The two main aspects, really, for the representations: you need to have the value of the coefficient, but you also need to have the mapping between each coefficient and the symbol it relates to. So any way you devise of doing that is fine, and yes, Muriel is pointing to just the fact that, I mean, there are examples of representations: either you send the raw vector, which is the whole thing, if you're using dense coding, or, if that fits, you can just go with coefficient value and symbol index pairs.
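The representations just listed can be illustrated side by side (a rough sketch only; the field size, generation size, and PRNG are illustrative, not what the draft specifies): the same coding vector carried as a raw vector, as (index, value) pairs, or regenerated from a seed.

```python
# Sketch: three ways to represent one coding vector over GF(256).
import random

K = 8  # generation size (illustrative)

def dense_from_sparse(pairs, k):
    """Rebuild the full (raw) vector from (index, value) pairs."""
    v = [0] * k
    for idx, val in pairs:
        v[idx] = val
    return v

def dense_from_seed(seed, k):
    """Rebuild the full vector from a PRNG seed shared by both ends."""
    rng = random.Random(seed)
    return [rng.randrange(256) for _ in range(k)]

# 1) raw vector: send all K coefficients, zeros included (dense coding)
raw = [0, 0, 17, 0, 0, 200, 0, 3]

# 2) indexed representation: send only the non-zero (index, value) pairs
sparse = [(2, 17), (5, 200), (7, 3)]

# 3) seed: send one integer; the receiver regenerates the coefficients,
#    provided both ends agreed on the same PRNG
vec = dense_from_seed(42, K)
```

Each form trades wire overhead against constraints (the seed form, for instance, requires an agreed PRNG, which comes up again in the questions below).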
K
That's in the document, and yes, you can send the seeds, which is also specified in the symbol representation draft. So that was on the coding side; now on the code trade-offs. That's an important point, we have to say: we tried to avoid this a little bit, because we didn't want to go into the code itself, but the trade-offs are related to many of the parameters that are explained. So we agree, I think we're going to cover at least the fundamental trade-offs. So, basically, the field size.
K
You know, having a larger field size means you have higher diversity: it's less likely to have dependence, and it's higher complexity; that's a basic trade-off. Also, if you have a smaller field size, you'll probably need to send more redundancy, because of the higher possibility of linear dependence. So, you know, we're working on that; I think that's an important trade-off, which is fundamental. The symbol size is in between, and the generation size is pretty clear: the larger the generation, the higher the latency.
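The field-size trade-off stated above can be checked with a small simulation (a sketch, not from the drafts; the generation size and trial count are arbitrary): over GF(2), a set of K random coded packets quite often fails to have full rank, which is why a small field needs more redundancy than a large one.

```python
# Sketch: estimate how often K random GF(2) packets are linearly dependent.
import random

K_SYM = 4      # generation size (illustrative)
TRIALS = 2000

def gf2_rank(rows):
    """Rank over GF(2); each row is a K_SYM-bit integer."""
    rank = 0
    rows = rows[:]
    for col in range(K_SYM):
        pivot = next((i for i in range(rank, len(rows))
                      if (rows[i] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

random.seed(1)
full = sum(
    gf2_rank([random.randrange(1 << K_SYM) for _ in range(K_SYM)]) == K_SYM
    for _ in range(TRIALS)
)
p_full = full / TRIALS
# Theory: prod_{i=1..4}(1 - 2**-i) ~= 0.31 over GF(2), but ~0.996 over
# GF(256): with a small field, extra coded packets are needed far more often.
```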
K
So that's important; there is a latency-throughput trade-off there that we can explain, and there's an extensive literature about this too. And, of course, the granularity of your redundancy: the larger the generations are, the larger the window is, and the more specific you can be with your feedback.
K
Sorry, I mean with your redundancy. So, one packet, basically: if you lose, for example, just one packet, you can have better granularity, finer granularity, towards recovering it, and we will explain that in the document. So, I'm looking at the time and, frankly, I need to move forward. Other trade-offs we might touch on, but we don't think are really necessary at this point: of course, whether you do a block code or a sliding window is really a design issue for the application; we might brush on that.
L
This would be an interesting topic; there's a huge amount of work that has been done by our group and some others on security, and really what we're concentrating on here is, again, just how this connects to the representation issue. So this is not a comprehensive view of security, but the initial assumption that we have here is that, for the security aspects, we're operating within the coding layer. So really just focusing on the coding operation.
L
Aspects of coding, specifically. So network coding operates, of course, by allowing the mixing of data, and the natural question is: what are the security consequences? One, are there extra vulnerabilities that open up? But two, what are the extra possibilities that this provides? Also, you know, one of the things that this allows, of course, is data hiding: the encoded data, in the absence of the decoder, is actually very well protected. It is related to, you know, a sort of secret-sharing style of security.
L
That security is actually strongly behind things such as, you know, basically trusted storage over untrusted networks. One thing that people, of course, sometimes worry about, because of the fact that you have these mixtures, is the issue of pollution attacks, sometimes called Byzantine attacks because they are insider attacks: inside the network, in a peer system, or, when you're recoding, if one of the intermediate routers were compromised. So then the possibilities are in detection and correction, some of them automated by the representation, some of them not.
L
There is work on that. The other aspect, of course, is the aspect of verification. I just want to say, for the Byzantine attacks: of course, you do detection of Byzantine attacks either end to end, or at intermediate nodes, even at nodes that don't have enough degrees of freedom to decode. You can also do correction, by combining it with an error-correcting code.
L
Also, verification is something that touches upon homomorphic encryption. I just want to point out that you don't necessarily need the more sophisticated elliptic-curve verification with encryption; but the network coding aspect does marry well with homomorphic schemes. Basically, what you need is a homomorphic property that survives the linear operations, and actually very simple discrete-log systems are perfectly compatible with that. So again, you know.
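The compatibility of simple discrete-log schemes with linear combining, mentioned above, can be shown with a toy example (the numbers are tiny and insecure, and this is not any specific proposed scheme, only the homomorphic property itself): a hash h(x) = g^x mod p lets a node verify a linear combination of packets against the combination of their published hashes, without decoding.

```python
# Toy sketch: an additively homomorphic discrete-log hash.
P = 1_000_003  # toy prime, far too small for real security
G = 2

def h(x: int) -> int:
    return pow(G, x, P)

x1, x2 = 123, 456   # two packets, modeled as integers for the toy example
c1, c2 = 3, 7       # coding coefficients used by an intermediate node
combined = c1 * x1 + c2 * x2

# A verifier that only knows h(x1) and h(x2) can still check the combination:
lhs = h(combined)
rhs = (pow(h(x1), c1, P) * pow(h(x2), c2, P)) % P
```

A polluted packet would fail this check at any node holding the hashes, which is the detection possibility discussed above.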
K
That needs to be looked at. RFC 8406 has a fixed taxonomy; maybe a couple more references about related symbol representations. There's also a question on Kodo, right, we saw that from Salvatore; so all of these we will respond to, of course, definitely. And, of course, last slide: if you have, you know, any comments, questions or suggestions, do not hesitate to send them on the list, or just come to the microphone and tell us. That's it.
A
On slide 5, you mentioned that there is this dynamic number of coefficients and symbols. Yes, but if I'm looking at the header, I see you have a 4-bit field, meaning that you cannot exceed 15 or 16, depending on how you count. Okay, so is it less flexible? It really depends on what you want to achieve, with which field size, things like that. That much, yeah.
A
Yep, which leads to another comment: it's great to have this full specification of the ideas, but if we want to go into that direction, then we should pay attention not to close doors by being too specific on this aspect. So it's a difficult exercise to keep the flexibility of RLNC that you mentioned, I fully agree, and, at the same time, to specify something which is very detailed in terms of wire format; a difficult exercise.
K
Agreed, but again, I mean, this is one symbol representation. What I said in Montreal is that we might end up with a couple, three or four, different symbol representations, each geared to a class of applications. So, I mean, it doesn't have to be all full flexibility, and it doesn't have to be too rigid either. So that's, yeah, definitely a discussion for the list; we agree.
K
In these documents? Not necessarily; I mean, I think representations can be added in different documents. I think it's not a bad idea to have maybe a certain number of documents, each addressing maybe a class of applications. This is the first effort; I mean, it can go either way, frankly, so we can discuss it. But as far as I see it now, clarifying this one, putting it down, I think, is useful, and then we can take it from there. Yeah.
A
I agree. Since you're mentioning the possibility to send a seed: it means you have a PRNG associated with this. Do you also intend to provide a list of suggestions in terms of PRNGs? I'm asking the question because the last item in the agenda today is about our experience in terms of FEC for QUIC, and I will have to come back on this. But what do you think? Okay.
K
So yeah, that's a good question; we had it also on the list. Our initial thought was to have that as part of the protocol specification and negotiation; I think that's the natural place for it. We are really specific here to the symbol representation, so yeah, I think that can remain outside. I mean, we're open to discussion here; if you think it really has to be part of it... I don't see now why it should be part of it, that's all. Yeah.
M
So I will present here our work on implementing FEC as a loss-correction extension to the QUIC protocol. So first, a quick reminder about the QUIC protocol: this is a transport protocol providing a reliable and secure transport of data. It also provides stream multiplexing, in order to avoid problems such as head-of-line blocking. FEC was part of the early QUIC versions, but it has been dropped, due to bad experimental results from the early designs.
M
So what does a QUIC packet look like? It's just composed of a header and a payload; the payload is itself a sequence of frames. So let's take an example: here we have three frames in the QUIC payload. The first one is an ACK frame, which acknowledges the received packets, and the two others are what we call STREAM frames: they carry user data for two different streams, here the stream X and the stream Y.
M
We will discuss the differences here. So we have two implementations: one using the quic-go implementation, which is based on the Google QUIC design, and a second implementation working on picoquic, which is based on the IETF QUIC design, draft 14. I will show some frame formats, but we do not intend to propose them as a standard; we just want to discuss the ideas, not the details of these frame formats. So let's first take a look at what the current draft is proposing.
M
So we want to define what the source symbols are, which are the data units that we want to protect, and the draft proposes to chunk a stream into blocks of fixed size. The size is called E, and so we will chop the stream into several fixed-size chunks, and we will consider these equal-size chunks as our source symbols. So here, on the figure, we can see that we have ten source symbols, and the last one is incomplete, because the stream finished before the end of the chunk.
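The chunking rule just described can be sketched in a few lines (the value of E here is arbitrary, for illustration only):

```python
# Sketch: cut a stream into fixed-size chunks of E bytes; the chunks are
# the source symbols, and the last one may be incomplete if the stream
# ends mid-chunk.
E = 4  # symbol size in bytes (illustrative)

def source_symbols(stream: bytes, e: int = E):
    return [stream[i:i + e] for i in range(0, len(stream), e)]

stream = bytes(range(10))
syms = source_symbols(stream)   # three symbols: 4 + 4 + 2 bytes
```

Because both endpoints know E, no per-symbol signaling is needed; but the trailing 2-byte symbol illustrates the incomplete-symbol problem mentioned above.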
M
So there are some advantages to proceeding like this. First, we don't need any signaling to identify those source symbols, because the server and the client only have to agree on the size of the chunks, and then they don't need to advertise which part of a stream is a source symbol, because it's implicit. Another advantage is that there is no control overhead, because we only protect the user data; we don't protect any frame header or things like that, we focus on the user data.
M
So here you can see, on the image, that the packet header is not really part of the symbol; only the sequence of frames is the source symbol. So this approach also has some advantages. First, we don't need to agree on a symbol size, because if two packet payloads have different sizes, we can consider that one of those two source symbols will be padded to match the size of the other, and QUIC naturally handles padding.
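The padding argument above can be sketched with a toy XOR repair symbol (real repair symbols depend on the FEC scheme in use; this only illustrates the size-matching point): the shorter payload is zero-padded before combining, and the recovered bytes include trailing zeros that QUIC parses as PADDING frames.

```python
# Sketch: combine two packet payloads of different sizes by zero-padding
# the shorter one. In QUIC, a zero byte in the payload is a PADDING frame,
# so the extra bytes recovered at the receiver are harmless.
def xor_pad(a: bytes, b: bytes) -> bytes:
    n = max(len(a), len(b))
    a = a.ljust(n, b"\x00")
    b = b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"\x01\x02\x03"      # 3-byte payload
p2 = b"\x10\x20"          # 2-byte payload
repair = xor_pad(p1, p2)  # toy repair symbol over both payloads
```

Recovering a lost payload from the repair symbol and the surviving one yields the original bytes plus padding, with no symbol-size negotiation.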
M
This additional padding will be understood as PADDING frames, and not having to agree on a symbol size also helps us to solve the silent period problem, which was a problem pointed out by the authors of the draft. If you look at the figure, you can see that the last symbol is not complete, so we are unable to protect it. So, if this last symbol is lost, we cannot recover it, because we cannot add padding in STREAM frames: it would amount to adding data to the user data.
M
Another advantage is that we can protect more than just the stream data. So, for example, we can protect the flow control frames, and we can also protect some other frames, like the DATAGRAM frames that were discussed yesterday, that will provide a message mode for QUIC users. We can also protect any of the frames that are currently not part of the design because, as we protect a QUIC payload, we are agnostic of what it contains.
M
So this packet-based approach still has some inconveniences. First, we need explicit signaling to identify the source symbols, because then it is not implicit anymore. So, for that, we propose to add a new frame that will identify that this QUIC payload is considered as a source symbol. We call it the Source FEC Payload ID frame, and it just contains one field, which is the FEC Payload ID field, and this field will be populated by the FEC scheme. It will identify the source symbol in the coding window.
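The signaling frame described above can be sketched as a trivial encoder and decoder. The frame type value and the fixed 32-bit field width here are assumptions for illustration; the real draft defines its own encoding:

```python
# Minimal sketch of the explicit signaling described above: a hypothetical
# "Source FEC Payload ID" frame carrying a single ID field that places the
# protected payload in the coding window. Frame type value and field width
# are made up for illustration; the draft defines its own wire format.
import struct

SRC_FPI_FRAME_TYPE = 0x1D  # hypothetical frame type byte

def encode_src_fpi(payload_id: int) -> bytes:
    """Serialize the frame: 1-byte type, then a 32-bit FEC Payload ID."""
    return struct.pack("!BI", SRC_FPI_FRAME_TYPE, payload_id)

def decode_src_fpi(buf: bytes) -> int:
    """Parse the frame back and return the FEC Payload ID."""
    ftype, payload_id = struct.unpack("!BI", buf[:5])
    assert ftype == SRC_FPI_FRAME_TYPE
    return payload_id

wire = encode_src_fpi(42)
```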
M
Another disadvantage of this packet-based approach is that we have an increased overhead because now, as we are protecting a sequence of frames, we will also protect the header, and so we are not only protecting the user data anymore. And, as we have an increased overhead, we will more likely be in the case where a source symbol has a full packet size, and so the repair symbol might not completely fit into one packet. So in our implementation,
M
we restrict a little bit the size of a source symbol, to be able to have a repair symbol that fits into one packet. So here we talked about the source symbols; now let's talk about how we send the repair symbols.
So here we have an approach that is similar to the approach described in the Coding for QUIC specification: we add a new frame which will basically contain a repair symbol. I won't dig into the details of this frame.
M
Then we had to think about what to do when we recover a packet because, in a transport protocol, if we recover a packet, we can do three different things. First, we can ACK this recovered packet, so it will make the sender think that this packet has been successfully received, and then it will not retransmit the lost data, which is fine, but it won't adapt its congestion window. So if this loss was due to congestion, the congestion window won't be adapted.
M
M
The second option is to not acknowledge the recovered packet. However, the problem is that we then retransmit the lost data, even if it has been successfully recovered. So we propose a third way to react, which consists in explicitly advertising to the sender that we recovered packets. So we send a new kind of frame, which is called the RECOVERED frame, and it announces the ranges of packets that have been successfully recovered.
M
In that way, the sender can know that a packet has been lost and then recovered, so it can adjust its congestion window, but it can avoid retransmitting the recovered data. The format of this frame is currently very similar to the format of an ACK frame; just, instead of announcing which packets have been acknowledged, it announces which packets have been recovered.
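The intended sender-side reaction to a RECOVERED frame can be sketched as follows. The congestion controller here is a crude halving stand-in and all names are illustrative; the point is only the asymmetry between the two handlers:

```python
# Sketch of the sender-side reactions discussed above. On a RECOVERED
# frame, the loss is still treated as a congestion signal (here crudely
# modeled by halving cwnd) but the retransmission is skipped, since the
# data already arrived via FEC. Controller and numbers are illustrative.

class Sender:
    def __init__(self):
        self.cwnd = 10            # congestion window, in packets
        self.retransmit_queue = []

    def on_loss_detected(self, pkt):
        """Classic reaction: shrink cwnd and schedule a retransmission."""
        self.cwnd = max(1, self.cwnd // 2)
        self.retransmit_queue.append(pkt)

    def on_recovered_frame(self, pkt_ranges):
        """RECOVERED frame: shrink cwnd (the loss may still indicate
        congestion) but do not retransmit the recovered data."""
        for _ in pkt_ranges:
            self.cwnd = max(1, self.cwnd // 2)

s = Sender()
s.on_recovered_frame([(7, 7)])   # packet 7 was lost but recovered by FEC
```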
So now, let's take a look at some experimental results that we have performed with our implementation. So we are doing a simple request... You have a question? Excuse me.
M
F
M
So yeah, I agree. So maybe we can discuss adding a new frame or not. Yes, so thank you. So let's look at some results. First, the experiments have been done with emulation using Mininet, and we are focusing on an in-flight communication use case. So it's a case where there is a high delay and a high loss rate, and we used a seeded loss generator in order to compare the different solutions more fairly.
M
So here we have a graph showing the download completion time ratio between QUIC using FEC and regular QUIC. So basically, if our results are smaller than 1, that means that the transfer using FEC was shorter and, if we are greater than 1, that means that the transfer using regular QUIC was shorter than the one using FEC.
M
And so, if we look at the small request/responses, we can see that for small sizes FEC has a good advantage because, if we lose a packet, the receiver will have to wait additional time after the end of the stream to receive the retransmission of the lost data; using FEC, we can recover this lost data before the wait for a retransmission.
M
If we take a look at the larger request/responses, like this one-megabyte file download, we can see that adding redundancy will need some more time to transmit this additional overhead. So it will have a negative impact on the download completion time, and so recovering from the tail losses only had a small gain compared to the additional overhead that we had to send.
E
M
So for large request/responses, there is an interest in disabling FEC. So, with these results in mind, we decided to perform our experiments with our QUIC implementation, and we decided to only protect the end of the stream, not the whole stream. And so the results are quite similar. The difference is that, for the small request/responses, the results are quite the same; for the one-megabyte download, we can see that the negative impact of FEC has been highly reduced, because we don't send redundancy during the download.
M
So we don't have this additional overhead and we can still recover from tail losses, but we still have a small overhead, because we need some signaling: we need to send these different frames, the source symbol frames and the repair frames. So it still has a small overhead, but it has been largely reduced by only protecting the end of the stream.
M
Then we wanted to check the interest of sending this RECOVERED frame from a congestion control point of view, so we performed experiments where two protocols are competing. So we have one QUIC flow which is in the background, so we call it the background traffic, and we have three candidates for the background traffic: a regular QUIC download, a QUIC FEC download sending RECOVERED frames, and a QUIC FEC download acknowledging the recovered packets. And we study the download completion time of a foreground regular QUIC traffic, so we'll see how it competes with QUIC FEC sending
B
M
or not the RECOVERED frames. We don't apply medium losses in this use case, so the only losses that will apply will be congestion-induced. So here, on this graph, we show the download completion time of the foreground traffic for these three cases. So we can see that, in the case where the background traffic is QUIC FEC sending RECOVERED frames (it's the "with RF" boxplot),
M
the download completion time of the foreground traffic is very similar to the download completion time when it competes with a regular QUIC download. When it competes with QUIC FEC only acknowledging the recovered packets, we can see that the foreground traffic takes a lot more time to complete, because the congestion-implied losses will be hidden by the acknowledgment of these recovered packets. But we may need more experimental results to confirm the interest of the RECOVERED frame, because it's only one case study, so we might need to perform more experiments to confirm this tendency.
M
So, to conclude, we can see that FEC also has an interest in the small QUIC request/response use case, like some API calls, etc., especially in high-delay and high-loss configurations. We also see advantages in both approaches, the chunk-based (stream-based) approach and the packet-based approach, but the experiments that we did only consider the packet-based approach, so it might be interesting if somebody could compare these two approaches.
M
However, the packet-based approach is the only one that allows protecting other frames than stream frames, like the new DATAGRAM frame, and we also saw that recovering packets must be done carefully in a transport protocol because, if we send the wrong signal, it could lead to unfair behavior.
M
I
Morten Pedersen. Nice presentation; I have a question. I'm not an expert on QUIC at all but, basically, what I saw was that, when you're sending large files, it's better to protect only the end, so as to sort of avoid this problem with the losses at the tail. Does a QUIC implementation know the size of the file it's sending, or is that something that you implemented in order to get that information through to the implementation?
M
So when we send stream data, basically a stream frame contains one bit, which is called the FIN bit, which says that this frame contains the end of the stream. So basically, when we see that we sent the end of the stream, we decide to send the redundancy, and we also did it for when we are what we call application-limited. So if no stream has data to send at all, so all streams are basically blocked by the application, then we also flush the redundancy.
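The flushing heuristic just described (end of stream, or application-limited) reduces to a small predicate; the function and parameter names are illustrative:

```python
# Sketch of the heuristic described above for deciding when to flush repair
# symbols: at end of stream (FIN bit sent) or when application-limited
# (no stream has data to send). Names are illustrative, not from the draft.

def should_flush_repair(sent_fin: bool, streams_with_data: int) -> bool:
    """Send the pending redundancy when the stream ends or when the
    sender is application-limited (all streams blocked by the app)."""
    app_limited = streams_with_data == 0
    return sent_fin or app_limited

# End of stream: flush. App-limited: flush. Mid-transfer with data: don't.
flush_cases = [
    should_flush_repair(sent_fin=True, streams_with_data=3),
    should_flush_repair(sent_fin=False, streams_with_data=0),
    should_flush_repair(sent_fin=False, streams_with_data=2),
]
```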
I
And how would that work if you have something like, again, if you're pipelining data, like in HTTP? Typically, you put more than just a single file on the same connection. So would that mean that it's only at the end, basically when you close the socket, that you would protect that last piece of data?
M
There are two ways to proceed when we want to send multiple files with HTTP. Either you use several streams, one stream per file; in that way, you know when you send the end of a stream. Otherwise, you can still do what we did in our implementation, if we don't have any more data to send.
M
I
I
M
I
B
I
M
I would say that it depends. If your application has strong needs, and they depend strongly on the shape of your traffic, then it's better to let the application handle the forward erasure correction itself, but providing the FEC at the transport level allows simple applications to protect their traffic without too much implementation effort. So basically, I think there are use cases for both approaches, but it's a very good remark because yeah, that's also what we think. Okay, thank you. Thank you.
N
M
N
N
M
F
G
F
A
A
Thank you. Other questions? No? I have questions, yeah, but no remaining questions in the room? All right. So thank you very much for this very interesting and complementary approach for addressing this problem of adding FEC to QUIC.
As you said, there are different design goals and different properties that result from those design goals and, at this level, it's very interesting because we don't really know what the best approach is, or whether there is a single best approach; it's still totally open.
A
So we already discussed this with François, of course, and our conclusion for the moment was, as I said, to experiment, addressing both types of solutions, and see at the end, in the future IETF meetings, what seems to be the most interesting one, or maybe keep both of them, but then you add some more complexity, so I'm not sure about that.
A
A
I mean Coding for QUIC and FEC for QUIC, but anyway, we will include support for this packet-based approach in both documents, in order to discuss the technical details, which are very interesting, and to assess the consequences, the key benefits and drawbacks, of each type.
J
J
A
A
O
Spencer Dawkins. I want to strongly agree with everything that Dave Ferrand just said, with the possible exception that the QUIC working group is not actually working on multipath as a working group right now, even though it's a chartered activity, and it might be that you all will discover things doing research that they would find useful in the working group, as far as being able to move things forward. I mean, you know, it's an IRTF research group, so no one tells research groups what to do.
I
I
If we do not consider coding for this, then we are missing some important performance opportunities. So basically, low latency and reliability are requirements driven by the application that is running over the transport that we're using, and different applications of course have different requirements, and that is what we are seeing now.
I
We all know video conferencing, and it has relatively relaxed requirements on latency and reliability, but a lot of newer applications are appearing, like AR/VR, remote surgery with haptic feedback, cooperative driving, and these types of applications have much stricter requirements on reliability and latency and, if we're going to address those requirements, we need to think about our different options for how we are implementing reliability and achieving low latency in our applications.
I
Basically, if you look at the latency between two given points, there's a physical limitation on how low we can get it, right? So if I have two cities in the US, Boston and Stanford, the distance between them is approximately 4,000 kilometers, and that means that, if I do the calculation of the speed of light through a fiber, I'm going to get something like 21 milliseconds of latency one-way, and that's not going to change until somebody finds a way to improve the speed of light. So basically, yeah.
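The back-of-the-envelope calculation above can be reproduced directly; the refractive index of 1.47 for glass fiber is a typical textbook value, not a number from the talk:

```python
# The one-way fiber latency estimate above: light in glass fiber travels at
# roughly c / 1.47 (1.47 being a typical refractive index for fiber), so
# 4,000 km comes out near the ~21 ms one-way figure quoted in the talk.

C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47            # typical refractive index of optical fiber

def one_way_fiber_latency_ms(distance_km: float) -> float:
    """Propagation delay through fiber, ignoring equipment and queuing."""
    speed_in_fiber = C_VACUUM_KM_S / FIBER_INDEX
    return distance_km / speed_in_fiber * 1000.0

latency = one_way_fiber_latency_ms(4000)   # Boston -> Stanford, roughly
# latency is about 19.6 ms, a hard floor no protocol change can beat
```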
I
So there's some basic physics there. There are also some references that are not showing on the slides but, if you find the slide deck, you can go in and read a little bit more, if you want to fully understand this. But basically, what it's telling us is that the link latency, without the protocol, has a lower bound, right? So, if we're going to put our applications on these links, then we have to take that into account. So there's another... maybe... I think my slides are okay.
I
So there's another interesting thing, which is taken from the famous rant by Stuart Cheshire, who says "it's the latency, stupid", and basically what he's saying is: once you have bad latency, you're stuck with it, referring to this fact that, you know, the speed of light is not going to be easy to change. But there's another thing, and actually that's where the coding becomes interesting: making more bandwidth is easy, right? So you can always take another fiber, or you can,
I
you know, buy another DSL line, so we can always make more bandwidth, but we are not going to be able to improve our latency. So that's an important point when we think about adding coding as a solution to this. So, to state the problem briefly: if you have two devices that need to communicate with low latency, how can you achieve it, right? And what we are doing today, almost everywhere, if you go also to the different talks at the IETF, is ARQ, right? And what is that?
I
And then we have another trade-off, which we are advocating, right, which is that you use coding for the reliability, because what you can do there is trade bandwidth for reliability without hurting your latency. So, in theory, you can do something that is, you know, even bandwidth-optimal with coding, but typically not in practice.
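The bandwidth-for-reliability trade can be made concrete with a small probability calculation. This assumes independent packet losses and an idealized MDS code (any k of the k+r packets suffice), so it is a sketch of the principle, not of any particular scheme:

```python
# Trading bandwidth for reliability without adding latency: with k source
# packets and r repair packets of an idealized MDS code, the block decodes
# iff at least k of the k+r packets arrive. Extra bandwidth (r/k overhead)
# buys a much lower residual loss within the same one-way delay.
# Assumes independent losses; purely illustrative.
from math import comb

def block_failure_prob(k: int, r: int, p_loss: float) -> float:
    """P(fewer than k of the k+r transmitted packets arrive)."""
    n = k + r
    p_ok = sum(comb(n, i) * (1 - p_loss) ** i * p_loss ** (n - i)
               for i in range(k, n + 1))
    return 1.0 - p_ok

no_fec = block_failure_prob(10, 0, 0.05)    # any single loss stalls ARQ
with_fec = block_failure_prob(10, 2, 0.05)  # 20% overhead, far more robust
```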
I
So, to illustrate this point, we have a few visualizations. You can go in and check them there, where you can play around; they're basically small simulations that are running in your browser. You can go in and play around with them and try to see, you know, what the difference is.
I
I
Coding, of course, isn't the only answer, right? There are lots of other techniques, like edge computing, moving data closer to the people, having in-path retransmissions; you can do other things, but it definitely needs to be part of the toolbox that people have when they are doing these kinds of things. Also, if you want to see a real-life use case, there is a demonstration you can find.
B
I
People, I know, are sort of becoming aware that low latency is going to be a fundamental requirement for many of the applications that are proposed for 5G, and also at the IETF people are thinking about latency, right? But nobody mentions coding as a potential solution for this, and that's a little bit problematic. And of course, you know, we are here, but it seems that we are not very good at communicating these kinds of things to the rest. So maybe this is something that could be done.
I
A
Comments, questions? I think we all agree with the conclusion; it's not a big surprise for us but, as you mentioned, it's maybe something that we should be communicating more. We should find a way to make people better aware of this fundamental limit, and that we are one way of reducing, to a certain point, this latency in semi-reliable or reliable communications, yeah.
The two documents that you mentioned at the end, the two internet drafts: one of them, the one from Yahoo, I found out is now expired; not the other one.
A
I
A
A
Okay, so it's a quick presentation about some related work we have done, and that is almost finished, in the TSVWG working group, where we specifically explained that using coding, and in particular sliding-window coding, could help in reducing latency. So it's part of this big communication on: okay, FEC can help reduce latency. But this is in the TSVWG working group and, most specifically, what I would like to talk about now this afternoon is the lessons we've learned by specifying this FEC scheme and, most specifically, on the PRNG aspect.
A
There are a few important things that we should be aware of; it could help. So the specification has been, I would say, a little bit difficult and long at the end, for two reasons, basically. First of all, we totally overlooked the problem associated with the PRNG. That's a mistake that we made, and we fixed that only, well, one year ago, around May 2018.
A
We thought we were close to the end but, in fact, we went too quickly over this PRNG stuff. The linear congruential PRNG that we have classically used for years is totally broken when it comes to generating coding coefficients. That's something we need to be aware of. It's broken because it will basically produce, for certain types of seeds, the same first few parameters, because of the way it works. I mentioned this and explained it in more detail at a previous meeting.
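One well-known way the classic LCG misbehaves as a coefficient source can be demonstrated in a few lines. This illustrates the broader point about LCG weaknesses; it is not necessarily the exact failure mode the speaker encountered:

```python
# Illustration of a classic LCG weakness relevant to coding coefficients
# (related to, though not necessarily identical to, the failure described
# above): with a power-of-two modulus, the low-order bits have a tiny
# period, so scaling down by masking low bits yields patterned output.

def lcg(seed: int):
    """glibc-style LCG: x' = (1103515245 * x + 12345) mod 2^31."""
    x = seed
    while True:
        x = (1103515245 * x + 12345) % 2 ** 31
        yield x

gen = lcg(seed=1)
low_bits = [next(gen) & 1 for _ in range(8)]  # lowest bit of each output
# The lowest bit simply alternates 0,1,0,1,...: its period is 2, so any
# coefficient derived from low bits is badly correlated, not pseudo-random.
```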
A
So, unless there are questions on this, I would not like to come back to it, but keep in mind that it's broken. So we had to find something else, and we found another great PRNG, much more modern and, mathematically speaking, more correct, that produces better pseudo-random numbers. But this second specification is based on source code, on C source code. So the specification is the implementation; I would say the implementation is the specification.
A
There is no abstract specification of it, and it created several side problems that we found very difficult to solve. So this is what I would like to explain. So, basically, the main problem is about copyright and license. It might seem like something quite easy, especially when the source code is provided with a BSD-style license.
In fact, a BSD-style license is the one that the IETF is using, sorry, but still there is a slight difference on the licensing aspect, and there is also the copyright aspect, which is a separate story.
A
The copyright aspect is extremely hard to solve, unless the authors of the source code are also authors of the internet draft. So that's the only way, the only clean way we found to solve this copyright and license problem: having those C implementation authors also be internet draft authors. That's the clean way; otherwise, you will never find a solution.
We tried. We asked Spencer and Mirja, as well as the IETF legal department; we tried to find another way to address this problem.
A
That was almost impossible. So that was the first problem when we have this C source code reference implementation. Another problem is interoperability and determinism, and we can come back to this. Sometimes it can be quite subtle.
I was not aware that the C specifications, C90 and C11, the ANSI specifications, are a little bit unclear on how to, for instance, represent negative values.
A
So most of the compilers, I would say 99% of the compilers and target environments you may want to use, are using two's complement for representing negative values, which is more or less the basic assumption you may have on this. But it remains that it's not specified, not mandated, in the specification, so you may find a situation where your compiler uses a one's complement representation, or something else and, depending on this, depending on the way you implement your stuff, it may create determinism problems; it may end in different results. So that's something we discovered.
A
Fortunately, the AD was aware of this, and we mentioned this during the IESG review, so it was great. So we ended up with this TinyMT32 PRNG which is, from a mathematical point of view, quite good. It produces good-quality pseudo-random numbers and, as I said, comes with a C reference implementation. We have also specified three internal parameters.
A
You may find millions of such triples; we selected this one to make it easy to use, but keep in mind that, if you use a different triple, then you end up with a different pseudo-random number sequence. So that's what we specified. We have also been very careful in terms of determinism. For some use cases, determinism is not an issue, for instance when you want to create simulations.
A
Well, it's not a big deal if what you get on your laptop and what you get on a different laptop are a little bit different but, when it comes to FEC schemes, then it's absolutely mandatory to have exactly the same pseudo-random sequence with the same seed. So this is what we've checked, and we produced this table with the first 50 PRNG pseudo-random values that must be checked.
A
A
So that's something important; we clarified this. Another point, the last bullet: this PRNG produces pseudo-random 32-bit numbers. So that's the goal of this internet draft and this PRNG. Now, in some situations, you need to scale down this 32-bit value space into something smaller like, for instance, with RLC: we need pseudo-random numbers between 0 and 15, or 0 and 255. So, in that case, you need to specify how to do that, and we did it, but we did it in the RLC FEC scheme specification and not in the tinymt32 internet draft.
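The kind of scaling-down function being discussed can be sketched as simple bit masking. Note this masking version is only an illustration of the idea; the RLC FEC scheme specification defines its own function, which is the one that is actually mandated:

```python
# Sketch of a scaling-down function of the kind discussed above: reducing
# a 32-bit PRNG output to the 0..15 or 0..255 range needed for RLC coding
# coefficients. Masking low bits is illustrative only; the RLC FEC scheme
# specification defines the actual function to use.

def scale_down(value32: int, bits: int) -> int:
    """Keep the 'bits' least-significant bits of a 32-bit PRNG output."""
    assert 0 <= value32 < 2 ** 32 and bits in (4, 8)
    return value32 & ((1 << bits) - 1)

coeff_gf16 = scale_down(0xDEADBEEF, 4)    # a coefficient in 0..15
coeff_gf256 = scale_down(0xDEADBEEF, 8)   # a coefficient in 0..255
```

This only works well if the underlying PRNG has good low-order bits, which is precisely what the broken LCG lacked and TinyMT32 provides.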
A
So, if you want to use this, and if you have such requirements, then you have to provide your own scaling-down function. But that's not a big deal, and the good news is that... yeah? Sorry, why do you need that? Because I need a pure pseudo-random number between 0 and 15, or between 0 and 255, for the coefficient generation function. But if you just generate,
I
A
A
this is typically done with a function that takes the 32-bit pseudo-random number and generates the, let's say, 4-bit or 8-bit pseudo-random number that you need for your specification. So you have to provide this function. That's one way to do it; we used a different way, but it's also valid. But you have to provide this in some way, yeah. And why the 50? Is that just... No, no, 50 is okay; it's considered a good value.
A
No, it was also the size provided by the authors themselves. And the last slide: so, the good news is that it's not yet finished, but almost, and we hope to have this finished. We're optimistic for the next IETF; we'll see, but I think it's feasible. The source code has already been reviewed by the IESG, and we cleared out this licensing aspect, right? So it should be feasible to have it available for the next IETF. Anyway, if you need a PRNG, this is clearly an option.
A
I
Did you benchmark the time it takes to seed the PRNG? Because that's something where we experienced huge differences, and if you have a very high-throughput application which basically uses the seed as the way of generating the coefficients, then you also need a fast way of seeding on the decoder, for example. Yeah, yeah. Yes, we did it; we didn't find any problem.
A
But I agree with you; it's something that should be considered. In fact, the way we use these PRNGs for random linear codes in general is quite different from the way you use a PRNG for other use cases. In our case, we need to generate a high number of small PRNG sequences, rather than a single, very long PRNG number sequence.
A
I
A
Unless you want to use this RLC FEC scheme: the RLC FEC scheme mandates that this one be used, together with the function we provide to go from 32 bits down to a 4-bit or 8-bit value range; so this is mandated in that case. But if you want to use a different PRNG, as I said, you are free to do that, but you will have to specify this PRNG by yourself and, depending on the way, that can be a bit tricky; it depends.
C
I was there, with the transport area; people were asking questions about this, and they were extremely, I would say, strict in making sure that what was mentioned as a PRNG was a true PRNG. They were concerned. If I remember... I don't remember who was sitting at the table, but there was actually quite a discussion on that. So I actually think the work that was done there is very, very interesting, since we've been talking about random coefficients in this group for quite a long time. Yes.
C
C
A
Well done. So I wish you all a good time until the next IETF, and interesting work. So, as we said already, we will meet in Montréal, and we will have a session there, and we will also have this hackathon project, so do not hesitate: if you are interested, you can participate, either physically or remotely; both are possible. I think there will be fewer people there compared to this hackathon, from what I already gathered, but anyway, we will find a way to make progress on these interesting projects. So, thank you, everybody. Thank you.