From YouTube: IETF113-TSVWG-20220325-0900
Description
TSVWG meeting session at IETF113
2022/03/25 0900
https://datatracker.ietf.org/meeting/113/proceedings/
D: So for anybody who's getting themselves into this room, this is TSVWG. Our wonderful first slide renders quite well in the proceedings and pretty awfully here. So I'm...
D: This is a non-critical problem; we're about to start the meeting, so we're just gathering people. If you are one of the presenters and you want to do a quick sound check, please do. We will start in just a few minutes.
D: Right, okay, I suggest we start the meeting.
D: This is the Transport and Services Working Group. We have a two-hour agenda for today. I'm Gorry Fairhurst, one of the co-chairs; the other chairs, Wes Eddy and David Black, will be on the platform remotely. Apologies for the first slide: it's quite cute, but it's not very informative. The rest of the slides, I'm sure, will be readable.
D: So maybe I fibbed: this is the Note Well. The Note Well is traditionally in small print. If you haven't read the Note Well by now, then you really should. IETF Note Wells apply to all IETF contributions, on the mailing list, at the microphone and by any other means, and are the way in which we ensure that we have dealt appropriately with IPR and other issues that relate to the IETF process.
D: The ECN encapsulation guidelines draft is shepherded by David Black and needs text to replace the reframing-interaction wording in that document. This is a small change, but it's a change: it has to be agreed with the authors, and it will be put on the list once that change is complete. ECN for tunnels that use shim headers had a similar issue, which has been resolved.
G: No changes since we talked about it earlier this week. I think everything you said is accurate, Gorry. Good.
D: We have transport protocols, with DCCP; we have a differentiated services section, where we're going to look at NQB and DSCP considerations; and we have transport for UDP, where we're going to look at the UDP options work.
J: No? Then I will take this one. Unfortunately, that is the old version of the slide set, so if you want the latest one, please look into the agenda; you will get it there.

We have a lot of changes in the draft since the last IETF. We finalized the handshaking procedure (I have some more information about that on one of the next slides), and we made a lot of changes in the MP_PRIO option; here too I will give some more information about it.
J: You will also find the individual pull requests there, which describe exactly what we have done along with the draft development. We also developed our open-source code on GitHub, where we have a Linux reference implementation of Multipath DCCP available; we extended that one with some of the changes we made in versions two and three of the MP-DCCP draft. Next slide, please.
J: We were also part of the first TSVWG interim meeting in February this year. I just want to highlight what we presented and discussed during this interim. On the left you will find the MP_PRIO option, which I already introduced, and which gives us a fine-granular path management capability.
J: We can use up to four bits to set different prioritization levels, to enable or disable paths, and to give the scheduling engine of the multipath protocol the appropriate input on how to deal with the paths. That is, for example, quite useful for ATSSS, when the Wi-Fi path shall be prioritized over the cellular access.
J: We stabilized, as I also said, the handshaking procedure. We decided on a four-way design that more or less resembles the Multipath TCP logic, which I think is quite stable, so we adopted this idea. It's not exactly the same as in Multipath TCP, but quite similar; at least the four-way design is taken from there.
J: I have a dedicated slide on this in the backup; I'm not sure we will have time to look into it, but, as I said, it's the last slide here in this document, so feel free to look into it. And we started to present, during the interim, what the relation between 3GPP and IETF is when we discuss ATSSS-related multipath protocols. I will have a dedicated slide on this here as well. Next slide, please.
J: Yes, this one, thanks for bringing it back. Now, looking into the new closing procedure we introduced in the latest version of MP-DCCP.
J: Basically, we defined two new multipath options: one is called MP_FAST_CLOSE, and the other one is just called MP_CLOSE. MP_FAST_CLOSE is used to start an abrupt shutdown of the MP-DCCP connection; there's not really a negotiation about that. So if a Multipath DCCP peer decides to close a Multipath DCCP connection using MP_FAST_CLOSE, it will just send this option over all paths to the peer host and immediately shut down the connection on its side.
J: As soon as the MP_FAST_CLOSE is received on any of the paths at the peer host, the peer host will close the Multipath DCCP connection as well. So there is none of the usual procedure for closing a connection that is known from the DCCP context. For that we defined a new DCCP reset code, with the number 12, and we also added it to the IANA considerations so that it can be adopted.
J
The
amp
cloud
has
to
be
used
to
shut
down
all
of
the
passes
individually,
so
it
has
to
be
sent
on
all
parties
and
first,
after
all,
passes
are
shut
down.
Then
the
overall
multi-pass
gccp
connection
will
be
removed
in
general
for
both
multi-path
options.
We
use
a
key
data
field
that
uses
some
material,
which
was
exchanged
during
the
handshake
to
authenticate
subsequent
paths.
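A minimal sketch of the two closing behaviors just described; this is an illustrative model only, and the class and method names here are ours, not taken from the draft:

```python
# Illustrative model of the two MP-DCCP closing behaviors described above.
# MP_FAST_CLOSE: abrupt, sent on all paths at once, no per-path negotiation.
# MP_CLOSE: graceful, each path is shut down individually first.

class MpDccpConnection:
    def __init__(self, paths):
        self.paths = set(paths)
        self.open = True

    def fast_close(self):
        """Send MP_FAST_CLOSE over every path, then tear down immediately."""
        for path in self.paths:
            self.send_option(path, "MP_FAST_CLOSE")
        self.paths.clear()
        self.open = False  # sender shuts down without waiting for the peer

    def close_path(self, path):
        """Send MP_CLOSE on one path; the overall connection is removed
        only after all paths have been shut down."""
        self.send_option(path, "MP_CLOSE")
        self.paths.discard(path)
        if not self.paths:
            self.open = False

    def send_option(self, path, option):
        pass  # stand-in for putting the option on the wire


conn = MpDccpConnection({"wifi", "cellular"})
conn.close_path("wifi")
assert conn.open            # one path left, connection still up
conn.close_path("cellular")
assert not conn.open        # last path closed, connection removed
```

The key difference is that the fast close tears everything down at once, while the graceful close removes the connection only when its last path is gone.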
J: We use the MP_CAPABLE feature to negotiate, in general, the multipath capability between two hosts; that is part of the handshaking procedure. Also part of the handshaking procedure is the multipath key option, for exchanging the key material I already outlined, which is used to authenticate subsequent flow establishment.
J
We
have
the
multi-pass
sequence
option
that
carries
overall
connection
number,
a
sequence
number
sorry,
which
is
related
to
the
multi-pass
connection.
And
last
but
not
least,
we
have
the
multi-pass
rtt
option
which
can
carry
some
timing
information
that
is
quite
useful
in
the
reordering
process.
We
apply
on
on
receiver
side
to
compensate
latency
differences.
J: Some of them still need updates in the prototype, but in the draft itself I think this is stable and ready to be reviewed. On the right, in the "partially ready" table, I think we also made a lot of progress, but for most of the features the implementation is still missing. That starts with MP_CONFIRM, which we use for the reliable exchange of multipath options, and the multipath join which, as I already outlined, is used to establish subsequent flows. We have a section on a fallback mechanism: what happens if multipath cannot be negotiated?
J: Yeah, and last but not least we also have multipath add-address and remove-address. In case new IP addresses are identified which can be used to establish new Multipath DCCP subflows, those options can be used; or, in case IP addresses are not available anymore, or paths are not available anymore, remove-address can be used. And MP_PRIO I think I already outlined during this presentation. So the draft work is almost completed, and the main focus is now on the implementation, to keep it in line with what we have available in the draft. Next slide, please.
J: Yeah, that is probably interesting for those who are also interested in the 3GPP ATSSS functionality. Maybe in one short sentence: ATSSS is part of the 5G system, and it provides multipath transport from a mobile network operator towards a terminal device, towards a UE. That is where we are very active; that is where we see Multipath DCCP as one solution in that space. Here we are just matching what we think we already provide with Multipath DCCP, with the draft definition but also with the published prototype.
J: So if you have a look into the table on the right: what are the requirements from ATSSS? Basically, it needs multipath transport, and it needs non-TCP support. Maybe for those of you who are a little more familiar: ATSSS is an existing framework, but so far it can only deal with TCP multipath splitting, because Multipath TCP is used for that. So for non-TCP traffic there is no solution so far, and that is what we are looking into at the moment.
J: We need steering modes, which in multipath terminology are the scheduling entities; we need reordering, to compensate for the paths' latency differences on the receiver side and bring packets in order again; we need path measurement, as an input for good scheduling decisions; and we need path management, to establish or tear down subflows whenever required.
J: Matching this, on the left, with what we provide so far: we think we now have a very complete prototype available, with Multipath DCCP itself, with a new encapsulation framework, with a lot of scheduling logic, with a lot of reordering logic, and with multiple congestion controls, CCID2 and CCID3, which were already defined for DCCP itself, so those we have adopted; and we're also working on a new CCID5, which follows the BBR algorithm. Yeah, and last but not least, we also provide path management functionalities.
M: Markus, thanks. A quick question on this slide: have you started discussing with the Linux netdev community about upstreaming this?
J: Good question. At the moment it's a completely out-of-tree implementation, which makes it much, much simpler for us, because getting some first experience is in the foreground before we make the move and bring this into the mainline.
M: That contributed to the lateness of MPTCP: the problem was that nobody could really use it, because you needed to build custom kernels, and nobody wants to do that. And if you want to run containers with this stuff, you can't, right? So it needs to be in the Linux kernel by default if it's supposed to be a building block for something like ATSSS, because nobody's going to compile kernels.
J: Okay.
N: As far as I know, there are some IPRs involved. What is the relation between these IPRs and upstreaming this to the Linux kernel, or this open-source code? Since you say I can download the open-source code, does that mean that what's in the IPRs is not in the code?
J: Yep. So, some general updates on where we are now with Multipath DCCP.
J: Here I again make the relation to 3GPP, and I think on that we will also have a side meeting later on, after the TSVWG session, where our 3GPP delegate from the Telekom will give some insights into ATSSS.
J: Nevertheless, the point I wanted to make is that Multipath DCCP is so far the only solution for non-TCP splitting support which made it into the technical report after the last SA2 meeting, which happened in, when was it, February, yeah, exactly. For sure we also expect that the other solutions will come up in this technical report, but so far only Multipath DCCP is there, and we also have some great support from big parties here, as I already outlined.
J: We now have the full set of functionalities available which are ATSSS-compatible. I will not repeat all of them; I think I made this clear already.
J: If I look at the time, I think I have to speed up a little bit. So that is the relationship, which I think we will discuss again later in the side meeting; maybe we can skip this for this meeting now.
J: Very good. Yeah, that is something interesting: we spontaneously decided to participate in the hackathon, so we just took a free table and started live hacking. We now have a Google Pixel 4 phone available with Multipath DCCP integrated.
J: This Google Pixel 4 can be connected to a Multipath DCCP proxy in the internet, and we started a real Skype call between this Multipath-DCCP-enabled smartphone and a traditional smartphone without Multipath DCCP, through the MP-DCCP proxy. So what happened when the currently used access failed? There was an immediate handover to the other access, and there was no interruption in the Skype call. So it works. I have this phone available, and I'm open to presenting it to anyone who is interested.
J: Our goal is also to keep pace with the 3GPP Release 18 timelines; again, I think that will be something we discuss later in the side meeting. The details of when it will happen you can find here on this slide. So please feel free to join the reviewing queue for Multipath DCCP, or maybe to start hacking.
J: Hacking on the prototype would also be great. And with that, I would say, let's go to the question round. Thanks.
D: Thank you ever so much for joining the hackathon. It's always good to see running code, and I think evidence that things work is definitely something that's useful. You'll see the announcement while we're taking questions: this is a TSV-related side meeting across the corridor after the current break, just after the end of this meeting. The room capacity is limited, because this meeting was called together just to provide some coordination.
D: It's basically just a chance to talk and make sure that people understand the position of the different things that are going on, and we're really pleased that we have some participation from 3GPP representatives to help tell us a little bit more.
D: We have one question. Go ahead.
O: Well, when you gave the presentation, I tried to look for information regarding that beyond the two. So can you elaborate on that one? Thank you.
J: Very good, yeah. We are completely agnostic to that: we support as many paths as are available, so we don't have any limitation. It doesn't matter if there are just two paths, or two accesses, available, or three or four or five or six. Okay?
O: Okay, thank you. Oh yeah, by the way, for the 3GPP part, the ATSSS Multipath TCP: they're going to provide some proxy and some UPF. So for your implementation, MP-DCCP, are you looking at similar ones, to put some proxy on some sort of UPF in order to provide multipath? Thank you.
J: Yeah, that is maybe something which has to be clarified in 3GPP, whether a certain protocol is needed to signal that there is a Multipath DCCP proxy. At the moment, in the Multipath DCCP traffic itself there is no support for this, and I think that is also not the idea of the Multipath DCCP draft: it really specifies the basic protocol, and not any proxy functionalities.
D: Everything that comes out of this will be made available to people, and, sorry, but we can't arrange remote participation for that.
P: Just to comment on that: so yeah, it is IETF policy not to provide support for remote side meetings. That said, if somebody wants to use the WebRTC client of their choice, that's obviously okay, independently; if somebody chooses to do that, we can just email the list and let people know that it's available.
D: Really fun, this. I mean, maybe we need a longer IETF meeting. Yeah, we'll see what we can do. Tell us.
H: This is David. I have slide control; just say "next" and I'll move them forward.
C: Okay, thank you, right. So hi, I'm Ana, and I'm here to talk about our draft on considerations for assigning DiffServ code points. Next. I'm going to start with some updates: we published revision -01, and in this revision we included some clarifications around the different pathologies that can happen to a DiffServ code point as it crosses a path.
C: In particular, we added a new pathology, which is clearing the least significant bits of a DSCP. Since we published that, we have also got a load of comments; in particular, thank you very much to Brian Carpenter and Ruediger Geib, who offered some very helpful suggestions.
C: Mostly this is just adding or editing text on how we relate to other RFCs, like RFC 3086 on PDBs, or clarifying some of the text in RFC 2474, especially what it says about remarking of DiffServ code points at boundaries between domains.
C: We also got some suggestions to talk a bit more about management code points, and also to clarify that the GSMA IR.34 standard is not binding and has not necessarily seen a lot of deployment. Oops, sorry. So we're going to make all of these edits. I haven't had a chance to get back to people on the mailing list, but thank you very much; they are very helpful. I will propose some text, and we will get back to you.
C: Thank you very much. There's something else we want to discuss and want to include in a future version of the draft, and I'm going to try and paint a picture here of what that is. Next slide, please.
C: So I'm going to start with, sure, I have a lot of transitions, so I'm going to be saying "next" a lot, but I'm going to try and show you on this grid. These are all of the 64 code points, arranged in an 8x8 grid, and I'm first going to highlight the ones that have already been assigned. Next. So first off you have RFC 2474, which specifies the class selector code points; I think this is back in 1998.
C: Next. Then along come all of the assured forwarding code points, which can also encode drop probability. Next. And then, finally, you have three more assignments that have been made over the past 20 years: you have one for Expedited Forwarding, you have one for Voice-Admit, and, finally, in 2018 you have one for Lower Effort.
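The assigned values on the grid all follow from the DSCP bit layout (class selectors in RFC 2474, assured forwarding in RFC 2597, EF in RFC 3246, Voice-Admit in RFC 5865, Lower Effort in RFC 8622). A small sketch of that arithmetic, with the grid position defined as in the slides (row = top three bits, column = bottom three bits):

```python
# DSCP values behind the assignments on the 8x8 grid.

def cs(n):
    """Class Selector n (RFC 2474): the three class bits, low bits zero."""
    return n << 3

def af(cls, drop):
    """Assured Forwarding AF<cls><drop> (RFC 2597):
    class in bits 5-3, drop precedence in bits 2-1."""
    return (cls << 3) | (drop << 1)

EF = 46           # Expedited Forwarding (RFC 3246)
VOICE_ADMIT = 44  # Voice-Admit (RFC 5865)
LE = 1            # Lower Effort, assigned in 2018 (RFC 8622)

def grid_position(dscp):
    """(row, column) of a 6-bit DSCP on the 8x8 grid from the slides."""
    return dscp >> 3, dscp & 0b111

print(cs(6))              # 48
print(af(1, 1))           # AF11 = 10
print(grid_position(EF))  # (5, 6)
```

This is why, for instance, AF11, AF21 and AF31 all sit in the same column of the grid: they share the same low three bits.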
C: Right, so this is how the grid looks right now, with all of the assignments. Over the years (I've been measuring DiffServ ever since around 2016), in doing measurements we have come to know what code points are popular, in particular in specific types of networks or from specific types of edges. Next, please. So, for example, at the web server edge you see a lot of the AF11, AF21 and AF31 code points and, to a much lesser extent, CS3 and EF.
C: Next, please. Mobile networks: a lot of mobile networks just remark all of the incoming code points to only one value, and that's often AF11, AF12 or AF13.
C: Next, please. By examining a large number of packet traces at an internet exchange, we saw that a lot of the ICMP traffic does carry the code point CS6, as RFC 2474 states; so that's another one that is used. Next, please. And, finally, examining DNS server replies, we see that they often use CS1, that one in particular. All of this is measurement data; the latest slide deck has an appendix with all of the highlights of this data, and all of the measurements that we conducted are in there.
C: Right, so these are the code points that are used, based on measurement data. Unfortunately, the measurement data also highlights a problem.
C: We also validated the data through different methods: it was validated in a study on edge networks made through RIPE Atlas, and we also found this in the packet trace analysis. ToS precedence bleaching is extensive.
C: What does it mean? Well, if you clear the top three bits of a DSCP, what you end up with is a resulting DSCP value between 0 and 7. So, for example, AF11 bleaches to DSCP 2; EF, which is 46, if you slash off the top three bits, bleaches to 6. Next, please. So, essentially, all of the DSCPs in one column will bleach to the lowest value at the bottom of that column. That is a big problem for assignments.
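The bleaching arithmetic described here can be checked mechanically; a short sketch:

```python
def bleach(dscp):
    """ToS precedence bleaching: the top three (former IP precedence) bits
    of the 6-bit DSCP are cleared, leaving only the low three bits."""
    return dscp & 0b000111

AF11, EF = 10, 46

print(bleach(AF11))  # 2
print(bleach(EF))    # 6

# Every DSCP in a grid column shares its low three bits, so the whole
# column collapses onto the bottom value of that column:
ef_column = [(row << 3) | 6 for row in range(8)]  # 6, 14, 22, ..., 62
assert all(bleach(d) == 6 for d in ef_column)
```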
C: It's a two-way problem. If you have all of these popular code points that then get ToS-precedence-bleached to one small code point, it means that that small code point, be it 2 or 6, can no longer be used for assignments, because it's essentially polluted: it's being used by a lot of other traffic. And it also has implications for the larger DSCPs.
C: Then we have 2 and, as I've said, a lot of traffic aggregates, well, ToS-precedence-bleach down to 2, so that one is not necessarily very usable, at least in the core of the internet. Then we have 3, which is experimental. Next, please. Then we have code point 4, and this one has a different kind of problem; it's more of a historical bug in old SSH.
C: Next, please. Then we have DSCP 5, and we have NQB, which is provisionally allocated to this value.
C: Then we have 6. Next, please. Six has the exact same problem as 2, but to a lesser extent, because in that column you have EF and AF13, which are quite popular. And then you have 7, which is also left as experimental.
C: So you can view this in two ways. Either ToS precedence bleaching is something that is a given, it happens in the internet, and then assigning one of those code points that is smaller than seven is a big deal, and you should consider these smaller code points as aggregates, as opposed to assigning them for a specific purpose. Or you can consider that it's not a problem, because ToS bleaching is, after all, a pathology and it shouldn't be happening; and then you have other options. You can just choose...
C: Well, you can have a dual allocation, like NQB does, and deal with it that way, because it has one code point assigned for the edge and one code point for the core, and the code point for the core is DSCP 5, which is meant to traverse.
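The dual allocation lines up with the bleaching behavior discussed earlier: the edge value 45 sits in the column whose bottom value is 5, so a bleached edge marking lands exactly on the core code point. A quick check (values as presented here, not normative):

```python
def bleach(dscp):
    # ToS precedence bleaching keeps only the low three bits of the DSCP
    return dscp & 0b111

NQB_EDGE = 45  # edge code point discussed in this session
NQB_CORE = 5   # code point intended to traverse the core

# 45 is 101101 in binary; clearing the precedence bits leaves 101 = 5,
# so a bleached NQB-Edge marking lands exactly on NQB-Core.
assert bleach(NQB_EDGE) == NQB_CORE
print(bleach(NQB_EDGE))  # 5
```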
C: In any case, whichever way you choose to go about this, you have to think, because assigning a small code point risks making the code points in the same column unusable. Next slide, please.
C: You either have networks which don't care about DiffServ at all, and then they pass code points transparently, and this is a valid use case, and we've seen this in measurements. Then you have managed networks, another, you know, good use case for differentiated services, sorry, where operators police code points, allow only some through, and remark the other ones to protect internal nodes in the network, and that's perfectly fine.
C: So whatever operators do with code points, there's lots, and as long as it's not ToS precedence bleaching it's fine, because it doesn't affect new assignments of code points. Oh, and then, finally, there are the unmanaged networks, or what I call unmanaged networks, that have really weird behavior.
C: I consider ToS precedence bleaching to be in this category, for example, but there are also other types of pathologies, like, for example, bleaching the other three bits of the DSCP; that happens, but only in some very few weird networks. And you can have inconsistent remarking. Anyway, problem networks. Basically, we'd like to propose some text about this, and we would like to get input from the working group on what to say about them. Next slide, please.
C: So, yeah, what we say about them will go into the next version of the draft.
C: I guess at the moment our draft is informational; we don't know if we could maybe make recommendations for future assignments. I'm going to include the table and the implications that I've just talked about in the next version, alongside the text that other people proposed, and maybe it's also worthwhile talking to operators, to understand a little bit more about how things are used. But, yeah, that concludes my presentation. What I have is a lot of appendix slides with lots of data, so please be sure to check those out.
P: Martin Duke, Google. First of all, this is my first opportunity to thank you in person for doing this draft; I'm learning a ton just listening to all this. And that grid you showed really makes it super clear that we're burning the last generally useful code point, which gives pause for some reflection. Yeah, I would encourage you to take this to maybe the OPS area next time; that might be a good step in terms of the operator survey.
P: You may not reach the people who are still bleaching, because those are generally less informed, less connected people, but nevertheless that is a good entrance point into that community. But yeah, mainly, thanks. So your intent is still to have this be an informational draft, if I'm not mistaken, and you're proposing potentially an additional draft? What's the new draft supposed to be?
D: Oh, okay, right. Maybe I'll step in as chair here. So the current work item is informational, yes, and I guess this draft will be revised as informational, and it is informational until we get to the point where we start clarifying the way that these 20-year-old RFCs are being updated based on current practice.
D: I hadn't realized this when we started it, but maybe Brian Carpenter's recent post on the mailing list was actually quite deep, quite useful. Maybe after 20 years we should reflect on what the best practice is as we deal with the last few code point registrations; we should kind of get this right. So I think Ana's line that this document is informational is quite correct, if we think ahead.
D: Well, okay, so the thing is: it'll be good to get discussion on the data that Ana has, and to decide on the way forward. I don't think this blocks the NQB work, by the way, because I think for NQB we can find the right answer now; and then we really don't have very many code points left, so we have to decide what we're doing in future. So I think these things can work in parallel.
P: Yeah, I mean, having been aware of this for five minutes now, I would be tempted to (a) try to, you know, engage with the operator community and see if we can handle this bleaching situation, which I think would be great if we could fix. But, secondly, we've got a lot of experimental code points, and maybe deprecating half of them would be wise, so we have a...
D: I think we need to consider what Ana and Martin said. Actually, local use and experimental/local use of the DSCPs is incredibly important, yeah. And we also have managed use of DSCPs; and networks that just pass the DSCPs without doing a lot of changes to them; and those which do changes which are historically based, which are maybe kind of harmful. So there are maybe different categories, and this is going to be a thing that's useful to explore. I suspect all the people in the queue are going to talk about this. Yes.
J: Yeah, hi. I'm configuring bleaching at the border routers of Deutsche Telekom, and I'm a long-term DiffServ participant.
J: I appreciate work on an update of the existing drafts, because I think, and I contributed a lot of my operational experience there, what I'd like to have, or like to see, in future is something which allows for default transport in the backbone (default meaning just default, nothing else) but service differentiation in the access; and the sharing of resources should also be standardized. What I'm aware of is that you can either optimize for throughput, or you optimize for performance, which is low jitter and low delay.
J: I don't want to say it's that option or this one; it just should be fair and not discriminate against others. So there are no pipes; it's just a default behavior in the backbone, and a standardized behavior on the access for how resources are shared. And I can tell you that many people who request quality-of-service concepts from me nowadays come and ask for exactly that. Thank you.
E: I'd just like to express support for some kind of BCP or RFC updates to try to discourage the use of bleaching in the network in some effective way. So, yes, we can continue this offline.
D: I'll put that... Bob, you completely broke up.
D: Yeah, so the comment was: consider also IEPG, where you can have the same talk with a very different focus, because it's operators there who are probably really engaged with this. So, IEPG as well.
K: There must be two key questions. One is: you mentioned that there may be an operator survey; will that happen? And the second one is: when the recommendation comes, will the recommendation come to TSVWG or somewhere else?
D: Easy questions to answer; thanks ever so much for asking. We can launch an operator survey. We would have to do that, of course, with the OPS area, because it's an operator survey, and I think operator surveys have worked quite well in that area.
D: It looks like it might, because the current DiffServ architecture, with all its recommendations, was put in place about 20 years ago to make something work when nothing worked. Now we have things working; we might be able to change that a little bit. However, if it's working well, we might want to make only small changes. Thanks.
R: Yeah, I want to echo Martin's comment: this was really good; I felt like I learned a ton in 10 minutes. And I would encourage you to also take this to MAPRG as a measurement presentation; I think not only the operator community, but, I suspect, many others in the IETF community and the associated research communities would benefit from understanding these observations. Thank you.
D: Thank you, Ana. I think you've had enough invitations for the moment for doing more work. Please do some more work and come back.
S: All right, thank you. You can go on to the next slide. This is a brief update for the NQB draft. Obviously some discussion around code point assignments will continue on the mailing list, but here I just wanted to cover the edits that were made in the draft recently. I did do a presentation at the interim a short time ago, so not a lot of time has passed since then, but one update of the draft has been made since: this is draft -10.
S: And, on the next slide, I did send a list of the edits to the mailing list; this is verbatim what I sent. The next slide has a little bit better breakdown of what the changes really amount to: instead of going section by section in the document, my next slide really summarizes what the changes were.
S: The first one is the section in the draft that talks about what types of senders are compatible with the NQB marking and would be recommended to mark their traffic as NQB.
S: In particular, it talks about the sorts of flows, of the sorts of applications, that are sending at a relatively low rate. For those who attended the last IETF as well as the interim: note that that's been a point of discussion for a couple of rounds, to try to get that language right, and hopefully we got it right this time. But I encourage folks to take a look at that, and if there are further comments or suggestions on refining that language, I certainly would appreciate hearing them.
S: A comment suggested that a little bit more background and rationale would be helpful for explaining that. The third item is an additional implication of EDCA manipulation; this was a comment on the mailing list.
S: It pointed out that the draft suggests one approach to making legacy Wi-Fi networks look a lot more like they're supporting the NQB PHB; in other words, make them support it.
S: The point on the mailing list was that, well, by doing that you give up a priority queue: you turn a priority queue into a queue that has the same priority as best effort. So now there's a discussion of that, or a statement about that, in the draft. And then, fourth bullet, the IANA section: I reformatted that to align with expectations for the instructions to IANA, and, also per a comment at the interim, gave unique names to the two code points.
S: Forty-five and five. The names I've chosen are NQB-Edge and NQB-Core, although it's kind of an open question to the working group whether those are the right names, given, I think, some of the discussion about, you know, ToS precedence bleaching and other network effects that remark code points.
S: I've really been waiting for any other comments, or the review comments, before starting working group last call. Obviously the recent comments about the code point, I think we need to conclude those on the mailing list before taking this to working group last call. Looks like, Gorry, you have your hand up.
D
Oh, Gorry here, first talking from the floor so I can stretch and get some exercise. Yeah, thanks Greg. I think the whole DSCP thing can be dealt with, and I like the choice of registry names; registering the two of them with different names is probably right.
D
D
S
Right, all right, thanks. Yeah, the "should remark" and "should not use the value 45 across interconnects" points, I think those are the areas where I agree that you need consensus. And it does kind of get back to what is the end-to-end philosophy for DiffServ going forward. It seems like there's some view that DiffServ is by intention domain specific, and then obviously other views that it was ideally intended for end-to-end use. So.
H
For the history: DiffServ was trying to be both, trying to allow individual networks to configure the services that they wanted to configure, and to provide opportunity for end-to-end services. The end-to-end part has not worked out all that well; see Ana's wonderful slides for the gory details.
D
Any other questions? Okay. Maybe, David, do you think we're near a working group last call? Because it seems like the framework is close, and all we have to do is pin down some new text around the use of these DiffServ code points. Is there anything else you think needs to be considered for a working group last call to start?
H
E
S
They could also be classified into the same, or aggregated into the same, PHB, the NQB PHB. So I think that there's a demand for a code point sooner rather than later.
D
Maybe part of the question here is how long we need the allocation for. I wondered how good your crystal ball was. Do you think 45 would work across the internet core in five years' time, and we don't need the 5 assignment? Or do you think we need the 5 assignment forever and we don't need 45, because all the Wi-Fi equipment will be updated within five years? Do we have any clue about where things are going?
D
D
Well, I just thought, with asking the question, that the answer might be: no, we don't have a great crystal ball, and this has always been a problem in the IETF. We don't know what's going to be adopted by whom; we just encourage adoption, and then maybe we have to frame this text appropriately in a way that considers the possible outcomes.
E
Sebastian says that you have conveyed the core of his question correctly.
N
H
D
T
D
Thank you; that closes the DSCP part of our schedule. We're a little later than the agenda, but I think we're still on time for all presentations. So the next presentation will be by Tom Herbert.
D
I took the decision to have a slightly longer slot for Tom, so that he could have a full half hour, just because this is interesting and useful background, particularly for UDP options, but also for anyone who's in the transport area and wants to know what is currently going on with the implementation of internet checksums in real equipment.
D
So go ahead, Tom, tell us please.
I
Thank you. So my name is Tom Herbert, as Gorry mentioned, and I'm going to give a little bit of a presentation on how we're implementing the internet checksum. So, as Gorry said, this is kind of a relevant topic; it comes up a lot. UDP encapsulation, there's a lot of discussion on that, and actually any encapsulation, but we're also seeing questions being raised in UDP options, and just yesterday in Fred Templin's talk on IP parcels.
I
The topic also came up there. So the topic is relevant because the internet checksum, as we know, is quite pervasive. It's in all major transport protocols, UDP and TCP, and it's also in IPv4 as the IPv4 header checksum; it's kind of the go-to validation check. We like it because it's really simple and easy to compute: it's really just an addition.
I
It's well known that it's not the strongest of verification checks, but the thing that makes it relevant, particularly to the implementation, is that it can be very costly if we don't optimize it properly, computing a sum over some field of bytes.
I
But I think the relevance with protocols, especially new ones, is that when a new protocol is being developed, we do have to consider the implementation effects, particularly of something that is potentially costly like the internet checksum. Next slide, please.
I
The algorithm is a one's complement two-byte sum, and typically this just runs from a start offset in the packet to the end. So we're going to basically sum up all the two-byte words in the packet, from some beginning point through to the end, and that gives us an answer, and that answer is kind of the check.
I
The sender actually sets the checksum field such that everything sums to a known value; in the case of the internet checksum it'll be all ones, 0xFFFF. The receiver performs the same algorithm, so it adds up the same bytes, and all it has to do is match the answer to the expected answer. If it matches, then we assume the packet is correct.
I
So, as I mentioned, that's used in TCP and UDP, the IPv4 header checksum, the GRE checksum; UDP options will have a checksum also, and it's probably used in several other transport protocols and possibly even some layer two or layer three protocols. Next slide, please.
I
So the algorithm, the specific operation, is a one's complement addition, and the idea of one's complement addition is that we add two binary numbers of some word size. It's normal addition; if a carry is generated, though, that carry is added back into the result to get the final answer. So in this example we're adding 210, as a byte, plus 106.
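That wraparound can be sketched in a couple of lines; this is just an illustration of the operation described, not code from the slides:

```python
def ones_complement_add8(a: int, b: int) -> int:
    # Normal 8-bit addition; any carry out of bit 7 is
    # wrapped back into the low end of the result.
    s = a + b
    return (s & 0xFF) + (s >> 8)
```

For 210 + 106, the plain sum is 316 (0x13C); keeping the low byte 0x3C and adding the carry back in gives 0x3D, that is, 61.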
I
So one of the nice things about the internet checksum is that it has a lot of very nice arithmetic properties that we can work with to do some optimizations. You can contrast this, for instance, with some of the stronger checksums or verification codes like CRC32, which don't have these properties, which actually makes them harder to compute or harder to use.
I
One of the first properties, which is well known and actually used in internet checksums today, is that all ones is mathematically equivalent to zero in one's complement addition. So, for instance, if we have a word and we add the all-ones value to it, that actually equals the starting word; it's an identity. So there are effectively two zeros in checksum, or one's complement, addition.
I
It's also commutative and associative. We can add two words, switch the order, and get the same result, and we can group sums together in different combinations and get the same result. So, given a set of words that we're summing through one's complement addition, we can sum them up in any order: we can reverse them, we can group them together.
I
So there's a lot of options there, and that is actually going to be quite useful in some of the sub-algorithms that we have when we're manipulating checksums. We can also define a checksum subtraction, which is basically adding the NOT of the value.
I
The other property is that checksums of a larger word size can be folded to a smaller word size with an equivalent result. So we can take a 32-bit checksum value, for instance, perform an operation, and make it a 16-bit checksum value which is equivalent to the checksum computed at 16 bits. It doesn't work the other way, but we can go from larger word sizes to smaller word sizes, and I'll have an example a little bit later on of what the procedure for that is. Next slide.
I
If it's an odd number of bytes, then we simply, logically, pad a zero byte at the end. In some protocols, for instance TCP and UDP, there is a pseudo header, so the pseudo header is also added in. The idea of the pseudo header is to protect bytes and fields outside of the checksum coverage area. So, for instance, the pseudo header includes the IP addresses and the length in the case of the TCP checksum. So the pseudo header is created; that's obviously standard.
I
We add that to the one's complement sum over the coverage area, from the start to the end offset. That gives us the full one's complement sum, and then we basically take the NOT of that, and that becomes the result. And I should have mentioned that before this operation commences, if the checksum field is within the area that we're checksumming, usually we'll set that to zero for the purpose of the computation. Then, once we have the result, which is again a sum over all of the bytes that are covered, we store it.
I
That's the purpose of validation: same algorithm. If there's a pseudo header, it's included; if there's no pseudo header, like in the case of the GRE checksum, then we effectively just add in a zero, which means it's a no-op. And again, if the result is all ones, we consider the checksum valid; if it's not, then it's considered to be a corrupted packet.
So,
as
I
mentioned,
the
internet
checksum
it's
computationally
expensive,
we
do
have
to
basically
touch
every
byte
every
word
of
a
packet
in
the
case
of
something
like
tcp
or
udp
checksum.
So
there
is
a
lot
of
motivation
to
optimize
this,
and
over
the
years
there
have
been,
has
been
a
lot
of
work
to
do
just
that,
we
can
actually
divide
this
into
basically
two
areas,
so
we
have
some
checksums
that
cover
small
amounts
of
data.
I
The best example of that is the IP header checksum. Typically this is 20 bytes, maybe up to 40 bytes; it's a very small amount of data. So the direction we've gone on that is to have very specialized CPU instructions to handle that case: basically, a one's complement addition instruction.
I
The other side of this is large packets, that is, payloads. This, for instance, is the typical case of TCP and UDP, where we're calculating the checksum over the full payload, which could be thousands of bytes. In that case it really warrants checksum offload. This is where we basically have the hardware perform the checksum on behalf of the host, and all NIC vendors today support some variant of checksum offload.
I
It's heavily motivated, obviously, by TCP, where the checksum is never optional, but also UDP. So what they've done is they have optimized the NICs in a couple of ways; we'll get into that in a moment. But the upshot of this is that for TCP and UDP checksums, from a host perspective,
I
the NICs actually do all of the work, and we basically completely save the CPU. It's already baked in; these have been around for a very long time. So generally this sort of checksum computation is not considered to be a problem in terms of performance at this point. Next slide, please.
I
So if we look at the naive method to write a checksum algorithm in pseudocode: we want to do a 16-bit sum and, as I mentioned, the first thing we want to do is check the length.
I
In the sum on the fourth line there, we're just adding every two bytes into a running sum, and each time we do the add we have to check the carry. If the carry is set, which is what that check is, if sum16 is greater than all ones, then we add the carry back in and proceed to the next two-byte word. So it's again a running one's complement sum, where we're basically performing a normal addition of two bytes and then adding in the carry.
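As a sketch of that pseudocode in Python (an illustration of the loop just described, not the slide's exact code):

```python
def internet_checksum(data: bytes) -> int:
    """Naive Internet checksum: running 16-bit one's complement
    sum with the carry folded back in after each addition, then
    the one's complement (NOT) of the final sum."""
    if len(data) % 2:            # odd length: logically pad a zero byte
        data += b"\x00"
    sum16 = 0
    for i in range(0, len(data), 2):
        sum16 += (data[i] << 8) | data[i + 1]
        if sum16 > 0xFFFF:       # carry generated: wrap it back in
            sum16 = (sum16 & 0xFFFF) + 1
    return ~sum16 & 0xFFFF
```

Run over a packet with a valid checksum already in place, this returns zero, which is the receiver-side validation described earlier.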
Q
Is there any good data on what the biggest reasons are for checksum errors, and what their probabilities are? The use case always kind of escaped me from the analysis.
I
Yes, there has been a lot of work on that. I think it's probably out of the scope of this presentation, but we do know, for instance, that the checksum is susceptible to certain error combinations. It will detect all one-bit errors, but it's possible that it can miss two-bit errors, because they can cancel each other out.
D
Are you asking: is it memory copies, or DMA failures, or...?
Q
We very often had these discussions about when can I leave out the checksum. Remember, Gorry, right? When we did these tunnel things, UDP without checksum, there was this ongoing debate. But me, coming mostly from the network side, I never looked into the end-to-end part, which is where the expensive part is. So I was just curious.
I
Thank you. Yes, I would suspect it would be in the network if there is an error. Modern computers obviously have a bunch of redundancy checks and have ECC in memory, so it's much less likely; network corruption is probably more likely.
I
It's still not super likely. As I mentioned, we do know the internet checksum is fairly weak. If we need strong integrity checks, for security purposes for instance, then we do have to use much stronger calculations.
I
So one question that often comes up is: why isn't the Ethernet CRC sufficient? The main difference is that the internet checksum is end to end. The Ethernet CRC does provide much stronger protection, and it's always in hardware; the internet checksum, however, is much weaker. There's also another value to it: in the case of something like UDP options, the checksum can actually be used to differentiate between standard uses of a byte space and non-standard use cases.
I
L
Not a question but an observation. From my experience with our customers, we see checksum errors typically on the order of perhaps once a year across the entirety of our customer base, and that typically happens with fairly old network gear which has non-ECC-protected memory: basically, the packet is in flight and you have unprotected or undetected multi-bit errors, which then show up in higher layers of the protocol, where you do have checks that will flag a bad transport checksum in TCP, for example. Okay.
D
Thanks, Richard. Tom, you have about 10 minutes left.
I
Okay. So one of the optimizations, the first one, is that we can actually sum over larger words, as I mentioned. We can do a 32-bit addition and then translate that into 16 bits. Next slide, please.
I
So this is folding the checksum to 16 bits; it's pretty straightforward. Basically, starting with the 32-bit value, we add the high-order 16 bits and the low-order 16 bits with a one's complement sum, and the result is the folded value. Again, this falls out from the arithmetic properties of the checksum. Next slide, please.
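A minimal sketch of that fold (the two-step form guards against the fold itself generating a carry):

```python
def fold32_to_16(sum32: int) -> int:
    # Add the high and low 16-bit halves of the 32-bit sum...
    s = (sum32 >> 16) + (sum32 & 0xFFFF)
    # ...and fold once more, in case that addition itself carried.
    return (s >> 16) + (s & 0xFFFF)
```

The folded value is the same 16-bit one's complement sum that adding the original words two bytes at a time would have produced.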
I
On modern CPUs, as I mentioned, there are specific instructions to optimize the one's complement addition. On x86, for instance, there's add-with-carry: the ADD instruction itself sets a carry bit in a control register, and then the specialized ADC (add with carry) instruction performs an add, but also does that check of the carry and adds the result back in. This is used extensively in networking stacks, for instance when we want to implement the checksum over the IPv4 header.
I
This basically resolves to a few adds with carries. Next slide, please.
I
So I'll briefly touch on this. This is actually code to do the checksum of the IPv4 header. The first section is doing eight bytes at a time: in order to do the checksum calculation over 20 bytes, we do an eight-byte add, an eight-byte add, and then a four-byte add, all of those one's complement adds, and at the end we add in the carry bit where it was generated.
I
So I'll touch on this briefly. One of the common operations in dealing with checksums is when a packet header is updated; for instance, in NAT the IP addresses are updated. We do not want to recompute the full IP checksum, and we don't want to recompute the full TCP checksum. What we can actually do is figure out what the delta is, meaning we add a certain value into the checksum field that offsets the change we made elsewhere in the packet.
I
So it's a very common thing to do, and it's also very highly performant, because we can even precompute this addition in some cases. So updating the checksum for something like NAT may come down to one single add-with-carry operation, one single one's complement addition, and then setting the field. So again, the arithmetic properties of the checksum make it highly efficient to do things like this. Next slide, please.
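This delta technique is what RFC 1624 specifies; a sketch assuming a single 16-bit word changes (the equation form HC' = ~(~HC + ~m + m') is from that RFC):

```python
def add16(a: int, b: int) -> int:
    # One's complement addition of two 16-bit values.
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def incremental_update(old_cksum: int, old_word: int, new_word: int) -> int:
    """RFC 1624 incremental update, HC' = ~(~HC + ~m + m'):
    fold the change of one 16-bit word into an existing checksum
    without touching the rest of the packet."""
    s = add16(~old_cksum & 0xFFFF, ~old_word & 0xFFFF)
    return ~add16(s, new_word) & 0xFFFF
```

Replacing a word with itself leaves the checksum unchanged, which is a quick sanity check on an implementation.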
I
So checksum offload, as I mentioned, is very useful for performance when we're processing TCP and UDP checksums. As I mentioned, all NICs have this. There are actually two methods; I'm only going to touch on the more common method, or at least the preferred method, which is what we call protocol-agnostic offload.
I
This is the case where the NIC, the network interface controller that's performing the checksum calculations on behalf of the host, does not need to know what the particular protocols in the packet are. It can do a checksum calculation, both on send and receive, for arbitrary protocols; it works for encapsulated checksums, and we can use these techniques to validate multiple checksums per packet.
I
The alternative is more of a legacy mode, which older devices did, and some of the original checksum offloads did this: they only understand certain protocols and can do verifications on those protocols. However, outside of that, for instance when introducing a new encapsulation, we found that they basically don't work. So there's a strong preference in the community to use protocol-agnostic offloads, including in the case of checksums.
I
So, transmit checksum offload. Basically, this follows the same procedure as on the host, except we're going to tell the NIC, the device, how to do it. We send the start offset and the end offset in a transmit descriptor; you'll see the end offset is the end of the packet, so that's kind of implicit, and we give the device where to put the checksum.
I
The one thing the host has to do is kind of prime the checksum field. So, for instance, if the pseudo header is involved, like in the case of TCP, the host will compute the one's complement sum of the pseudo header, NOT it, and then set it into the checksum field. So when the device gets this packet via a transmit descriptor, it performs the calculation, and all it does is start from the starting offset and go to the end of the packet.
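A sketch of that priming step for an IPv4 pseudo header; the field layout follows the standard TCP/UDP pseudo header, but this is illustrative rather than any particular stack's code:

```python
import struct

def prime_checksum_field(src: bytes, dst: bytes, proto: int, length: int) -> int:
    """Value a host writes into the TCP/UDP checksum field before
    handing the packet off for checksum offload: the NOT of the
    one's complement sum of the IPv4 pseudo header (source address,
    destination address, zero byte, protocol, transport length).
    The NIC then sums from the transport header to the end of the
    packet and stores the final checksum."""
    pseudo = src + dst + struct.pack("!BBH", 0, proto, length)
    s = 0
    for i in range(0, len(pseudo), 2):
        s += (pseudo[i] << 8) | pseudo[i + 1]
    while s > 0xFFFF:                 # fold carries back in
        s = (s >> 16) + (s & 0xFFFF)
    return ~s & 0xFFFF
```

Because the NIC's sum over the rest of the packet then includes this primed value, the value it stores is the complete pseudo-header-covered checksum.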
I
So I'll skip ahead; one thing I did want to touch on is that there is a way to do multiple checksums per packet. In particular, network devices
I
may only have the capability to offload one checksum per packet. However, we also want to take advantage of the offload for multiple of them, like in the case of UDP encapsulation, where the UDP checksum may be set and the TCP checksum may also be set; we want to offload both of those. There is a way to do this, called local checksum offload, without requiring the device to compute a checksum twice. Next slide. So, receive checksum offload is also similar: we wanted a protocol-agnostic way to do that.
I
What we do is we just have the device calculate the one's complement sum over the whole packet, and it returns that sum to the host. Then the host stack can take that sum and manipulate it such that it can derive the checksum over any bytes of the packet; basically, it does that as it goes through the packet.
I
If we have the full checksum value from the beginning to the end of the packet, what we do is subtract out the one's complement sum of the IPv6 header, and in this case there is a surplus area, so we subtract that out too. What's left is precisely the sum over the UDP header and UDP payload, and then the host can use that to validate the checksum. So this does require the host to do a couple of checksum calculations, but they're over the small portions of the packet, not the big portion.
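The subtract-out step uses the checksum subtraction mentioned earlier (adding the NOT of the value being removed). A sketch, with made-up byte strings standing in for header and payload:

```python
def sum16(data: bytes) -> int:
    """One's complement sum of a byte string (zero-padded if odd)."""
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
    while s > 0xFFFF:
        s = (s >> 16) + (s & 0xFFFF)
    return s

def csum_sub(total: int, part: int) -> int:
    """Checksum subtraction: add the one's complement of `part`."""
    s = total + (~part & 0xFFFF)
    return (s >> 16) + (s & 0xFFFF)
```

Subtracting the header's sum from the whole-packet sum leaves exactly the sum over the remaining bytes, which is what lets the host derive the transport checksum from the device's full-packet sum.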
I
So, similarly to transmit, we do want to be able to support offloading of multiple checksums. This kind of naturally works on the receive side because of the algorithm I just described, so we don't need any special support from the device. All we need, really, is that full checksum, what we call checksum-complete. Next slide. So I will touch on these briefly.
I
This is where we want to use larger packets from the stack's perspective. When we're sending, we want to break those packets into smaller MTU-sized packets for transmit, and similarly, on receive, we want to collect together some number of packets on a flow, maybe make a super packet, and get that to the stack.
I
We get a lot of advantages from this: if we are able to process larger packets versus smaller packets, it reduces the per-packet overhead. If we consolidate or coalesce packets like this, though, we also have to consider the checksum, both when we're sending and receiving. I don't have time to cover this today, but there has been a lot of work in this area, and in fact GSO and TSO and LRO have become quite critical for performance.
I
So we do want to maintain these, and this is another area where the checksum becomes relevant, because we still want to maintain the properties of the checksum in terms of performance. And I believe that's all I have for today. Any questions?
E
More of an observation here: the checksum delta technique is very useful for updating fields in the IP header, for example, that commonly get changed en route, such as the ECN field.
E
There can be bugs in these implementations, and these result in certain packets that have been changed en route having incorrect checksums while the rest are correct. So it's a useful technique; you just have to make sure you get it right.
E
Otherwise you get problems that might sometimes be hard to notice, because an incorrect received checksum looks like a dropped packet, which has the same effect on congestion control as a CE mark, for example.
D
I
The stack, when it creates a packet, will basically know where the checksum is, so it's not super critical for it to be in a fixed location.
I
However, the device may also have constraints. So, for instance, it's very likely that when a device wants to offload a checksum, the start offset, and in fact both of the offsets, may need to be within the header of the packet.
I
So all these devices really assume that the checksum we're offloading, and the field, are in the header, because they may only have, for instance, a one-byte offset field; real estate in the transmit descriptor is very costly, so we can only get a few bytes for this, so they may use a single byte.
I
Typically, we assume that the checksum is at a two-byte alignment, so with 256 values we can double that, which means the checksum could be anywhere up to, I guess, 510 bytes into the packet. So having it at a fixed location isn't as relevant as making sure it's aligned to an offset of two bytes, and trying to keep it within the header of the packet so that we can fit it into the device. Obviously, for software stacks it's not so relevant.
I
There may be some instances, particularly on some older systems, where they don't really like to do unaligned operations. So basically, for, I think, all the currently defined protocols, the checksum start should always be two-byte aligned, to avoid any issues with unaligned operations.
I
Yeah, and I think that alignment is what's in the current draft. Okay, so, yeah, I noticed it was one-byte aligned. I'd also point out there's a little bit of a misnomer that I see come up, particularly in the context of UDP encapsulation and in the context of UDP options.
I
When someone makes the checksum optional, that is not necessarily saving cycles or saving performance, and in fact, in some cases it can be worse. And the reason for this is, consider something like UDP encapsulation, where RFCs 6935 and 6936 basically allow checksum zero even in IPv6, where IPv6 basically states that the checksum is always required, in certain circumstances, particularly from a host point of view.
I
If someone sets the checksum to zero thinking that that's going to save cycles, we may still have to compute a checksum, because there's an embedded TCP packet and we're receiving this on, say, a host in a virtual network that has a number of VMs. So we still have to offload that TCP checksum, so we're not really saving much there, because we have to go through all the algorithms, and, as I pointed out, we're still going to get the full checksum from the device anyway.
I
But the problem is, with some of the legacy checksum algorithms that I described, if we set a checksum to zero and we still have to offload that inner TCP checksum, the receiver may in fact have to go through the whole checksum calculation on the host, which is really where performance drops significantly.
I
So I think the checksum, we've absorbed it, at least on the host, in the host networking stack; there's very little value, performance-wise, in disabling the checksum from a host perspective.
I
The only reason, I think, that you would disable it is for devices like routers, which were the motivation for disabling the checksum in IPv6: if they're communicating and they're terminating UDP, they may not have any hardware to support checksum offload.
I
And, yeah, obviously there's a lot of discussion on this, but I do want to stress this fact, and I said it before: we've absorbed the cost of the UDP and the TCP checksum, right? That's not a current problem. The current problem we have is when new protocols are introduced:
I
how do we continue to leverage those offloads and not accidentally introduce some protocol combination where we're computing checksums on the host, and basically the performance is such that the new protocol may not even be viable? So this is where the implementation really becomes critical; we have to consider the implementation in these new protocols.
D
Thank you. We'll move to Joe's presentation now. Thank you, Tom, for that. Oh, and I'm going to be Joe, so, just...
D
So, Gorry Fairhurst, proxying as Joe Touch, on the UDP options update. There are just five slides, and the first one talks about core updates between version 13 and version 15.
D
The first of these is the new option area structure, and what the OCS is doing, which is why we allocated most of the time in tsvwg to the overview by Tom. The OCS is the checksum that's applied in UDP options over the options area, where it's placed and how it's aligned. This is now in the draft; it's been in for a little while, being revised, so the text should now be good. Please look at this.
D
If that concerns you: Joe also integrated the unsafe kind, primarily to do with fragmentation, and the fragmentation text was updated. Request-response is now required as an option; in other words, if you're implementing UDP options, we expect that function to be part of the implementation you provide. Next slide.
D
There are a few other updates, obviously nits. We got quite a lot of feedback from the working group, and in that feedback people noticed little inconsistencies: numbers that didn't form a complete series, numbers that were missing, and a few places where we made clarifications about the format.
D
One of the things we did change in the more recent versions is to decide when the OCS field is optional, the thing that Tom was just talking about: whether we should have an optional mode for OCS.
D
D
D
D
I
So I asked this on the mailing list: what is the state of implementation?
D
I think the question is really directed to Joe and the working group in general: do we have implementation experience of the latest version? I have not heard anything discussed on the mailing list; please bring it there. The general concept of UDP options is such that an earlier version of this was implemented. Tom Jones will speak on DPLPMTUD, and he did part of that implementation work; it wasn't pushed to the kernel.
D
It wasn't reflecting the latest version of this draft, but it demonstrates, perhaps, that the thing might well be implementable. As in other areas, we'd love an activity to implement this; we'd love a hackathon activity to take this and make something out of it. Implementation experiences would be welcome. Does that answer, Tom? Do you want to follow up?
D
Excellent, clashing Toms, that's good. Is that the last slide? Okay, I'm going to the working group chair position now, because I'll sit down.
D
I'll just announce that when we complete the revision of UDP options that's coming, we expect the document to be stable, and at that point I've invited some people to review it on behalf of tsvwg: Tommy Pauly and Colin Perkins have agreed to look through the specification and see if they can see anything from a full read-through, as our first stage of review. This is a new transport protocol.
D
U
Can I have the next slide, please? This is just our names. So we have the DPLPMTUD UDP options document. It just provides additional text on top of UDP options to explain how to integrate the requirements to implement DPLPMTUD, so the formats and a bit about how to send packets. It's a really short document; it'd be great if more people could read it. Since the last revision we've aligned this with Joe's spec.
U
A lot of this alignment has just been minor updates in terminology and what things are called. So we changed "probe token" to "nonce value", because that's the language Joe went to; we clarified that we can't send the two kinds of options more than once in each datagram; and we've rewritten some of the text on probing with data to make it easier to follow. It's a small change; it's a really small document.
U
We don't have a ton to say right now, and if we go to the next slide, I think the most important question is there: we think we're finished. I know we're metered by UDP options proper, but we think we're done. We'd love to get some more feedback from the working group. I actually think this document could probably be almost wrapped up, pending just minor changes from Joe in terminology and language in UDP options.
V
Yes, Mike Heard. I've read the document; I didn't post any comments to the mailing list because I didn't have any. It looks really good. The one question I'd have is for you, Tom, and Gorry.
V
Would you envisage that the DPLPMTUD procedure could be implemented as a shim on top of the UDP options, rather than being integrated into it? I ask that because it makes a difference in what we want to specify in the base spec for an upper-layer interface. I think that's something that hasn't really gotten sufficient attention and scrutiny. Thank you.
U
I am not really sure what the question is. I mean, if we're using the UDP options to implement DPLPMTUD, then it's not really a shim.
U
I think the service offered to the upper layer is reporting the discovered value, turning DPLPMTUD on and off, and maybe tweaking some of the parameters. There's an SCTP implementation of DPLPMTUD, which is experimental, which has around five parameters to it, so it's a really small service set. I don't really know how you would shim this in. It's not really a core part of UDP options, because it is protocol action rather than just the wire format, which is most of UDP options. So I'm a bit... maybe you could clarify.
V
D
I guess the chairs' action is: this document will be declared finished if nobody comes and says that more work is needed. So please review it, please check it, and if I hear nothing, then we will ask the co-chairs to start a working group last call on this, although we will probably keep that in synchronization with the UDP options base spec. It's 12 noon, and we're at the end of the meeting.