From YouTube: IETF109-6MAN-20201120-0900
Description
6MAN meeting session at IETF109
2020/11/20 0900
https://datatracker.ietf.org/meeting/109/proceedings/
B: While we're waiting for him to come back, one thing I discovered I had to do: I'm on the beta stream of my home network provider, and they usually do updates at around two in the morning, so I got them to turn that off for this week. It's not usually an issue. So, all right, why don't we get started? Welcome to 6MAN — this is our second session. Next slide.
B: I think you've seen this, and this was the agenda from the first session. We didn't get to Linda Dunbar's talk, so that's going to be the first talk now. Next slide. Then we have a report on the SR compression design team. Many of you may have been in the SPRING working group session that just preceded this, and there was a pretty big, long discussion about this that we're going to continue.
C: Good, thank you. So here I'm going to talk about an IPv6-based solution for the 5G edge computing sticky service. Next page, please. A little bit of background: 5G edge computing is a project in 3GPP SA2, under study item 23.748.
C: In a nutshell, there is a 5G system on the left-hand side of the dotted line: we have the cell tower, we have the packet session anchor (PSA), and we have distributed functions for the 5G system. There is an interface to the local data network — they call it the LDN — for edge computing. The common characteristic of this 5G edge computing is that there can be many back-end servers for the same application, and they are in very close proximity to you. Next page, please.
C: Today the trend is to use anycast instead of having multiple local DNS resolvers in each LDN. There are many benefits of using anycast: not only can it balance the traffic across different servers, it can utilize routing-layer information about network conditions to achieve better load balancing, and it removes the single point of failure at the DNS resolver and the application load balancers. Also, many clients or user devices may use their cached IP addresses for an extended period of time.
C: Of course, using anycast among multiple servers in close proximity also brings up issues, precisely because they are so close to each other in a 5G network. One application server in LDN1 may be very close to one in LDN2, so the difference in network distance is not significant enough to balance out other factors — for example, higher capacity, or one site being more preferred than others. So there are other issues brought up by this. Next slide, please.
C: The sticky service in 5G edge computing refers to the scenario where the edge servers are very close to each other, and the user device (UE), initially anchored to PSA1 (packet session anchor 1), moves to a second cell tower and is re-anchored at PSA2. For this particular UE, the service is better served by the original server — that is why they call it sticky. If you look at local data network 2, there will be a lot of traffic from different UEs — different users and different applications — and even the same user may have multiple applications.
C: All those users may resolve to, for example, IP3 and be anchored to edge application server 2, whereas the newly migrated UE1 needs to go back to LDN1. In the 3GPP SA2 edge computing project there are many proposed solutions, and we find that IPv6 actually provides a very elegant one. That's why we want to present it here and get people's feedback. Next slide, please. Next slide.
C: IPv6 has this elegant, very convenient destination options extension header. This is the flow when the UE's traffic comes in anchored to PSA1. After the traffic comes out of PSA1 to the ingress router of the local data network, it has a source and a destination, and, based on the criteria for choosing the closest anycast server, it will end up going to an anycast server — say S1, attached to router R1. We call R1 the egress router. We assume the sticky behavior is not for all services, so there will be an application manager informing the routers of the services that need to be sticky.
C: When the S1 server comes back with the response toward the source, R1 uses an ACL to filter out the service and inserts the sticky-destination sub-TLV inside the destination options extension header. The traffic then comes back to the UE, and the UE has the expected behavior of copying the destination options header from the received packet into the subsequent packets, for the services and flows that need to be sticky.
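The insert-and-copy behavior just described can be sketched in a few lines. Note this is an illustrative simulation: the option type (taken from the experimental range), the TLV layout, and the 32-bit egress-router identifier are assumptions for the sketch, not codepoints from any published specification.

```python
import struct

# Hypothetical option type from the destination-options experimental
# range (RFC 4727); a real deployment would use an IANA-assigned value.
STICKY_OPT_TYPE = 0x1E

def build_sticky_option(egress_router_id: int) -> bytes:
    """R1's role: encode a destination-option TLV (type, length, data)
    carrying a 32-bit identifier for the original egress router."""
    data = struct.pack("!I", egress_router_id)
    return struct.pack("!BB", STICKY_OPT_TYPE, len(data)) + data

def ue_copy_option(received_opt: bytes) -> bytes:
    """UE behavior described in the talk: echo the received destination
    option, unmodified, into subsequent packets of the sticky flow."""
    return received_opt

opt = build_sticky_option(0x0A000001)    # example 32-bit ID for R1
assert ue_copy_option(opt) == opt        # copied verbatim, not rewritten
```

Router B can then match the flow with an ACL and read the egress-router ID straight out of the TLV, without any per-flow state of its own.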
C: With that, when the UE moves and is anchored to PSA2, the traffic comes to router B. Router B will have been informed by the sticky-service management system to filter those packets, and from them it can extract the original egress router, which is R1. With that information, router B can use either a tunnel or segment routing to forward the packet explicitly to router R1, so that the service stays sticky to the original server. Next page, please.
C: So here is the sticky-service sub-TLV. There is a sticky type defined right now, and we have two types. With type one, the egress router is identified with a 32-bit identifier, even if the egress router itself is using an IPv6 address.
C: We assume that in some environments the local data network is managed by one provider, so it can use a shorter identifier to represent the egress router. If that's not the case — say, in a heterogeneous environment where you have to use the actual address — then the sticky type can be set to two. In that case you use the tunnel-endpoint sub-TLV, which is defined by the tunnel encapsulation draft, to represent the egress router. We put this sticky-destination sub-TLV in the destination options extension header. Next page, please.
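The two sticky types just described differ only in how the egress router is encoded: a short 32-bit identifier for a single-provider LDN, or a full IPv6 tunnel endpoint for the heterogeneous case. A minimal sketch, with illustrative field sizes that are not from a published spec:

```python
import struct
import ipaddress

def sticky_subtlv(sticky_type: int, egress) -> bytes:
    """Hypothetical sub-TLV encoding for the two sticky types in the talk."""
    if sticky_type == 1:
        # Single-provider LDN: 32-bit egress-router identifier.
        body = struct.pack("!I", egress)
    elif sticky_type == 2:
        # Heterogeneous environment: full IPv6 tunnel-endpoint address.
        body = ipaddress.IPv6Address(egress).packed
    else:
        raise ValueError("unknown sticky type")
    return struct.pack("!BB", sticky_type, len(body)) + body

assert len(sticky_subtlv(1, 42)) == 2 + 4            # short identifier
assert len(sticky_subtlv(2, "2001:db8::1")) == 2 + 16  # full address
```

The design choice is the usual trade-off: the 32-bit form keeps the option small for every sticky packet, at the cost of requiring a provider-wide mapping from identifier to router.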
C: Another option — and here I hope to get some feedback from the experts — is the IPv6 routing extension header. By definition, the routing extension header lists a number of intermediate nodes to be visited on the way to the packet's destination. In this case, the egress router would be the one node to be visited. So we'd like some feedback: routing extension header versus destination options header — do people have any opinions on that?
F: Okay, I was going to agree with Ron. I think if you put it in the destination options header, then you open the possibility that it could still be delivered to the endpoint — L2, or whatever it was — but that L2 could open a proxy session for certain applications and maybe proxy it back to L1.
C: Thank you. Okay, next page. This is for when the user device — say an older version — doesn't do what the 3GPP spec specifies: it doesn't do anything with the incoming packet and doesn't copy the destination options header into the subsequent packets. This is one option to achieve the sticky service in that case.
C: With that, the 5G management system can notify the local data network management system that this particular user is to be re-anchored. The ingress router will then be able to find out, through its routing protocol, where this PSA2 is attached, and send this sticky-service sub-TLV information to the new ingress router. So that's the service without any dependence on the UE behavior. Next page. Here is another case, where there is no assistance from the 5G system at all.
C: The router needs to inform its neighboring routers. Assume that at 5G site A, router A has, say, ten immediate neighbors; it needs to be able to send this destination-header information to all the neighboring ingress routers. All those ingress routers will have the ACL to filter on the sticky-service IDs, and if one of them ever receives a packet that matches a sticky-service ID and flow label from its neighboring router, it will extract the original egress router from the destination options header and then use either tunneling or an SR approach to forward the packet to that original egress router. Next page.
C: Okay. As I said earlier, the purpose is to get feedback from the experts here to validate whether this approach is actually workable. If it is, we can propose it to 3GPP to show that the IPv6 solution is actually very elegant compared with the many other proposed solutions out there.
B: Okay, so we're a little over time, but if there are some quick comments — if not, we can take it to the list. All right, well, thank you, Linda, and yes — please encourage people to comment on the list.

A: Let's see. So, Weiqiang, please, the floor is yours.
G: The design team was set up this July, and we have eight team members, from Huawei, China Mobile, China Telecom, Cisco, Juniper, ASU, Nokia and ZTE. We have a mailing list — it's private, but the archive of the documents can be read by anyone.
G: I think the team members in the design team put great effort into the document, and we have already submitted the requirements draft on time, as we promised. I think we should thank all the team members. Next page.

G: The working group also hopes we can answer the following two questions. One is: given there are several proposals already out there, why is there value in picking one, or some?
G: For the design team, I think we have a rough plan. The first stage, from IETF 108 to IETF 109, we hoped to output a requirements draft — I think we have achieved that. The second, from IETF 109 to IETF 110, is to do some analysis of the several proposals and output an initial analysis draft. So this is the basic plan. Next page.
G: Next page — there we are, sorry. The precise status of the design team: so far, so good. I think we have achieved what we promised until now. We kicked off at the end of July, and since then we have had more than 20 meetings. During those meetings we drafted two versions of the requirements, and you can access the current version of the draft through the link on this page. We are calling for review, and we also hope the 6MAN working group can give us comments, so that we can resolve them as soon as possible.
G: Okay, here's an overview of the requirements draft. There are three main parts to the requirements. The first aspect is the SRv6 SID-list compression requirements; the second aspect is the SRv6-specific requirements; and the last one is the protocol design requirements. We also have an appendix in this draft, which contains requirements without unanimity — items for which concerns remain within the design team. In the figure, the items with the check flag are those which have been agreed within the design team.
G: Next page. For the analysis draft phase, the design team will first provide a list of proposals. I think the list will be posted to the IETF in January 2021, and we hope to submit an initial analysis draft to the IETF before the next meeting. So that's the basic status and plan for the design team.
E: First, thanks to Weiqiang Cheng for doing a lot of work. Second, people might get excited that the document looks incomplete: there are requirements that are still in the open state that aren't in the document. So if it looks incomplete, don't get too nervous about it yet. Second is a question, probably for Erik Kline and you, Bob, and Ole. We've had an open call for adoption on the CRH draft since late May; it's waiting on this design team, and it looks like the design team won't be back until IETF 110.

E: Now, I have customers who are interested in the CRH but could care less about the SRv6 control plane — they're provisioning their boxes from a controller that does everything, so they don't want the SRv6 control plane or anything else. Sooner or later, when do we make the decision to adopt CRH, for no reason other than for the people who are using it and not using any IETF routing protocol?
F: I think it's important to note that the SR design team — well, it may be going slow, but it does seem to be working; it's just maybe not working fast. But we had talked about there being some other options, like whether or not 6MAN needs to adopt the CRH or whatnot — you just need a routing header of some type. So I think we could consider that. I could double-check with the routing ADs and see how they feel about it. I don't personally have any problem with that at all. So, okay.
B: By watching what the design team is doing, we can sort of try to figure out where they're heading, and that may help too.
G: Excuse me, could I say some words on that? From the design team side, I think the scope for us is very clear.
G: From my point of view, we should keep studying those different proposals even if the CRH is adopted by 6MAN; I don't think there is a direct relationship with the drafts the design team outputs to the IETF. As I just mentioned, any drafts output from the design team are all individual drafts; they are not any agreement of the working group or of the IETF. So I think the design team just gives some guidance or information on the direction; I don't think that should be used as a justification or something like that. Thank you.
H: Just on the question of whether the design team thinks a new routing-header subtype is needed: the design team is just focused on building the requirements and then analyzing the proposals, Eric. So I don't think you're going to get anything on that — there won't be a design of a solution by the design team. It's just the requirements and the analysis of the proposals that it's going to produce.
D: Yeah, the progress of the design team is going well, and we are going to finish the discussion as soon as possible, but it's not like a contest. And it's also not a requirement of the 6MAN working group, so I don't think it determines any solution.
I: So what I kind of want to say is: looking at this whole CRH thing, right now we're in a situation where this has been around for a number of IETFs, and we've moved from requirements documents to a design team, et cetera, et cetera. The fact is that this code is shipping today — it's in production code — and there are people like myself who need it, and we need it in order to deal with geopolitical issues. For one thing, it's critical for dealing with geopolitical issues and avoiding the suppression of internet freedoms.

I: That's what we're using it for — I'll be brutally blunt — and by delaying this any further, all that we are doing is driving protocol development further outside of the IETF, because the development is not going to stop. This is simply too critical, and I would argue that that is not in the interests of the technical community.

I: It is not in the interests of the vendors, whose very customers are the operators, and we have a massive demand for this on the continent, where we have to have a working solution — which is what CRH gives us — to deal with authoritarian states who wish to play with traffic. This is how we deal with it, and I am pleading with this working group: let's stop the delays, because CRH, as we've said, is a building block. It is not tied to the SRH.
A: You are very faint, Wim. We can't hear you. Do you want to see if you can fix your audio issues? Then we'll go to Safar and come back to you — is that okay? And when you're at the head of the queue, please unmute yourself a couple of seconds ahead of time, because it takes a little while to get going. Okay, Safar disappeared.
J: Okay, this is from Huawei. First, I very much appreciate the design team's work, because I know they are trying very hard to propel this work forward. The second point: the adoption of the CRH, and the relationship between the adoption and the requirements, has been discussed between the routing area and the internet area. I think that if the work is being done for the adoption, it should be coordinated not only within the internet area but also with the routing area and the SPRING working group. The third one — something I want to clarify about the adoption, the first adoption.
J: Of the CRH: even without the requirements from the design team, there are still open comments and challenges from the existing SID work. For example, I want to know how the ID is to be identified, and also, besides this identification, how the existing functionalities — such as L3VPN and L2VPN — would be supported with the CRH.
B: Okay, thank you. So, Wim, last — let's see if your audio works.

B: We could hear him okay. All right — well, thank you, interesting discussion. We will have some follow-up with our ADs after the meeting is over.
L: Excellent, okay. So this talk is on a draft that's joint with Bob. I'm going to do the talking; we're being helped by Ana, who's doing some of the data experiments, which we'll talk about a little later. The draft is the IPv6 Minimum Path MTU Hop-by-Hop Option, and we're at revision 04, which we'll talk about in a few slides. Please go ahead.
L: The background is that we assert Path MTU Discovery isn't working well — we'll explain that with some measurements in a few minutes. The hop-by-hop option is being proposed as a way to learn the path MTU that routers have, as a probe packet is sent across the network.
L: This is the shape of the hop-by-hop option, if you haven't seen it before. The B and C lines just underneath the packet diagram are the most important. In other words, if you are an IPv6 router and you get this hop-by-hop option, you can skip it and continue processing. We'd prefer you to fill it in, because it will tell us things, but we're aware that some routers won't do this, and that's quite okay — though we'd prefer you to update it.
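The per-hop behavior being described — each router that understands the option records the smallest link MTU seen so far, and a router that doesn't simply skips it — can be sketched as a small simulation. The link MTU values and initial value below are illustrative assumptions, not the draft's wire format:

```python
def router_process_minpmtu(min_pmtu: int, outgoing_link_mtu: int) -> int:
    """Per-hop processing sketched in the talk: keep the minimum of the
    value carried in the option and this router's outgoing link MTU.
    A router that doesn't implement the option leaves the value alone."""
    return min(min_pmtu, outgoing_link_mtu)

# Hypothetical link MTUs along a path from source to destination.
path_link_mtus = [9000, 4352, 1500, 9000]

mtu = 0xFFFF                      # sender starts with a large value
for link_mtu in path_link_mtus:
    mtu = router_process_minpmtu(mtu, link_mtu)

assert mtu == 1500                # the destination learns the path minimum
```

The destination can then return the learned value to the sender (in the option's return field), so one probe packet reports the bottleneck of the whole path.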
L: What have we done in revisions 03 and 04? We did two revisions in the last period, and they did a fairly big rewrite of the words. Our original words were good, of course, because we try to write good words, but they're now actually good and readable by others, which is a real advantage.
L: We think so — and if you prove us wrong, please read it, because my final slide is going to say: read this draft. It's now structured in a way which presents the different parts to the different people who are interested. We talk about what the router has to do; we talk about what the source has to do.
L: One of the things that was exposed by looking at the API was that there are probably two ways to use the information that's in the option. The first way is basically the way Path MTU Discovery has traditionally used it: you get a packet back with some information, and you use it at the IPv6 layer — you directly change the cache entries you have for path MTU for the flows on which you've received these messages.
L: The upper layer then reads the option's data. It can validate that data, because it knows more about the contents of the packet it receives as a probe, and then — highlighted in green — there is a way for this upper-layer protocol to throw away the option data if the packet isn't validated. It seems subtle; you probably have to read the draft to understand it properly.
L: But what this means is: if the upper layer processes the hop-by-hop option data, then you can have a better security method, because the hop-by-hop option data can't be easily injected off-path. If this happens, that simple check — can you deliver it to the upper layer, and is the upper layer expecting this packet with the option attached to it — provides a lot of extra security. We got here by thinking through, in particular, the two different ways you could use the hop-by-hop option.
L: I've tried to go quickly because we've also got slides on the experiments. The hop-by-hop option experiments have so far been based around some IPv6 probes, which look like this: a small-form-factor PC running the probe software. We have six of these probes in various places.
L: Two in the USA, two in the UK, one at home and one elsewhere. We're looking for more people who want to host probes and join in the probing and the discovery of where this works.
L: Right, so the first set of results: MTU. What did we find out? We did a lot of experiments to figure out what MTU we saw running from these probes to sites selected from the Alexa top 1 million. We looked at half a million targets for IPv4 and only 300,000 targets for IPv6, but the picture emerges quite clearly.
L: The first bit of good news is that, from these target locations — which are arguably not ISP edge locations, we know that — most of the paths could use an MTU of 1500 bytes or greater, and that applies for IPv6 also: 96 percent. That's great. So what happens to the remaining four percent? It's a slightly sadder answer: for the remaining four percent, Path MTU Discovery worked well with IPv6 in 0.05 percent of cases.
L: We got a PTB message back from a router on the path, but then we still couldn't make the path work. Okay, these are just broken paths — sometimes the internet isn't working, or maybe the target server we're trying isn't actually online.
L: But PLPMTUD relies on sending successive probes, so you can't discover this in one shot; you have to use multiple probes to get there. The idea behind the hop-by-hop option is that you can send one discovery packet and avoid all that probing, particularly if you want to get to a big MTU — that saves you many, many RTTs. So, good news and bad news. The bad news: if you talk about PMTUD for IPv6, lots of people implemented it, but few routers actually give you the information; you need PLPMTUD.
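The RTT cost being contrasted here can be made concrete with a toy search-style probing loop in the spirit of DPLPMTUD. The 1280/9000 search bounds and the 16-byte convergence granularity are assumptions for illustration, not parameters from any spec:

```python
def plpmtud_probes(actual_pmtu: int, lo: int = 1280, hi: int = 9000) -> int:
    """Count probe round-trips a binary-search-style probing loop needs
    to converge (to within 16 bytes) on the path MTU. Each probe costs
    at least one RTT, since we must wait to see whether it arrived."""
    probes = 0
    while hi - lo > 16:
        mid = (lo + hi) // 2
        probes += 1
        if mid <= actual_pmtu:
            lo = mid          # probe of size `mid` delivered: raise floor
        else:
            hi = mid - 1      # probe lost: lower ceiling
    return probes

# Many RTTs to converge on a mid-range path MTU...
assert plpmtud_probes(1500) >= 5
# ...versus a single probe carrying the hop-by-hop option, which
# returns the path minimum in one round trip.
```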
L: Next slide, then. Oh, we're good — we've got it: larger internet MTU. My ambition in life, partly, is that we get to use these really big frames that Ethernet actually provides, even though the IEEE hasn't standardized them — because we know many line cards actually support large MTUs, we know many servers have them, and many client operating systems support them.
L: So what's our chance of using them? I had data that showed tests from the edge looking at the Alexa top 1 million, and it wasn't very exciting: most of them supported 1500 — very boring. So why might that be the case on end-to-end paths across the internet to web servers?
L: Well, one place to look is the peering database: what do internet exchanges actually advertise as an MTU? You've got to get your packets via an internet exchange in most cases, and the answer is that 38 percent in PeeringDB use a default MTU and 51 percent use a 1500-byte MTU — probably the same advertisement, actually. So only seven percent used a nine-kilobyte MTU.
L: It might not help you in general. We don't have ISP data, and I suspect that if we had it we'd find that many ISPs at the edge give you 1500 or slightly less — we know that from previous work. So this isn't really a complete data set; it just shows where one of the bottlenecks is. If you're into content edge services and you have data for the MTU to get to the content edge — to Akamai, to Cloudflare, etc. — we'd love to talk to you. That would be fun.
L: Again, it's not a bad piece of news; it's probably an expected piece of news. We probably want more data to other types of servers — we presented that data last IETF, so for the next IETF we'll have a better set. We can dig into two of our probes and tell you what happened. One was in the UK, which appears to be quite a good place, because the extension headers were fine — the service provider forwarded them, and we could probe the internet from a home residential router. One of them was in Bob's house.
L: We talked about it last IETF; the operator was Comcast. I'm really hopeful that we can have a conversation with Comcast to find out exactly what was happening. The news was summarized in this quote, which I'll read so that I don't misrepresent what they said: the routers punt extension headers to the control plane; the control-plane bandwidth is limited and can be impacted if excess transit extension-header traffic is forwarded, in the current implementation.
L: That means it's good to filter transit packets that contain extension headers, to protect the control-plane capacity. If you're in v6ops you probably saw this same thing being discussed there, in the draft I put at the bottom of the slide, which talks in more detail about why that might be the case.
L: So we plan to look more at the edge and see where extension headers work at the moment. If you want to use the extension we're advocating in this draft across the general internet, it may or may not work. That in itself is not a disaster: when it works, it provides extra information; when it doesn't, just be careful not to use it on a packet that you really care about — use a probe packet to carry the extension header — and then you could use it in the general internet.
L: And yeah, obviously we're looking for more people to look at this problem space, because we'll be doing extensive measurements in the next period and we'd love to talk to you. Next slide.
L: Oh, next steps — yeah, that's where we got to. We have a working piece of code in VPP that adds the field, modifies it, updates it, and sends it up. So we have one router that gives a proof that we can actually do something here. We'd love more people to implement this — we'd really, really love to talk to you and see if we could do something. The target place where this is most useful is likely within the data center: within a data center a big MTU is often supported, and the switches often have the support.
L: The routers might well have that support too, and this would actually be a big win, letting you immediately verify that the configuration of the devices on the path matches what you expect. So we're looking for people to implement the router side, and we're also looking for people who are interested in the host code — you can use it in a very simple way and get the information out about your path.
L: Would people be interested in talking to us about that? We'll do that experiment as well. Please talk to us — new data for March. Please read this version: if I've said nothing else that's useful, the version now is really readable, and we don't know how to improve it, so please tell us. Is that my last slide?
A: While the experiments go on — do you see any issue with — you know, from my perspective this looks like we're almost done, right? We could push this document forward and publish it. But what do you prefer? What do you see as the most successful path here for getting both deployment and publication?
L: I'll talk first, and then I think Bob should talk, because I don't know whether we agree yet on this. If people want to implement this right now, I would love to talk to them, because the more experience we get sooner, the better everything's going to be. So, no — I would happily delay publication if there was active work to get this implemented and get experiments going with it; then maybe it doesn't have to be Experimental — perhaps I could push you to do a PS, because we'd actually know about it.
A: And if anyone wants to help out with the experiment by hosting probes, I guess, Gorry, you are the contact for that, right?
L: Yep — we will put you in touch with Ana, who's very helpful. Ana has a virtual version of the probe as well, which runs in a virtual machine, so you don't actually have to take hardware.
A: I wanted to check, with regard to those virtual probes: any particular hypervisors, or anything else, before I chase you up on this?
L
That's
an
easy
answer.
Please
email
us!
It
is
probably
better
to
talk
to
anna
because
she
could.
We
can
put
you
in
touch
and-
and
she
will
quickly
tell
you
what's
involved.
She
has
a
quick
fact
sheet
and
you
can
talk
about
which
hypervisors
etc
work.
L: Indeed. Well, I think we would just love to get more measurements. Cool, okay.

A: Thanks, Gorry. Then we're on to the next presentation, I think — unless, Andrew, are you still in the queue? I guess not. Giuseppe, you are up.
K: Okay, can I start?

A: Please go ahead.

K: Hello, everybody. This is an update about our draft on the application of alternate marking. Next slide, yeah. Just a quick recap regarding the methodology, for those who are not very familiar with it: alternate marking is a passive performance measurement technique to measure packet loss, delay and delay variation.
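The basic single-mark idea behind the recap can be sketched as a simulation: the source colors packets in alternating blocks, and two measurement points compare per-block counts to localize loss. The block size and the loss pattern below are illustrative:

```python
def count_per_block(colors):
    """Tally packets per marking block (a block is a run of one color)."""
    blocks, prev = [], None
    for c in colors:
        if c != prev:
            blocks.append(0)
            prev = c
        blocks[-1] += 1
    return blocks

# Upstream node marks packets in alternating blocks of color 0 and 1;
# one packet of the middle block is lost before the downstream node.
sent     = [0] * 5 + [1] * 5 + [0] * 5
received = [0] * 5 + [1] * 4 + [0] * 5

loss_per_block = [s - r for s, r in
                  zip(count_per_block(sent), count_per_block(received))]
assert loss_per_block == [0, 1, 0]   # loss pinned to the middle block
```

Because only counters per color block are compared, the measurement is passive: no extra packets are injected, and the color toggle provides the synchronization between the two counting points.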
K: Additionally, you can add one more bit — that's why we are asking for two bits. The second is the delay bit, which can be used to select some special packets that are dedicated to delay measurement and are easily recognized along the path. Next slide.
K: In this case, the alternate marking is extended for multipoint flows. You can monitor the full network, or you can select a subnetwork to monitor, at different levels. This allows a flexible performance management approach: you can go from monitoring a subnetwork down to a single flow or a single link.
K: We are requesting an option header that can be either hop-by-hop or destination, based on the type of performance measurement you want to enable. In general, the source node is the only one that writes this option, to mark the flows; the intermediate or destination nodes passively receive and read these bits to perform the measurement, but they do not modify them.
K: Next slide. The reason for introducing this flow monitoring identification field is a requirement for implementation reasons: to allow flexible measurement you have to reduce the per-node configuration, in order also to increase the number of flows that you can monitor, and the use of ACLs is limited. That's why it's better to have an identification field in the option. In addition, it also simplifies the counter handling, and it facilitates the data export.
K: There was some discussion about the uniqueness of this field, because it can be set by a centralized controller — the controller can instruct the nodes in order to guarantee uniqueness of this FlowMonID. Otherwise, it can be randomly generated by the source node, but in this case there is some possibility of collision. For example, if we go for a 20-bit flow monitoring identification, there is a 50 percent chance of collision with about 1,200 flows.
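The quoted collision figure follows from the standard birthday-problem approximation, which is easy to check numerically:

```python
import math

def collision_probability(n_flows: int, id_bits: int = 20) -> float:
    """Birthday-bound approximation for randomly generated FlowMonIDs:
    P(at least one collision) ≈ 1 - exp(-n(n-1) / (2 * 2^bits))."""
    space = 2 ** id_bits
    return 1.0 - math.exp(-n_flows * (n_flows - 1) / (2 * space))

# With a 20-bit identifier (~1M values), roughly 1,200 concurrent flows
# already give about a 50% chance that two flows pick the same FlowMonID.
assert 0.45 < collision_probability(1200) < 0.55
```

This is why controller-assigned identifiers are the safer option at scale: random generation degrades with the square of the flow count, not linearly.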
K: Okay, thanks. Regarding the alternatives: you can see all the alternatives that are on the table. You can use the destination option to measure only at the node of the destination address; the hop-by-hop option for every router on the path; or you can combine the destination option with a routing header — for example the SRH — in which case you are monitoring every destination node in the segment list.
K: In general, in many cases end-to-end measurement is not enough, so hop-by-hop measurement could be required. And even for hop-by-hop options — I know there is some discussion about their applicability — there is no problem in any case, because even if an on-path node is not configured to read this option, the hop-by-hop measurement is made only by the nodes that are configured to read it. So this does not affect the measurement.
K: Yeah, regarding the security considerations: there are, of course, some security concerns, but they are limited, because alternate marking implies on-the-fly modification of an option header. In any case the information carried is very limited, and the marking must be performed in a way that does not alter the quality of service experienced, apart from enabling the measurement.
K: In addition, an attacker cannot gain information by using one single monitoring point, but would need synchronized monitoring points, which adds to the difficulty of this kind of attack. The privacy concerns are also limited, because the method relies only on the information in the option header, without any release of user data. Next slide.
K: Yeah, the changes from the last version: we addressed the comment from Ron about the timing aspects of this methodology, because he was concerned about the resiliency to reordering. There is a mathematical formulation in both RFC 8321 and RFC 8889 about that, and we also included a small paragraph that describes how the alternate marking method is resilient to this kind of reordering.
A: All right — okay, thank you. Any comments on this draft? So, Giuseppe, have you made an early allocation request to IANA?

A: It's just to wait until Prague, then. Hopefully we will try to make some progress between now and then. If things come up, I guess the barrier to doing interim meetings is lower now than normal, so that's certainly something we could do if it's required; but otherwise, we will virtually see you in Prague.
L: Sorry, I was very slow pressing the right buttons. No, I just wondered whether people were actually reading the drafts but not able to use the tool, and maybe we just have to try to get eyes on the drafts and get people to say that they've read things.
B
B
A
I
B
A
This
50
response
rate
at
this
moment
right,
so
I'm
gonna.
B
K
A
Few
people
have
contacted
me
on
the
side
here
asking
how
they
get
contact
details
for
anna.
Perhaps
you
you
want
to
send
out
an
email
to
the
list
as
well,
asking
for
probe
volunteers
and
and
provide
contact
details.
B: So I think it does work better to just call the question before or after the talk than to do it during the talk — or else maybe it works better at the end of the session, when people are tired and want to go home. But okay — well, good, I will end this poll. So I think we will thank you all. I think it's been a good meeting, two sessions. It's 2:20 in the morning for me; some of you are in better time zones.