From YouTube: IETF115-IPPM-20221107-0930
Description
IPPM meeting session at IETF115
2022/11/07 0930
https://datatracker.ietf.org/meeting/115/proceedings/
B: Okay, it's 9:30 here in London, so it's time to start the IPPM session, the first session of the week. Welcome, everyone. If you're not here for the IPPM session, then please check which room you want to be in. Tommy is here as well; he's joining us from California.
B: So this is an IETF session, so you should familiarize yourself with this Note Well slide; you will probably be seeing it quite a lot during the week. Basically, everything you say here is a contribution to the IETF, and that comes with considerations. So if you're aware of patents and things like this, you should familiarize yourself with the guidelines and rules around that. It's also important to note that things here will be recorded and can be made publicly available online.
B: So be aware of this, and please maintain a respectful tone toward each other when having discussions and disagreements. There are a lot of documents you can read if you want to further familiarize yourself with any of these matters.
B: You may briefly remove your masks if you're eating or drinking, but that cannot be an excuse for leaving them off for longer periods, so please try to respect these rules; it's for all of our safety. Of course, if you're actively speaking, meaning that you are up at the microphone presenting, it is okay to take off your mask while you are presenting, but that's pretty much the only exception.
B: There are no exemptions from mask wearing, medical or otherwise, so please be aware of that. Also be aware that the required masks should be equivalent to N95 or FFP2 or better, and there are free masks all around here, so if you don't have masks, go fetch some. Right, a bit of meeting management: we're running this through Meetecho. We have the link here; queuing for everyone, including on site, is based on Meetecho.
B: So if you are here, please join the Meetecho session using your mobile device, and then use the Meetecho queuing system to put yourself in the queue. Slides are pre-loaded into Meetecho, so it's possible for anyone presenting to share the slides; there is a button in Meetecho that allows you to share pre-loaded slides. It is possible to share your screen as well, if needed, but ideally that's not needed, and it's not recommended. We have a link here for the notes, and we would like to see if anyone wants to volunteer as a note taker.
E: So I'm happy, as I mentioned in chat, to do the notes, so I can cover that. I would ask that if people are up at the mic, please do clearly state your name. I know you'll also be queuing, but if there's conversation back and forth, we'll make sure that the remote people can tell who's talking.
B: Okay, those are the administrative pieces, so let's go on to a little bit of document status. Right now we have the IOAM configuration state and IPv6 options drafts in IESG review, and we have just done a second working group last call for the explicit flow measurements document. I don't think we got any pushback on this second last call, so it looks like this one will likely make progress.
B: Okay, we have a pretty packed agenda today; it's divided into three sections. We're going to talk about adopted working group documents, and we have a set of documents here. We will start with a newly adopted one, which is performance measurement on link aggregation.
B: Then Greg will be doing a presentation on PAM, and after that we move into the section which we call the lightning talks. Here we have a bunch of documents, a bunch of presentations. They should all be done as a one-slider, similar to but shorter than a HotRFC talk: just present your concept at a high level and see if people are interested in collaborating with you further offline.
B
So
with
that
said,
I
think
we
can
get
started
and
we
have
pm
on
lag
first.
C: Let's firstly recap the motivation. A LAG provides an aggregation of multiple member links, and we want to measure the performance of each link. However, existing active performance measurement methods run a single test session over the aggregate without knowledge of each member link. This makes it impossible to measure the performance of an individual physical member link, so we followed an idea similar to RFC 7130, BFD on LAG interfaces, to measure the performance metrics of every member link of the LAG.
C: Multiple sessions need to be established between the two endpoints that are connected by the LAG. These sessions are called micro sessions, and the micro sessions need to be associated with the corresponding member links. For example, when the reflector receives a test packet, it needs to know from which member link the packet was received and correlate it with the micro session.
C: This shows the OWAMP and TWAMP extensions, including the control messages and the test packet. We add two new control messages, Request-OW-Micro-Sessions and Request-TW-Micro-Sessions, and in the test packet we add a Sender Micro-session ID and a Reflector Micro-session ID. Both IDs are locally assigned. Next.
C: There were some discussions during the adoption call about an interoperability issue, because TWAMP Light and STAMP differ in packet format: combining a TWAMP Light sender with a STAMP reflector, or a STAMP sender with a TWAMP Light reflector, may cause issues. Our conclusions are: firstly, there is no requirement for such a deployment; this only happens when there is a misconfiguration, and if there is a misconfiguration, the existing mechanism described in this draft can detect it.
B: Thank you. I thought we had Greg in the queue, but maybe not.
G: Yeah, I was too quick to jump in the queue; the point was already resolved. Just a note: the use case and scope of this work is analogous to that of the BFD over LAG work which, exactly as Frank noted, is only for a single link between aggregation points.
B: Okay, thank you. If we have no more comments on this one, we can move on to the next point in the agenda, which should be the one-way capacity measurement. Al, do you want to drive the slides yourself, or do you want me to share them?
E: We need your audio, Al. We can't hear you.
H: I'm going to get rid of the video now, just because you can't really see me anyway; it's too dark here. I'll try to share my screen.
H: All right, so can you see the first slide? Yes? Great, all right. So this is the talk on the test protocol for one-way IP capacity measurement, and it turns out we can do a lot more than capacity measurement with this protocol.
H: So I encourage you to think about that, but what we're responding to this time is the security directorate early review, which is now pretty much fully implemented in the draft, and an exchange with Roman Danyliw, the SEC area AD; we want to have a little encrypted-mode discussion. So, moving right along: I sent this list to the IPPM list describing the extensive changes in the protocol.
H: The big changes are the newly defined modes of secure operation, and I'm going to go into one, two, and three here more extensively in the next picture. Things you can't see in the pictures are the new key management and firewall configuration subsections, whose outlines basically align with each step of host processing for this protocol, so that should make it pretty easy for people to read and understand as we work our way through.
H: We've got new security considerations, where we discuss the attacks going back to IETF 114, and a really expanded IANA section, where we've got new registry groups to support the expansion of the protocol. So we're on slide three now.
H: Basically, what I want to say here is that, for background, in the control phase, which is the top half of the slide, we're using two kinds of packet exchanges, a setup exchange and a test activation exchange, and there is a newly required authentication mode for the control phase.
H: We've added the authentication digest and its processing on all the requests from the client to the server and all the replies from the server to the client, in both the setup and test activation exchanges, so that's a very complete authentication mode. We've also got an optional authentication mode which takes care of one of the messages of the data phase. Going down below the control phase line into the data phase, we typically have our load PDUs, which we send at really high rates.
H: We can go up to 40 gigabits per second now in our running code, and we expect to get feedback PDUs back from the server, for example. The key information we're trying to authenticate here is the measurements that we conduct at 50-millisecond intervals and share back in the feedback or status PDUs, depending upon the role.
H: And that's an important piece of information too, so we can authenticate it; it's an optional mode. It might affect some of the round-trip measurements we make here, and that's why it's optional. No big shakes there.
H: The feedback we got, which we talked about at IETF 114 and then decided again not to act on, is to add the authentication digest and processing to the load PDUs. We're sending lots and lots of packets there, very often at a rate that the CPE can barely handle without doing anything extra, like authentication or encryption. When these are lightweight CPEs, we can really get in trouble there, and we decided there's really not that much information here to protect, so we're not going to add the authentication digest and processing to the load PDUs. That's our choice for now.
H: We've also got the completely unauthenticated mode, so that's available as well for folks who want it. Moving on to slide four: Brian Weis, who very kindly provided the early secdir review, suggested that we might use DTLS during the test setup and activation.
H: However, as I've said before, the information we're exchanging there has very limited value: a test is starting, and we've got the configuration of the test. You could easily fingerprint this and reveal that it's a measurement; there are no results in the control phase. When we talked back and forth with Roman, it's really the measurement results, the send-rate structure, that's the critical information that's exposed, and DTLS doesn't help with that.
H: We can only use it during the control phase; the retransmissions and ordered delivery it provides are not really helpful in the control phase (sorry, the data phase). So the most valuable information is communicated where DTLS can't help us, and we're not going to do this; that's our choice right now. Roman, the security AD, asked a series of very pointed questions trying to lead us to the truth, which was very kind of him, and I replied to that on the list.
H: So a simple solution here is to encrypt all the things: operate the protocol within an encrypted tunnel. However, when you have encryption like this, it's a bilateral agreement; the tests are point to point, so the users can choose their own encrypted tunnel and their keys and set it up as they desire. There's plenty of support for independent tunnel implementations in Linux hosts; there's even hardware support in smart NICs, data center equipment, and so forth.
H: So we strongly take the position here that there's no need to modify the protocol to use an encrypted tunnel; we'll simply say here's a mode where you use the protocol and set up your own encrypted tunnel. That turns out to be an advantage: some users may want to characterize their own measurements in the tunnel technology they chose, so let's leave the choice to the users.
H: An emphasis in IPPM is accuracy, so we'd even recommend that some people who are going to use a tunnel run some tests unauthenticated first, to see if the tunnel has a negative impact; you can purposely characterize the tunnel encryption this way. Very likely the capacity you see with an encrypted tunnel is less than in the unauthenticated mode, or even the recommended authenticated mode that people can use; that's still an option. So, let's see, there we go.
H: Oh yeah, and in the running code we've got the MTU set at a fairly conservative value.
H: Our case is that if people want to use encryption, they can set it up themselves, between consenting adults, and that's what we would describe in the draft. We don't see a need to modify the protocol to do this; it'll work just as well. So that's what we'll do in the draft, and this is the kind of thing I just wanted to talk about for a few minutes, if folks have comments on this.
H: Oh, it performs retransmissions and ordered delivery. We make measurements on the feedback packets and, obviously, the test stream, so we don't want the feedback packets retransmitted; we measure round-trip time on those, and we don't want them reordered.
So basically, the DTLS features, which are really nice and might make sense in the control phase, don't help us at all when we're making our feedback measurements for round-trip time on the feedback PDUs. Does that make sense?
A: All right, all right, thanks.
H: So here's the wrap-up; I've still got a couple of minutes here. I think we'll have some more SEC area, maybe SEC directorate, interactions. We'd like to agree on a proposal for a fully encrypted mode, but we'd like to implement a working-group-agreed proposal. For us, the bottom line is that, as far as we can tell, even when we put encryption in our protocols, it's not widely activated in the measurement protocols that are used at scale. You might see it going on between consenting parties, and that's fine, but OWAMP had an encrypted mode, TWAMP had an encrypted mode.
H
We
have
600
000
T,
wamp
instantiations.
We
don't
use
encryption
in
any
of
them.
So
look
if
we,
if
we
have
have
some
agreement
on
this,
you
know
we
could
spend
another
draft
and
maybe
get
a
working
group
class
call
early
to
2023.
You
know,
maybe
sooner
if
the
encryption
solution
is
simple,
it's
really
just
about
wording
and
describing
it-
and
you
know
I,
wanted
to
note.
H: Finally, we've done lots of measurements with RFC 9097, the capacity metric and round-trip time under working load; those are the kinds of things we've been sharing on the list very recently, and it points to the fact that this protocol can be used for other things: working-load assessment, capacity under working load, and so forth. We've got a lot of capability here. So, that said, I don't really see this as scope creep; I see it...
A: I mean, it's meant to go over UDP, so you would use DTLS if you could; on reliability, there's a retransmission mechanism for the handshake, which is a different thing. I think, aside from security, the transport guarantees are similar to UDP, so I'm not sure that the premise of this is correct.
H: Okay, okay, all right. So DTLS would be applicable then, but I'd make the same case: if you really want to encrypt everything, then the simple solution is to put it in a tunnel.
A: Yes, certainly if you want to encrypt the probe traffic, you would need to do that, but obviously, as you are well aware, that potentially can influence the throughput you can generate, which may or may not be a problem.
A: Yeah, I mean, it seems a little weird to say (I'll be brief, sorry, I know I'm out of time) it seems a little weird to say, well, you can do the test without encryption to verify that encryption is not messing up your measurement, and then do the test with encryption, because then the data is passing unencrypted, right?
H: Well, if you did that once or twice before you started your tunnel, then you'd have, you know, those are the kinds of things... it's a really simple comparison; there's not that much information that's going to be exposed, and you're doing it quickly, potentially on a port that people would have trouble finding.
A: Okay, so if the premise is that you're using some other link, that you're running it on localhost or whatever, so you're running it through the encryption without actually putting it out on the wire, I think that's fine. Anyway, I've taken enough time, so thanks, bye.
E: So I wanted to clarify the next steps here around encryption, since it seems like from this you're proposing, you know, just put it in an encrypted tunnel if you want, and I guess the other thing on the table is, if DTLS does work, you could just say, you know, slap in DTLS.
H: Yeah, a good question. I'm trying to avoid protocol changes at this point, and I think it's really simple: people can choose their own tunnel; they're likely to do that anyway, and they're even likely to have the encrypted tunnel up and running anyway, between consenting adults. So the answer is really simple: if you want to make these measurements encrypted, run it inside a tunnel, and there are lots of different tunnel choices. Why should we get into those details? Okay.
H: Okay, well, thanks very much, everybody. I appreciate the corrections and the feedback today.
K: Let's go to the next slide, please. Okay, so in terms of implementation, we're changing to using eBPF. We used to have patches to the kernel; eBPF runs in user mode and we can use it on more platforms, so that's what we're trying to do for the implementation. We can also support Windows: we have it done for PDM v1 and have been testing it, and now we're working on doing PDM v2 in eBPF. Okay.
K: Our proposal for PDM v2 was also accepted into the IAB workshop on managing encrypted networks, a very interesting workshop, and we think this might be a very good venue, because certainly this is a big problem. Our thought is, ideally, once we have a stable implementation, to co-locate PDM v2 at various points, potentially with some of the CDN providers. Next, please. So the bigger question for all of this is that it's all well and good to define extension headers...
K: We have done some testing using CDNs and cloud providers, and one cloud provider and one CDN are working very closely with us to try to fix the problems. Because there are problems: for example, within a CDN you'll get to the edge over IPv6, but internal to the CDN network you are traveling over IPv4, and suffice it to say that if you're traveling over IPv4, you're not likely to be supporting extension headers. So let us fix that problem first.
K: So that is our big effort right now. We hope you agree that IPv6 should be supported in CDNs. So next, please!
K: So as far as PDM v2 itself, there are three parts to the communication. There's the secondary-to-secondary part, which is what this particular draft defines; that is an independent protocol in itself: it is the PDM v2 encryption. But somehow you have to get the keys, so you need a control protocol, or what we call a registration protocol, to create a common context and so on. That is what we are calling primary-to-secondary and primary-to-primary.
K: If we split the drafts as we would like, then, just in our opinion, possibly we're ready for last call, so I would like to get some thoughts.
E: If you were splitting this, we would need to do some kind of adoption call, at least, or a consensus call within the group, to say we want to have more documents for this. I'd also be curious to know how much of the current document would be generic between the different modes; how deeply entangled would these two documents be, and how much are we buying by separating them out?
K: That's a really good question. I actually think that we should take that out: in the secondary-to-secondary draft, what we have is some of the security requirements for the primary-to-secondary and primary-to-primary flows. I would suggest that we take all of that out, as I say, keep it clean, and then start the second document, so I would say that the amount of repetition should be relatively minimal.
K: The reason I say this is that, well, I guess I myself get really tired of reading 60-page RFCs; it's hard to keep your head straight. But I totally get your question. Maybe what we should do, I don't know what everybody thinks, is maybe just go ahead and create a draft for...
K: Sure, no, totally get that. So why don't we put together at least an outline of what sections there should be and how the two documents would look split apart, and then people can say, no, you should really have them be one document. The other consideration, too, is that if scalability is not an issue for an implementer, you can just go ahead and separately implement the secondary-to-secondary part; really, if you just do secondary-to-secondary...
K: Probably, if you increase to more than a handful of clients and servers, you will probably be shooting yourself in the foot by doing that. But having said that, if you do want to do it for a very small environment, that's certainly possible. That was another part of our thinking.
K: Okay, great, so we'll go ahead and give you guys a prototype. Thank you. Perfect.
I: My name is Stuart Cheshire. I am presenting on behalf of Christoph Paasch, who is the primary author on this document. He's not able to be here in person this week, and it's 2 A.M. in California, so he asked me to step in for him. These are his slides.
Let's go on. This draft is a working group draft, and Christoph is working on a couple of updates, which I'll describe in more detail.
One is that we discovered a flaw in the algorithm: it can terminate too early and underestimate the bufferbloat. The second issue is a suggestion for a well-known URI. So if we move to the next slide, I can describe this.
So the goal of the responsiveness test, to remind everybody, is to measure the round-trip delay when the network is actually being used. Most of the time right now, when we run a ping command on the command line, we're testing the network when it's idle, which is not very interesting, because I want to know how the network operates when I'm actually using it, not when it's idle.
In this example that Christoph made, let's assume the bandwidth-delay product is 32 megabytes and each TCP connection has a receive window of four megabytes; that's why it takes multiple connections to fill up the pipe. So the test starts with four connections, and they each grow to their four-megabyte receive window. We have 16 megabytes in flight, but that's still not filling the pipe, so on the next slide we add four more connections, and we observe that the aggregate throughput goes up.
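The arithmetic behind this example can be written down directly; the 32 MB and 4 MB figures come from the slide, and the ceiling division is just the general form of the same calculation.

```python
# With a 32 MB bandwidth-delay product and a 4 MB receive window per TCP
# connection, it takes 32 / 4 = 8 parallel connections to keep the pipe full.
def connections_to_fill_pipe(bdp_bytes: int, rwnd_bytes: int) -> int:
    # Ceiling division: a partially used window still needs a whole connection.
    return -(-bdp_bytes // rwnd_bytes)

MB = 1024 * 1024
needed = connections_to_fill_pipe(32 * MB, 4 * MB)  # 8 connections
```

This is why the test's first two rounds of four connections (16 MB in flight) still do not fill the pipe.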
I: So that's great. Now, we can tell from this graph that we've hit the limit, but of course the test doesn't know that, so the test adds four more connections, and at this point we don't get any more throughput, because we have hit the capacity of our bottleneck link. At this point, the extra traffic we introduce into the network is sitting in a queue on entry to that bottleneck link, and that's what we want to measure: we want to observe the depth of the bufferbloat in this network.
I: How much is it willing to delay our packets and make them late? With the current test, this is where it terminates, because it sees that the latest round of adding connections did not increase the throughput significantly, so it concludes: I've now filled the full capacity of the pipe; my measurements are done.
You can see from this graph that it may have filled the full capacity of the link, but it has not yet filled the buffers, and in real-world operation it's very likely you would fill the buffers until they overflow. So this risks underestimating the true bufferbloat of the network. If we move on: the solution to this is very simple, and it is described in the issue tracker on GitHub; all of this is tracked on GitHub, so we welcome any suggestions from other people who notice mistakes or have ideas for improvements.
I: The solution is very simple: instead of terminating the test when the throughput stops going up, we monitor both throughput and delay, and terminate the test when both of those have stabilized. So, a simple fix there; that's going in the next version of the draft. The next suggestion is on the next slide; sorry, I would have used the clicker if it was working.
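The revised stopping rule described here can be sketched as follows. This is a minimal illustration, not the draft's algorithm: the tolerance value and window length are assumed parameters chosen for the sketch.

```python
# Revised stopping rule: keep adding connections until BOTH the aggregate
# throughput and the round-trip delay have stopped moving.
def stabilized(samples, tolerance=0.05, window=4):
    """True when the last `window` samples all lie within `tolerance`
    (as a fraction) of their mean."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    mean = sum(recent) / window
    return all(abs(s - mean) <= tolerance * mean for s in recent)

def should_stop(throughputs, rtts):
    # The old rule stopped on throughput alone; the fix requires both
    # throughput and delay to have stabilized before terminating.
    return stabilized(throughputs) and stabilized(rtts)
```

With this rule, a flat throughput curve combined with a still-climbing round-trip delay (the underestimation case described above) no longer terminates the test.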
I: This is another suggestion we had. Because this responsiveness test is basically built on HTTP GETs and PUTs, any web server is a suitable platform for running this test. So there's a suggestion that we have a well-known URI for a configuration file in JSON format, so that any web server (and maybe in the future this would be built into standard web servers, so that they all support doing responsiveness measurements) can host it, and the way you find this out is by querying the well-known URI and seeing if you get back a configuration object. Next one.
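A sketch of the discovery step just described: the client fetches the well-known URI and treats a valid JSON configuration object as the signal that the server supports responsiveness measurements. The JSON keys and URL values here are illustrative assumptions, not the draft's actual definitions.

```python
import json

# Hypothetical example of the JSON configuration object a server might
# return from its well-known URI (keys are illustrative).
EXAMPLE_CONFIG = """
{
  "version": 1,
  "urls": {
    "large_download_url": "https://example.com/large",
    "upload_url": "https://example.com/upload"
  }
}
"""

def parse_responsiveness_config(body: str):
    # Returns the configuration dict, or None if the response is not a
    # valid configuration object (e.g. an HTML error page).
    try:
        cfg = json.loads(body)
    except json.JSONDecodeError:
        return None
    return cfg if isinstance(cfg, dict) and "urls" in cfg else None
```

A server that returns anything other than a parseable configuration object is simply treated as not supporting the test.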
I: We have a couple more issues tracked on GitHub that are in the pipeline. One is what I described earlier: you keep adding connections until the results stabilize. Sometimes they don't stabilize, and it can be hard to tell if the little fluctuations are significant, so because we don't want the test to run forever, sometimes we have to terminate before we're totally confident. The other issue:
I: We've been talking to home gateway vendors, companies that make NAT gateways and Wi-Fi access points, and one of the really important things for diagnostics here is to narrow down where the bufferbloat is. Anybody in this room can pull out their iPhone and run the Apple responsiveness test, or run the Ookla speed test app, which now also includes a similar working responsiveness measurement.
I: So now you know you have horrible bufferbloat, but where is it? Do you blame the ISP? Is it your Wi-Fi access point? The next step you want to take is to diagnose that. We've had interest from equipment vendors in hosting a test endpoint on the home gateway, so you can run a test from your iPhone to the home gateway and tell if maybe your Wi-Fi access point is where the bufferbloat is occurring, or the home gateway can test upstream.
I: A lot of these home gateways, for cost reasons, are not built with more powerful CPUs than they really need to get the job done, and we've had strong feedback from them that doing TLS at a gigabit per second is beyond the capabilities of their hardware. I know my iPhone can do it and my Mac can do it, but a little fifty-dollar home gateway box can't, so there is a request to support unencrypted capacity measurements as well.
I: That's something we're going to work out how to put into the draft. It's not totally obvious how to do it, for the same reason that HTTP and HTTPS don't run on the same port: there isn't a good in-band way to negotiate whether we're using encryption or not. But we will sort that out, and I think that's our last slide.
I: So, yes, I will answer any questions if people have any; of course, file issues on GitHub, and Christoph will be available via email to answer any questions too.
H: Thank you, Stuart, for stepping in for Christoph; I really appreciate it. I'm wondering if you're familiar with any of the measurements we've shared on the IPPM mailing list recently. They tend to point to the underestimate of capacity, and therefore of working latency, that you were just talking about.
H: What we saw, basically, was that the RFC 9097 IP-layer capacity measurements produced higher latency, because we sort of have a stronger method of producing the working load and expanding the bufferbloat in the channel that we're measuring. So, to be clear about the numbers we saw:
H: On a one-gigabit downlink, we saw the full gigabit measured with the IP-layer capacity tool. We saw about 900 megabits with responsiveness and the capacity it reports, and when we tried the Go responsiveness version, the capacity was very much lower than that, on the order of about 120 megabits per second, when it should have been a downlink capacity of about 940 megabits.
H: We definitely saw the kind of problem that you're saying you're going to attack this time around.
H: The latency is a good thing to monitor, but it may be that there are faster ways to get there and to measure the working latency than adding one connection at a time, so I just encourage you to think about that, and also to consider the measurements that have been shared. Thanks.
I: I wouldn't have stated it quite that strongly. This software is still in development, and this draft is draft-01, so it's a fairly young draft. The existing code would look for the throughput to stop increasing as its sign that it had filled the link, and actually it was me who pointed out the issue.
I: I was thinking about this, and I said: just because the throughput is no longer increasing doesn't mean you've filled the buffers, because the round-trip delay might still be going up. That's not a particularly profound observation; it was just something I realized, that we should wait until both of these measures have stabilized. So I don't think that's some fundamental flaw in the way it's measuring capacity; it's just a slightly different condition in the while loop about when to stop.
I: As to your comment that it was only reporting 900-something megabits on a gigabit link: can you explain, or do you have a quick summary of, what you think was causing it to not get the full gigabit?
H: Well, in general, TCP with multiple connections has some instabilities in the throughput that it measures, and without a lot of the averaging that some of the companies use, you tend to get...
H: You know, variations from measurement to measurement, and also some interaction problems that I think Matt Mathis can explain to you very well. So the issues here are, in some sense, choices, and the fact that you're adding additional information to the algorithm now may work out very well, but right now...
H: The capacity estimate is quite low, and that's resulting in a working latency which is low as well, at least according to our measurements. So I'm just trying to share some experience that non-developers have had, and it's been shared on the list, so I hope that you guys will take that for what it's worth. Thank you.
I
It sounds like your feedback, your observation, is that you disagree with using TCP with CUBIC congestion control, and it should be something else, some other transport protocol, that's filling the network.
H
H
TCP, when you had a lot of TCP connections together, there's an instability that results, and, like I said, Matt Mathis can explain that to you very well. In fact, he explained it very well in an RFC that we wrote here in IPPM some years back on model-based metrics for measuring bulk transport capacity.
A
Google. Maybe this is a dumb question, but you mentioned the use case of measuring bufferbloat to the Wi-Fi access point, but isn't that, like, a single hop? Where's the bufferbloat in that case?
I
The bufferbloat is in the Wi-Fi chipset in the access point. That is a big black hole that sits on packets for half a second. So when data comes in over your cable modem and over your gigabit Ethernet into the access point, for most people, unless you're sitting right on top of the access point, your Wi-Fi rate is less than a gigabit.
I
So if you're downloading data, a software update, an app, or you're watching a Netflix video, your Wi-Fi access point is the slowest hop, and that's where the queue builds up, and depending on who makes that access point and the chipset inside it, you might have half a second of queuing, two seconds, five seconds.
I
I have been doing some tests on, I won't say which vendor, but one particular access point that has five seconds of buffering. So you can measure it: a packet comes in, sits in RAM chips for five seconds, and then gets delivered to the Wi-Fi client. So Wi-Fi access points right now seem like they're the biggest source of delay on an end-to-end internet path.
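The delay figures quoted here follow directly from buffer depth and drain rate: a standing queue of B bytes draining at R bits per second adds 8B/R seconds of latency. A tiny sketch (the numbers in the usage below are illustrative, not measurements from the talk):

```python
def queue_delay_s(buffer_bytes, drain_rate_bps):
    """Worst-case standing-queue delay: a full buffer of `buffer_bytes`
    drains at the bottleneck rate, so a packet arriving behind it
    waits 8 * bytes / rate seconds."""
    return (buffer_bytes * 8) / drain_rate_bps
```

For instance, 25 MB of buffer in front of a 100 Mbit/s bottleneck yields two seconds of queuing delay, the kind of figure discussed above.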
I
Both of those are envisaged. So most customers have an integrated box: a cable modem that's cable modem and NAT gateway and DHCP server and Wi-Fi and an Ethernet switch all in one box. There are other people, probably some of the people in this room, who have more of a component system, with a Ubiquiti UniFi Wi-Fi access point that's just a Wi-Fi access point, and some other box that's being the NAT gateway and so forth.
I
So each of these bits of hardware could run a little web server serving the right URLs for doing the responsiveness test, and that would give the client on your device multiple vantage points in the network to hit, kind of reaching further into the network to narrow down where the bufferbloat is happening. Thank you.
I
I mean, to give one example, it doesn't help the customer to call their ISP to complain about bufferbloat if the bufferbloat is actually in the Wi-Fi access point they bought themselves. But we have also measured cases where the cable modem CMTS equipment at the ISP is configured, depending on what data rate you're paying for, 50 meg, 100 meg, 200 meg, so that the queues hold two seconds of buffering at whatever rate you're paying for. So again, under load.
I
Every packet you receive sits in RAM chips at the ISP for two seconds, while it sits and twiddles its thumbs, and then it comes out. So you've got a gigabit in, two seconds of going nowhere, a gigabit coming out the other side. So you could have two seconds of lag in the ISP, you could have five seconds of lag in the Wi-Fi access point, and knowing which is afflicting you is really important to know what to do about it.
E
All right, thank you. So first, that's not the reason I got in the queue, but just to call back to the conversation with Al: I would encourage you and Chris and the others to look at the stuff that Al wrote to the list, because that does have a lot of details. And, you know, I certainly think there's room for different methodologies, and the point of this is not to measure capacity, but at least understanding the comparisons and the differences in the measurement results would be good.
E
So what I came in for was just about the unencrypted mode. Looking at the JSON, because I assume that your well-known URI will contain the JSON that's currently defined in the document, it does seem that that, you know, includes the URLs of the different tests.
E
I
We were, I think, thinking too much in binary, all or nothing, black and white: that it's either clear text or it's TLS. And your point is really good, which is that the problem is not that these devices can't support TLS; it's that they can't support TLS at a very high rate. So, yes, thank you, I think that's a good insight. The configuration blob will always be fetched over TLS, but the actual high...
L
E
Yeah, and just one more comment off of that, of just something that we'll need to change. Currently, your test is always done explicitly over HTTP/2. Of course, when you're unencrypted, that will not be an option, so we'll have to go to HTTP/1, which will have different flavors of performance as well. So that's going to need to be a consideration.
I
Okay, let's think about this a little bit more. I know the intention of HTTP/2 is that security is not optional, and for traffic on the public web that makes perfect sense. For this kind of benchmarking traffic, my understanding is that there's nothing about HTTP/2 that says you can't run it directly over TCP; you're not supposed to, because it's supposed to be tied to TLS, but.
E
Right, I guess, I mean, you can, but I don't know if we want to be encouraging implementations, for this reason, to build the unencrypted HTTP/2 mechanism, since it's something that is not normally supported, and so you need more exotic implementations, or more knobs that some of the languages we may have may not support today. Yes.
I
Let's think more about the right way to do that. It's not as simple as just saying we'll use HTTP/1, because part of the way the test works is that it's fetching large chunks over the H2 connection, and then in the middle of that it inserts a GET request for a little one-byte object, and that's how we're measuring, on a working, busy connection, how quickly it can deliver some other piece of data on demand. And that ability to interleave multiple requests is not there on HTTP/1.1.
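The working-latency idea just described, timing small probe GETs issued in the middle of bulk transfers on the same connection, is what the test reports as responsiveness in round-trips per minute. A rough sketch of the reporting arithmetic (the function name and the sample values below are illustrative):

```python
def responsiveness_rpm(probe_latencies_s):
    """Convert latencies (in seconds) of small probe GETs issued on an
    already-saturated connection into round-trips per minute (RPM):
    60 divided by the mean probe latency."""
    if not probe_latencies_s:
        raise ValueError("need at least one probe sample")
    avg = sum(probe_latencies_s) / len(probe_latencies_s)
    return 60.0 / avg
```

A 250 ms working latency, for example, corresponds to 240 RPM.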
E
Yeah, all right, that sounds good. And sorry, one final thing, just a curiosity question, because we were talking about HTTP/2 versus HTTP/1: in general, the document and this presentation talk about having multiple TCP connections. How are you inducing the HTTP stacks to create multiple TCP connections when they're doing HTTP/2? Because generally it will try to use one.
I
That's a good question; I don't know the specifics, so I'll give you my best understanding of this. For the tools on macOS and iOS, I think it's using the URLSession APIs, and most applications make one URLSession and use it for everything; I think this code is explicitly making multiple URLSession objects, and the connection pooling is per object. As to what the Go responsiveness test does...
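The per-object pooling behavior described here can be modeled with a toy class. `ToySession` below is a stand-in invented for illustration, not Apple's URLSession API, but it shows why four requests through one session coalesce onto a single connection while four separate session objects do not.

```python
class ToySession:
    """Toy stand-in for an HTTP client session object: connection
    pooling is per session object, so requests to the same host
    through one session share a connection."""
    def __init__(self):
        self._pool = {}

    def connection_for(self, host):
        # Reuse the pooled connection for this host, creating it on
        # first use -- the coalescing behavior discussed above.
        return self._pool.setdefault(host, object())

# One session object: repeated requests coalesce onto one connection.
one = ToySession()
shared = {id(one.connection_for("example.invalid")) for _ in range(4)}

# Four session objects: pooling is per object, so four connections.
many = [ToySession() for _ in range(4)]
separate = {id(s.connection_for("example.invalid")) for s in many}
```

This mirrors the point made next in the discussion: a client can make four API calls and not realize they were coalesced into one connection.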
E
I
Yes, yeah, that's a good point: a client could make four different calls to an API to get a URL and not realize that, under the covers, those are all being coalesced into one connection. Okay.
B
Great, we have a little bit of time, so I just want to insert myself and ask one more question.
B
I was wondering if you have considered these tests in certain mobile scenarios. So imagine that you're running this from your device on a mobile network and you're on a high-speed train, say. So you start your capacity measurement or your responsiveness measurements while you're in one cell, and then you might hand over to a different cell with completely different properties, for instance going from a 4G cell to a 5G cell.
B
Maybe in the first cell your algorithm thinks that you found the capacity and you start measuring the delay, but then you hand over to some cell with much higher capacity, and that would probably lead to the delay dropping. Is this something your algorithm is currently considering? I mean, if you start to see a delay build up, but then the delay drops, would you then kind of try to add more throughput into the...
I
I think the simple answer to your question is: we have not considered those kinds of changing environments. One of the things we have considered, just for usability reasons, is that our goal is for the test to complete in about 10 seconds. This is the kind of test where you pull out your phone, you run the test, you wait, you see the answer; it's not something that takes five minutes to run. So by keeping it short, I think that reduces these kinds of opportunities for conditions to change.
I
I
Suddenly the whole environment changes. The same could go if you're sharing capacity with some other client and they finish their download and suddenly there's twice as much share available for you. I think it's always possible to construct scenarios where those changes happen at just the wrong time and you don't notice the change has happened. I think the answer there is, at the human level: if you run the test two or three times, hopefully you get consistent results.
I
The other thing to remember for this test in particular is that the focus of this test is not how many megabits per second you are getting, because there are many, many tests that already do that; everybody runs iperf, and there are many other ways of testing capacity. The focus of this test is to work out how deep those dark buffers are, and our belief is that the depth of buffering is actually not something that varies with time of day; it's not something that varies with traffic.
I
It tends to be a configuration option of the network hardware, and as such it's relatively stable. Now, if you roam from one mobile cell to another one, then you might inherit a completely different configuration, just like if you were to roam from one Wi-Fi access point to another: you could end up in a dramatically different network environment. I mean, to give a silly example, right.
I
You could have one Wi-Fi access point plugged into a one-megabit DSL modem, and you walk next door and roam to an AP that's plugged into gigabit fiber, and a lot of the time that roaming is completely transparent to the application. If your IP address hasn't changed and the SSID network name hasn't changed, the application doesn't even know; it's just that suddenly the configuration is different.
I
J
Hi, good morning. So this one is going to be pretty fast, actually: we have no change since my last IETF. The reason is simple: we were waiting for the implementers to give feedback on the implementation, as we agreed, so I think we're going to wait a little bit more. I think also that it doesn't change anything about the working group last call, so we're going to postpone it, maybe, I hope, to the next IETF.
J
B
J
Well, actually, there was a student for that, and it turned out that it was quite unreliable; you know, things happen. And now we have another student, who is a master's student at the University of Liège, and this is his master's thesis, so I, for sure, hope that it's going to be okay this time.
B
F
C
Okay, thank you very much. Hello everyone again, I'm Tianran from Huawei, and I will report the progress of the IOAM YANG data model.
C
We received comments from the working group last call, and thanks to Tom and Greg for the very useful discussions. There are some minor issues; we will update the new revision. And there is a major issue raised by Tom, to add some examples, which we are going to accept, but we can still discuss later whether it is necessary, and the next stage.
C
That means IOAM DEX and IOAM flags are excluded from this YANG model. And then the first question: whether to include the exportation of IOAM data. My suggestion is that the YANG model should only focus on the configuration, because the export may not use the YANG model; for example, IPFIX could be used. On the other hand, the export should be IOAM-independent; that means there are other measurements that could generate the same metrics, like delay, loss and so on.
C
The last question: should the configuration of IOAM over IPv6 or NSH be part of this document? I would say yes, and I do not think we need to augment the ACL model here. The filter defined in this draft is only used to identify the target flow, which does not yet have an IOAM header. When we match the packet, it will enter the IOAM process, and we can use the protocol entry in the YANG model in this draft to find the IOAM instruction.
C
B
F
I'm going to go and ask the same questions again that have already been asked. So, in many cases people supply examples for YANG models. We can go and create a full example as part of the document; feasible, doable. But we also have to recognize and acknowledge that there is no implementation yet, so it would be really a made-up example that we put into the document, and I'm just wondering how useful that is.
F
F
If we look at other management documents that we've done just recently, like comp state, there are no examples in there at all, right? So I'm just wondering whether people have a feeling for, or some guidance on, whether we want to put a full example in, whether it's useful, or whether we should just go and leave it for now.
F
Recognizing that this is, like, early stages and we're waiting for a reference implementation. And then the second thing, which John already mentioned: I would be very much in favor of following the suggestion that Cameron had, i.e. we focus the scope on the core IOAM documents, so 9197, and that way avoid forward references to other documents that are still in flight, which make it a little, like, nebulous what the scope of this document would be.
G
Well, I understand the desire to move forward and progress this document, but I see that IOAM DEX is an important and very much needed mode of IOAM, especially in environments with constraints, because it allows for export of operational state and telemetry information out of band, or over the management plane, for example. So I'm concerned that there is no effort to drive this work, and I don't see any draft, individual or otherwise, of any maturity that addresses it.
G
So narrowing the scope of this draft at this point seems like a risk that the IOAM DEX YANG model would never be done in time. I can point to work and discussion we have, for example, on MPLS architecture enhancements using network actions, where IOAM DEX is very attractive, and especially, for example, in deterministic networking within the MPLS data plane, where the resources allocated for the flows are explicitly reserved, so that even there the operational state and telemetry information are important.
C
Yes, then there will be several choices. One is just what I suggested: exclude IOAM DEX. Or, on the other hand, maybe we need to have a consensus on how to operate IOAM DEX, and where to detect the ACL at the transit node.
G
Yes, I agree. I think that having this discussion before we make a decision on how to progress this work will be very useful and helpful, not only to the group but to the larger community that is looking to use IOAM technology in their use cases and networks. Thank you.
M
Good morning, everyone. My name is Rakesh Gandhi, and I am presenting the STAMP extensions for SR networks on behalf of the authors listed. Next slide, please.
M
So the agenda is: the updates that we have made in revision zero six; a brief highlight of the STAMP-based work that's being done in other working groups; and the next steps, where we are and what we should do next. Next slide, please. So, in the latest revision zero six.
have
addressed
various
review
comments.
Many
thanks
to
Rick
and
Greg
for
providing
the
comments
so
comments
from
Rick
conspirant
regarding
the
wecheck
flag
for
the
state
does
reflector
specifically
for
the
counters
for
the
direct
measurement
tlv.
M
M
M
So there is ongoing work to extend STAMP, which is great protocol work done in IPPM, and we are using that in SPRING, where there are a few drafts, as well as this one for MPLS. So please review this draft and provide review comments. Next slide, please. So for this draft, currently we have no open issues, and we believe the draft is ready for working group last call, so we'd like to request a working group last call.
B
All right, thanks. Then we move onwards in the agenda; we will have Greg to talk about PAM.
G
Yes, we can go to the next slide, please. Okay, so let's recapture what we're trying to address here. Service level objectives are components of a service level agreement, and they reflect the key performance metrics that are critical to user experience with a particular service.
G
So in some use cases the whole history of each SLO is not needed; what is important and critical is capturing when the SLOs are violated or not being delivered, and that's sufficient to draw a conclusion about the wellness of the service being delivered.
G
So, basically, how well it does in regard to the SLA. And the second observation, as you can see here on the drawing, is that what matters here is when a particular SLO is outside of the range, and there could be different interpretations. It could be that, for example, an operator is interested in an early warning about the dynamics or degradation of a particular performance metric, and will request that the system, the PAM, inform about the violation of an optimal threshold, whereas both the operator and the customer will certainly be interested in knowing when the degradation crosses a critical level.
G
So, for that, we allow in our approach, in the discussion in this document, that two types of thresholds might be associated with any given SLO. Next slide, please. So, what have we done? We received excellent comments from Adrian, and he agreed to join us and collaborate; welcome, Adrian, and thank you. We worked together on addressing his comments and other comments we received in the meantime between meetings. Next.
G
So what we wanted to emphasize is that one of the use cases for PAM and SLOs is IETF transport slices, and if we look at the draft being worked on in the TEAS working group, the definitions clearly state that an SLA is composed of SLOs and service level expectations; and service level expectations, unlike service level objectives, can be expressed but are more challenging to measure. Thus, SLAs are outside the scope of this work.
G
The foundation of this approach is that for each given SLO there is an associated performance metric measured over a predefined interval, and the same interval is applied to all SLOs that are part of the same SLA. But at the same time, because the decision of a transition between the available and unavailable states is made based on the number of consecutive intervals, that can hide the notion of the number of violations.
G
That count of violations is lost. So that's why the ratio metric seems to be a useful metric, and the ratio can be applied equally to violated intervals, when the optimal threshold is crossed, and severely violated ones, where the critical threshold is crossed. Of course, if only one threshold, the critical one, is defined, then the metric for the violated intervals will be identical to the severely violated one. Next slide.
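The two-threshold bookkeeping described above can be sketched as follows; `interval_ratios` and its argument names are illustrative, not terms from the draft.

```python
def interval_ratios(metric_per_interval, optimal, critical):
    """Classify each measurement interval's metric value against the
    two thresholds discussed above. Returns the pair
    (violated_ratio, severely_violated_ratio)."""
    n = len(metric_per_interval)
    violated = sum(1 for m in metric_per_interval if m > optimal)
    severe = sum(1 for m in metric_per_interval if m > critical)
    return violated / n, severe / n
```

As noted in the talk, with a single (critical) threshold, passing the same value for both arguments makes the two ratios coincide.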
G
Please, if you have any questions, let me know. I don't see a queue, so, okay, let's continue the discussion of IETF network slicing. What's important is that a network slice can be.
G
Constructed using a set of connectivity constructs, and these constructs could be point-to-point or point-to-multipoint; for more details, you can look at the NBI, the northbound interface YANG model for network slices. So an SLO can be applied, or assigned, to all the constructs that are included in the composite service, or subsets of constructs can be assigned different sets of SLOs.
G
So thus it allows monitoring these subsets of constructs individually, drawing a conclusion and learning about their availability and unavailability state; in turn, the composite service will be a composition of these subservice PAM indicators and metrics.
G
Okay, let's continue. So we still have some questions that we want to discuss, and we are discussing them among the authors, and we invite your comments, suggestions and contributions. One of them is individual packet metrics: whether it's important or might be.
I
G
A burden; it might be optional. And so, as we originally indicated, future work in this direction will include a YANG data model, IPFIX extensions support, statistical SLOs and, further, defining the policies that guide monitoring service level objectives.
G
As I said, we always welcome comments, questions and contributions. So we would like to better understand the state of the working group adoption poll and resolve questions that were raised during the adoption poll. We have a working version that includes updates resulting from the comments, but I believe that we still have one discussion that needs some closure, or at least we would like to understand what the status of the working group adoption poll is.
E
E
In general, you know, we had decent feedback. You know, to point out, a number of the supportive comments were coming from co-authors, and so, you know, we don't count those quite as much, of course. So, you know, we definitely had decent support; there wasn't a lot of detailed feedback in that, and the one main piece of detailed feedback was from Benoit, and that, unfortunately, hasn't gotten more response.
E
So I think, when the chairs discussed it, we would like to see some resolution or more discussion there, or, alternatively, more in-depth comments or reviews from other people. So I don't know if Benoit was, you know, in the room, or watching this discussion.
E
I don't see him, but you could also try to help reach out. Or, if others in this room are interested in this work, please feel free to chime in on that call-for-adoption thread and add support, because at this point we're kind of 50-50.
E
I'd say, as far as getting consensus here.
G
G
Authors and contributors; I'll just refer to the authors. So we discussed, you know, his questions and comments and responded to him during the two-week period of the adoption poll, but for some reason we heard nothing back from him on whether our responses, what we believe are straightforward updates based on his comments, have already been incorporated into a working copy.
C
G
That is not uploaded yet. We felt that, yes, I agree it's beneficial to have a conclusion, but at the same time it's not clear that his comments are really so critical that they cannot be resolved in the course of the discussions as the document progresses. But that's probably the authors' view, right.
E
That's fair. I mean, one option certainly would be, if you have this working copy and you have other changes, to upload a new version; we do a new call on that version, and if we get consensus and we don't get objections, then we can move forward.
G
B
B
Okay, now we're moving into the lightning talks section here, so we'll have a bunch of presentations and they will be rather quick. So please really try to make your presentations short and crisp. We will start with the enhanced alternate marking.
N
N
So maybe you know that RFC 8321bis and RFC 8889bis are in the RFC Editor queue, so that work is quite complete. I also want to take the occasion to thank Martin for his great help on this. However, there are some pending points that we want to explore. For example, in some protocols there are no additional bits that can be used, and also some deployment experience gives new requirements.
N
For example, the entropy of the pseudorandom flow identification can be increased in some cases, to extend the number of flows that can be monitored. And there is another point that we want to take into consideration, namely the implementation of the whole framework, including the multipoint measurements.
N
So that's why the draft aims to consider all these aspects and generalize the alternate marking data fields for all the transport protocols, in order to introduce new metadata: new flow monitoring, flow identification extensions and new flags. The extended data fields can be used for several applications: to have a shorter marking period, denser delay measurements, increased entropy of the flow identification, more monitoring identifiers, and automatic setup of the backward direction.
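For context, the alternate-marking measurement that these extended fields serve works by counting packets per marked block at two measurement points; a minimal sketch of the per-period loss computation, in the spirit of RFC 8321 (the counter values in the test are made up):

```python
def per_block_loss(tx_counts, rx_counts):
    """Alternate-marking loss per marking period: upstream and
    downstream nodes each count the packets of every marked block;
    the per-block counter difference is the loss in that period."""
    return [tx - rx for tx, rx in zip(tx_counts, rx_counts)]
```

Because counters are compared block by block, the marking period bounds how quickly a loss can be localized in time, which is why a shorter marking period is listed among the applications above.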
N
B
D
D
So, as we all know, in-band measurement in the network environment utilizes the controllers to collect the data and calculate the packet loss and delay. On the other hand, in a multiple-domain scenario.
D
Domain controllers collect measurement data of each domain, while the end-to-end results are calculated across the domain controllers, so their interaction is required. The controller interaction introduces complexity and latency into the end-to-end measurement, which is basically there to guarantee the high SLA requirements of customers, such as bandwidth and latency. So we would like to propose a distributed flow performance measurement method: without the participation of the controller, the measurement results could be used by the router to select the forwarding paths that meet the SLA requirements.
D
The second draft is also about flow monitoring, in IPv6-based networks. Next slide, please.
D
The alternate marking option provides enhanced capabilities, and all those advanced functions and abilities are helpful to the application and deployment of per-flow measurement, which is also called in-band network telemetry; it is being researched and deployed in China Mobile. From the test results in our lab, it is better for flexible deployment when it is light on node IDs, and a stable measurement period can be.
D
You know, enabled. In this plan, to achieve flexible deployment of per-flow performance measurement based on the alternate marking method in IPv6 networks, without the specific participation of a controller or node IDs, a steerable measurement period is added on demand. More details can be found in the draft. Next slide, please. Also, as the next step, we will continue to improve the draft, and any comments and feedback are welcome. Thank you.
B
Great, thanks a lot. Next up we have the device measurements.
I
O
Slide, please. In this draft we want to propose putting the explicit flow measurement probes directly on the user devices, for example smartphones, tablets or personal computers. This new idea has some advantages. The first one is scalability, because on the user device there are few connections to monitor, and it does not need so many resources to monitor all of them.
O
Using a probe inside the user device gives the possibility of obtaining more precise measurements. For example, if we consider the delay, we can remove the client application delay from the measurement, so we can measure only the delay that is on the internet path. Obviously, another advantage is that the monitoring of both directions is guaranteed. And another big advantage is that we can save on network monitoring equipment, because there could be a sort of user-device and network-probe coordination.
O
So, for example, it is possible to set alarm thresholds on a user device that signal to the network probes which session has to be monitored, and the network monitoring equipment can then concentrate only on those sessions. We also have the possibility to improve some metrics because, for example, the Q bit, which is not end to end, can become end to end if the probe is placed in the user device, and loss bit measures are not affected by losses, so they are more reliable. Obviously, this proposal makes some assumptions.
O
Some privacy-related assumptions: the device owner decides whether his traffic is marked or not, so there should be some sort of decision that the device owner can make on his device, and the device owner can also decide whether to share his performance data only with his internet service provider or, for example, also with national authorities, for stats or other evaluations of the performance.
A
Martin Duke, Google. Okay, "assumption" is an odd word for what the device owner decides and has the power to decide to do. I mean, I think you're right to consider the privacy considerations here, but I'm concerned that this could become a mandatory requirement on some networks, and thus, you know, not really be an optional forfeiture of privacy but rather a mandatory one. Thanks. Yes, it's Monday.
G
G
Next slide, please. So this is just to bring to your attention the work started in the MPLS working group to extend LSP ping with the ability to bootstrap a STAMP session, with some state associated at the session reflector.
G
So those familiar with LSP ping will notice that it is similar to how LSP ping is used to bootstrap a point-to-point BFD session; there are also proposals extending LSP ping to bootstrap point-to-multipoint BFD sessions. So this follows the similar paradigm of using an extension to LSP ping to bootstrap a STAMP session, whether it's point-to-point or point-to-multipoint, and a more extensive discussion will take place this week at the MPLS working group session.
B
P
I have reported this draft about path consistency over SRv6 before. Next, please. When measuring with STAMP or TWAMP Light, it is important to ensure that test packets and reply packets are transmitted on the same path, that is, to ensure that the paths are consistent. If path consistency cannot be guaranteed, there will be some issues. For example, the delay measurement is incorrect, because the timestamps carried in the reply packets don't belong to the path of the test packets.
P
So we propose a method to keep the paths consistent when measuring SRv6 paths with STAMP or TWAMP Light: we use the path segment to associate the transmission paths of the test packets and the reply packets. In this way, we can solve the above issues. Next slide, please. Thank you.
B
M
B
B
L
B
L
Okay, thank you. I'm Juanita from turnover Bell, and on behalf of the other authors I will present this YANG model for the alternate marking method. I first want to explain the alternate marking use cases in China Mobile. With the diversity of service types and the SLA expectations in the 5G period, there are great challenges for the performance monitoring of the backhaul networks. So in China Mobile we have the 5G backhaul network, in which 400,000 MTN devices are deployed.
L
MTN is defined in ITU-T G.8310, and it uses alternate marking to do service-flow-level performance measurement and accurate fault location. We also plan to use alternate marking to provide accurate performance monitoring for the dedicated line service in the very near future. So why do we need a YANG model for alternate marking? As you know, the backhaul and metro networks in China Mobile are large scale, and we need a consistent YANG model to manage the performance monitoring across different domains and different vendors.
L
So we propose a YANG model for the alternate marking, and we have three objects here: the global object, the head node and the end node. In the global object we have some labels and also some flow modes for the global configuration, and in the head node.
L
We can configure the flow ID as a key parameter, and also the alternate marking period, hop-by-hop settings, the MEP IP and the interface name; there are some other parameters that I didn't list here. And for the end node, we can also configure the flow ID, the VPN, the period, the hop-by-hop status and the interface names.