From YouTube: IETF114-IPPM-20220729-1630
Description
IPPM meeting session at IETF114
2022/07/29 1630
https://datatracker.ietf.org/meeting/114/proceedings/
A: All right, welcome to IPPM. This is the last session. I think we'll probably wait a couple of minutes before starting, to let some other people show up. But if you are remote, maybe someone can give a signal whether you can hear the audio just fine.
C: Okay, we have now been teleported to Philadelphia, so let's continue. Welcome everyone; this is IPPM, the last session of the week. I hope you're still awake and fresh, so let's get on with it. I hope you're all familiar with this slide. If you're not, you should really try to read it by now; it's important for whatever you try to say and contribute here.
C: Meeting management: we're running this as a hybrid meeting, so you can join Meetecho on this link here. If you are on site and you want to join the queue, please do so via Meetecho, either using the on-site tool that you can have on your phone or via the Meetecho session on your laptop.
C: Oh, thank you very much. Do we have somebody who can take the role of Jabber scribe as well, just relaying stuff from the chat? Oh, okay, perfect, perfect. All good.
F: I see I'm still the delegate from Vienna, that's cool. So, Conf State: that's on me. I've sat on that for almost a month; I'll get to it next week. The IPv6 options draft, to be clear, is... well, I have possession of it, but the action item is on the authors, just to reiterate that in case they're listening and don't realize it. And 8321/8889 will probably be a while before getting through the IESG.
F: I think we have to do another big scrub on the experimental language in there; it's kind of loosey-goosey, "you can try this or that" stuff. And frankly, input and help would be useful there, if people are so inclined. Thanks.
C: Right, next slide, please. We have quite a packed agenda today, lots of drafts, so we will be focusing mainly on drafts that we are working on, that are adopted by the working group. We'll be starting with a presentation on IOAM data integrity and deployment, which will be presented together; then we have the IOAM YANG, the STAMP YANG, and the STAMP SR PM protocol, followed by a presentation on the Explicit Flow Measurements draft, which has just been through working group last call. Then we have IPPM responsiveness and encrypted PDMv2. These are the adopted drafts that we focus on this meeting. Are people fine with this order of presentations?
C: So it seems. And then we have one more presentation: it's about precision availability metrics, which will be presented by Greg.
G: The changes we made from the previous version are pretty light, so we can call the document pretty stable by now. The changes include replacing references to drafts that became RFCs; you can find them on the slides.
A: Do you know if there's any plan for other implementations, so we can test interop with this? I don't know how much... right.
A: Okay, I think having the one implementation is probably sufficient so that we can do working group last call reviews, but it would be fantastic to see other implementations as we are progressing the document. So I don't know if we can try to rope people into doing a hackathon at the next IETF or something like that; that could be useful.
G: It's worth mentioning that the implementation is about the integrity of the trace, because this is kind of a corner case in the integrity of IOAM. So that's all for the IOAM integrity.
G: We published the new version right after Vienna, and basically it only includes references to BIER; that led us to consider maybe a working group last call.
J: Hi, Xiao Min from ZTE. I have a comment on this document. I suggest the authors add one more reference, to IOAM Conf State; as you have seen on the chairs' slide, that document has now passed working group last call and is with our Transport AD. So I suggest you add that reference and add some description of the function of IOAM capabilities discovery.
K: We addressed the comments from Andy's YANG doctor early review. There were some major and minor issues. First: use derived-from-or-self for the "when" statements using identities. We followed this suggestion and modified the YANG model. Then: use the interface-ref data type; yes, we aligned with that. And on the use of a plain string as a list key: to address this, we added length "1..max" to disallow empty strings. And to clarify the use of ordered-by user...
K: ...we added a description to clarify this. There is no mandatory profile type in the list, but at least one profile should be added. Then, several descriptions in the model were very simple; we added more detailed information here and there, especially for the mentioned lines. At last, we cleaned up some nits. That's all. Next.
L: Okay, hello. Next slide, please. This is just a quick update: we progressed the work and, as discussed at our previous meeting, included coverage of the RFC 8972 STAMP optional extensions.
L: From RFC 8972 that includes a STAMP session identifier that may be used for session demultiplexing, as well as the extra padding location, timestamp information, and the class of service that allows testing the treatment of different DSCP markings, in one way and in both directions.
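As a toy illustration of the session-demultiplexing idea behind the RFC 8972 session identifier (SSID), here is a minimal Python sketch. The packet layout below is simplified and hypothetical (the real STAMP base packet places the 16-bit SSID after other fields), so this shows only the dispatch principle, not the wire format:

```python
import struct

# Hypothetical per-session state, keyed by the 16-bit SSID.
sessions = {}

def register(ssid):
    sessions[ssid] = {"received": 0}

def dispatch(packet):
    """Demultiplex a received test packet to its session by SSID.
    For illustration the 16-bit SSID is simply the first field here."""
    (ssid,) = struct.unpack_from("!H", packet, 0)
    if ssid in sessions:
        sessions[ssid]["received"] += 1
        return ssid
    return None  # unknown session: ignore or log

register(7)
print(dispatch(struct.pack("!H", 7) + b"payload"))  # 7
print(dispatch(struct.pack("!H", 9) + b"payload"))  # None
```

With the SSID, many concurrent test sessions can share one reflector port, which is the motivation for covering it in the YANG model.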
L: Also the Direct Measurement, Access Report, and Follow-up Telemetry TLVs, to improve the accuracy of timing measurements, in particular in virtual environments (NFV), and authentication for the extensions.
L: Next slide. So next we'll continue working, and we'll try to get it ready for working group last call.
L: Any questions? Please take a look at this document, and please send your comments and questions on the mailing list. Thank you.
A: Yeah, thank you for this. When we have a new version that we think addresses all of those early comments... it looks like that early review was done quite a long time ago. Yes, 2018. So I'm wondering if it would make sense: you can just let us know on the list, and then we can kick off another YANG doctors review.
L: Yeah, and then we'll do last call. We're following the list of comments that Mahesh provided and addressing them; when we feel comfortable, we'll appreciate your help.
I: This is a very brief update on the revision that we posted recently, just to highlight some work in other working groups and the next steps.
I: So, many thanks to Foote for his help with aligning the draft with RFC 8972 for the flags and the TLVs. The new flag we had defined, the verification flag, applies to all the STAMP TLVs, including the two TLVs defined in this draft.
I: This is also added for the two TLVs defined in this draft, and it also allows us to transmit the TLV flags back to the sender. Other than these two updates, we added experimental values for the TLVs in order to facilitate interop testing and implementation, but we have also made the request for early allocation as well.
I: We do have a few companion drafts in other working groups: this one in SPRING, for SR PM using STAMP; there are also ones for the enhanced SR PM, as well as one for the MPLS pseudowire. Next slide, please.
I: And we are seeking your comments and suggestions at this point, as well as the early allocation. Thank you. Greg?
L: Hi, thank you, Rakesh. Just an FYI: we talked with Foote about some other methods, and we started work on a document on IP/UDP encapsulation of STAMP in MPLS, using LSP ping to bootstrap the STAMP session, along with controlling the path for reflected test packets. We're aiming this work at the MPLS working group, but we'd appreciate your reviews and comments in both the MPLS and IPPM working groups. Thank you.
A: That's great to hear. So regarding the early allocations, since that's something we can do: are there any objections to doing those early allocations? I guess, Greg, we had gotten a lot of comments from you before; do we think it's okay to go ahead and ask IANA for that now?
L: I hadn't thought of my comments as objections; they were comments. And, as Rakesh pointed out, we had a meeting and we realized that there are many ways to... okay, the problem that this draft addresses is a real operational problem. So now we've started to work on a slightly different approach to addressing the same problem. So yeah, it's perfectly fine to go ahead and help with the implementations and get the early IANA allocation, to put it on a good footing.
E: Oh, hi everybody. I apologize for my voice; I've got a vocal cord problem. I'll go through the video here briefly; the slots for the media are already taken, blah blah. Okay.
E: We've made some real, substantial progress in the interim period between meetings, and that's thanks to Tommy and Marcus for initiating the SecDir review, and thanks to Brian Weis, who has responded to us twice now. This first page is mostly about the comments we've already resolved in the -02 of the draft.
E: Then we got some more comments and we had some open issues, so we're going to go on with that, as noted in the remaining slides.
E: But let me say this: I think a lot of things have come out of the SecDir review that are really valuable for anybody designing an active measurement, or a protocol to deliver that active measurement, and I want people to think in those terms: how can I apply this to what I'm doing? It's certainly applicable to anything beyond the capacity protocol, and the truth is the capacity protocol is applicable to things beyond...
E: ...just measuring capacity. We've been measuring loss, latency, reordering, duplication; we've been measuring everything since day one. It's really lined up very well as a UDP-based transport to measure what everybody needs to measure.
E: Excuse me. So, there are two categories of changes: text clarifications alone, and text plus protocol modifications. The text was clarified in -02.
E: We also clarified that we use a conventional communication setup with a well-known port at the server, and Brian's observation that authenticated mode can help us with protections for features like bit-error checking. Let's see... like I said, all that sort of stuff is touched on in -02. And we had the original idea of four security modes of operation: the unauthenticated mode, and the password-protected mode...
E: ...which is implemented; then authentication for all the important messages; and "encrypt all the things". We were looking for one more recommendation on item D here, and that's been a topic of the other messages we've exchanged recently.
E: So if we go on to slide three here... Brian, or not Brian; Tommy, or whoever's driving. Who is it? Is it Tommy? Okay, thanks, Tommy.
E: Yeah, cool, thank you. So the main things to keep in mind here: the previous draft and the current draft describe protocol version nine, and that looks like this. We've got a setup exchange, and we've got a test activation exchange.
E: Both are parts of the setup phase, and we look at those differently from the test phase in terms of their demands on the hosts, their requirements, and the information they expose and exchange. The items in red: in -01 of the draft, we were adding the server admission control and also the load adjustment algorithm check.
E: What Brian's basically asking us to do is add the auth digest and processing on the reply in this test setup, so that would make a complete authenticated exchange for the initial commands, for things like the ephemeral port and the bandwidth admission check and so forth. And then what he's also asking us to do is add the authentication digest on the request and reply for the test activation exchange.
E: So that's an important add. We think we can do this; obviously it means a protocol modification, updating the fields and so forth, but that seems doable. Then a little more controversy comes when we get to the load PDUs and the feedback messages. So let me talk about the feedback messages first. We're sending these load PDUs... what the heck is this? "Lies, more lies in browser user agent strings," from Rich Salz. I don't know what that means. Rich, are you there?
E: I guess not, okay. So, when the load PDUs are flowing in the test phase, we've got these feedback messages, at a 50-millisecond default, and that's where we communicate the loss, the delay, the receive rates, and all the other parameters I mentioned, like reordering and delay variation. Or, if the server is making these measurements, then the server sends the new sending rate down to the client for an upstream test.
E: So that means sending the sending-rate structure down, and basically Brian's question here is: can we add the authentication digest and processing to the feedback messages? And yeah, this is what we'd like to do. It's probably going to be an additional option beyond authenticating the control phase, the setup phase here, but it seems worthwhile to try to do that.
E: So that's cool, but the place where we're having problems is in adding the authentication digest and processing on the load PDUs, because obviously, to measure capacity, we're sending a lot of them, and we're going to encounter hosts, either at the client or the server or both, that have processing limitations.
E: And so you're going to see us pushing back on the idea of authenticating every packet in the load PDU stream; we've got so much other stuff to do. All right.
E: So, three things to keep in mind. Test setup and test activation are the control exchanges; they're the things that we would probably easily be able to authenticate, and probably also encrypt. And if we were to employ encryption, then we'd also need an additional packet to open an ephemeral port on the firewall. Brian really looked at our firewall operations and gave us some really good advice on that.
E: And this is the kind of stuff, you know, the firewall at the server; this is the kind of stuff that would be really useful for anybody planning an active protocol and installing it in the network. So these are all things to keep in mind as I go through the details. Now, Tommy, you can switch to slide five.
E: We may need to go back to slide four occasionally, but let's try to just go ahead. So, firewall operation at the client: the client basically initiates all the exchanges, so we punch our own pinholes in the client firewall and we're okay. At the server, though, our current practice is that we open an ephemeral port range: whatever port range comes back from the client, we basically allow. But if we put that dummy packet in, as I just showed, that would open the pinhole on the client firewall for the two-way exchange and the test activation, and that seems likely to work whether we encrypt or whether we just authenticate.
E: So I think we can probably handle that with at least one more dummy packet; that's again a protocol modification we'll be looking at. Then, moving on here, we need to look at reorganizing the modes of operation. Basically, based on Brian's input, we're looking at required authentication for the control messages: the test setup exchange and the activation exchange.
E: They would both be authenticated in a required mode of operation. Then we could have optional authentication for the data messages, and maybe only the status messages, as I mentioned; optional encryption for the setup messages, and maybe the activation messages too; maybe using DTLS for those...
E: ...exchanges. And also maybe, and this is a big maybe, reuse the key from the DTLS session in the authentication aspects. Then we'd also have this optional unauthenticated mode, which obviously we've got working now. So that's our reading of the current requirements and options for the various modes of operation.
E: So, going deeper here: we prefer not to add the digest on the load PDUs.
E: If an attacker adds the stop bits, jumping in on a message, then you'll see a premature end of test; but it's no threat to the Internet, just kind of annoying. And adding the SHA-256 digest significantly increases the minimum packet sizes.
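To make the size cost concrete, here is a minimal Python sketch. The PDU layout is invented for illustration (a bare sequence number plus padding, not the draft's actual load PDU format); the point is the fixed 32 bytes that an HMAC-SHA256 digest adds to every packet:

```python
import hmac
import hashlib
import struct

KEY = b"example-shared-secret"  # stand-in for a configured key

def build_load_pdu(seq, payload, authenticate):
    """Build a toy load PDU: a 4-byte sequence number plus payload,
    optionally followed by an HMAC-SHA256 digest over the rest."""
    pdu = struct.pack("!I", seq) + payload
    if authenticate:
        pdu += hmac.new(KEY, pdu, hashlib.sha256).digest()  # +32 bytes
    return pdu

plain = build_load_pdu(1, b"\x00" * 8, authenticate=False)
authed = build_load_pdu(1, b"\x00" * 8, authenticate=True)
print(len(plain), len(authed))  # 12 44: the digest adds 32 bytes
```

For a small probe packet, that overhead (plus the per-packet HMAC computation at both ends) is why authenticating every load PDU is being resisted.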
E: So basically we're trying to avoid that if we possibly can, and we think we can. On the other hand, with the status PDU, there's substantial stuff to protect, integrity-wise: as I mentioned, the new measurements, the sending rate command, and the sampled RTT measurements. Those are all sort of important to protect, so it seems viable to protect those with the digest.
E: But we need to keep in mind the fact that round-trip time measurements are taking place on that 50-millisecond feedback. So it's the old trade-off between accuracy and...
E: Okay, key management. This is a brand new thing that Brian has raised; it's good to know about. We're currently using manually configured keys, one per server. He suggested we look into RFC 7210, which we'll do if we add...
E: ...a key identifier; that could help us do the key management. We don't have that, and we also don't have a config file for the key and the ID. So these are things we could look into adding, Brian suggests; or we could add a section just describing orderly key rollover. So there are lots of options there. But this is excellent feedback and I'm really glad to get it.
E: Then, DTLS for confidentiality in the setup phase: it adds retransmission and in-order delivery, and those are good things, but at the cost of a fairly significant DTLS setup, as I understand it. I've been looking at the pictures; I see Tommy nodding there. It's something to keep in mind.
E: It seems possible, so we're going to look into doing that. And then, again, I mentioned deriving a key from the DTLS session; we'd be looking at that on the feedback messages, and we're wondering if the support necessary for that, deriving from the DTLS session, is in OpenSSL. It's a question.
E: Then we've got the topic of silent rejection during the setup phase. In unauthenticated mode we'd likely use silent rejection, because we don't really know where the requests come from; there's no authentication. But if we've got a successful validation of authentication, then we could return the full rejection message with the error code. We've sorted this out in the follow-up discussions with Brian. And if we have authenticated mode with failed authentication, we could have the silent rejection again.
E: On the other hand, at compile time, to help troubleshooting, we could turn on non-silent rejection; that would probably be a good thing. So these are things we can look at doing. Also, the client does not currently validate the server setup response.
E: I think I mentioned that in the picture, so we need to be sure that the digest checking is expanded to cover the setup response; that would fix it. Also on my mind: the authenticated timestamp is not a complete protection against replay attacks.
E: Brian pointed that out. We could add a record of previously received messages within that window, and we could add an ID which can't be replayed with the same HMAC. But the idea is that this is an infrequent diagnostic message: we can't measure capacity all the time, and also we're not NORAD here. So maybe we don't need to worry about this quite so much, but the ID might help.
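One way to picture the "record of previously received messages plus an ID" idea is a small replay cache scoped to the timestamp acceptance window. This is only an illustrative sketch under assumed semantics (a 5-second window, one ID per message), not the draft's mechanism:

```python
class ReplayWindow:
    """Toy replay cache: reject a message ID already seen within the
    acceptance window, and reject stale timestamps outright."""
    def __init__(self, window_seconds=5.0):
        self.window = window_seconds
        self.seen = {}  # message ID -> arrival time

    def accept(self, msg_id, timestamp, now):
        if abs(now - timestamp) > self.window:
            return False  # outside the authenticated-timestamp window
        # forget IDs that have aged out of the window
        self.seen = {m: t for m, t in self.seen.items()
                     if now - t <= self.window}
        if msg_id in self.seen:
            return False  # same ID replayed under the same HMAC
        self.seen[msg_id] = now
        return True

w = ReplayWindow()
print(w.accept(42, timestamp=100.0, now=100.1))  # True: first sighting
print(w.accept(42, timestamp=100.0, now=100.2))  # False: replay
```

The timestamp alone bounds how old a replayed message can be; the cached IDs close the gap inside the window, which is the combination being discussed.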
E: So those are things where we're thinking about ways to solve them, and again, thanks for the comments. Next slide there, Tommy. So, the kind of summary that came up: Brian's "mode D, encrypt all the things" is the safe advice.
E: Strong authentication for all modes is a good choice; we've got that for all of them that we think we can do. Make it optional for a site to deploy: that sounds good. Require authentication of the setup messages: we're going to have a mode for that. Make authentication optional on the data plane: that seems defensible to Brian, because he understands the effect on the test accuracy; the next SecDir reviewer, or the ADs...
E: ...might not understand that, so we've got to make that clear. And since orderly key rollover is a good thing to have, we'll look into adding the features to support that, and also DTLS and possibly reusing the keys. So that's a good summary, I think, of all the kinds of things we're looking into. But notice that Brian asked a question: he said, I haven't seen anywhere that full encryption is a requirement. And I had taken it from the wording of "privacy...
E: ...is the default" in the Pervasive Monitoring Is an Attack RFC, and also from Ted Hardie saying that encryption has to be on by default in IETF protocols. But that's not written down anywhere...
E: ...easily. But putting this all in a tunnel is going to have an impact on the measurements, and it's going to have an impact on the host.
E: And note that it's basically not in this list. So this is Brian's advice; he's not speaking for the SecDir, he's not speaking for the ADs, but this is all very reasonable to us. So we can do this, I think, and that's where we're going to go for now. It would be great if we could hear "we want you to do two more things too."
E: So anyway, let's keep bugging people about that. Next slide, please, Tommy.
E: I mentioned that we can have new types of algorithms supported by the protocol. So here I've got a plot where we've got megabits per second on a DOCSIS downlink, up to 1,000 megabits, or one gigabit, and I'm comparing, on the left, the type B algorithm, which is the current default, to a new algorithm which we're calling type C, and which is implemented in the new running code.
E: So let's look at these measurements. Basically, it's two ten-second tests in series on the one-gigabit downlink measurement, with udpst 7.5.0 in debug mode, and we've got the 50-millisecond feedback measurements, which you can get from the debug output; packet loss measurements are in blue, so you see those counts.
E: ...nice to the network. We get some bursts of loss here, but not much when we're testing in the steady state at the maximum.
E: Now, the big difference between type B and type C is that we're going to continue to retry a fast ramp-up mode, and the fast ramp-up mode is very different: instead of being linear, at a factor of 10, it's now multiplicative, at a factor of 1.5 of the current sending rate. By the way, that's a good thing. And now what we see here is the...
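To see why a multiplicative ramp matters, here is an illustrative Python comparison of how many feedback intervals each strategy needs to reach a gigabit-class rate. The starting rate, step, and target below are made-up numbers for shape only, not udpst parameters:

```python
def linear_ramp(start, step, target):
    """Linear ramp: add a fixed step each feedback interval."""
    rate, intervals = start, 0
    while rate < target:
        rate += step
        intervals += 1
    return intervals

def multiplicative_ramp(start, factor, target):
    """Type C-style fast ramp: scale the current rate each interval."""
    rate, intervals = start, 0
    while rate < target:
        rate *= factor
        intervals += 1
    return intervals

# Reaching 1000 Mbps from 10 Mbps (illustrative numbers):
print(linear_ramp(10, 10, 1000))           # 99 intervals
print(multiplicative_ramp(10, 1.5, 1000))  # 12 intervals
```

At 50 ms per feedback interval, that is the difference between seconds of ramp-up and well under a second, which matters for short tests and for mobile links whose maximum rate keeps moving.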
E: ...in the mobile testing. And if you want to have short intervals of testing, like five seconds or things like that, or if you're in mobile and you expect lots of variation in your maximum data rate, because you're switching back and forth between 5G and the others, then type...
E: ...C is for you. I note that the RTT variation is kind of an underestimate; the servers and everything else here are kind of limited to one gigabit.
E: ...responsiveness when it matters, and we do that in the same context as what we call a speed test; we do it in the same context as the capacity tests. And in the responsiveness metric, as I'm reading it, the big difference is that we're making the measurements at UDP, and not at TCP or HTTP.
E: So it's kind of a complementary thing to the other measurements that have been proposed.
C: Oh, we have Martin in the queue. Would you like to take his comment now, or would you like to take a look...?
F: Yeah, let's give your vocal cords a chance to relax. Do you anticipate delivering this document to me in the next six months or so?
F: So, I mean, you asked to refer to the previous conversation about what will get through the IESG. I don't want to presume to speak for Roman or Paul, and I personally don't have a strong position on it. Maybe it would be good, rather than just sort of guessing, to just start a dialogue with them; I'm happy to be included. We can just start discussing how much of a deal-breaker this is for them.
E: A good point. So at least we'll keep one of them, though, right? Okay, good. Thanks, Martin; I'll do that, and we can refer to the SecDir reviews as well. Thank you. So, next steps.
E: ...it says... oh, look, there are pins popping up; it looks like a roadmap to me. That's because we've talked to lots of people who are doing this, and we're really looking forward to everybody's input and ideas on how to do this.
E: Additionally, on what we can do with it: in the backup slides you'll see that we can implement alternative rate programming to emulate applications. That's a really valuable feature of this protocol: it's not just capacity, it's almost anything you can imagine that you can write the load adjustment system for. It can be static, and it can be dynamic, based on the measurement feedback. So think about this; I'm almost sorry I called it a capacity measurement protocol, because we can do so much more than that.
E: There is a comment there from Will Hawkins: "Also have the same feedback and problem in responsiveness. The presence of encryption in the protocol means that it's more difficult to measure low-power, limited devices. We think this problem is significant."
E: He's basically agreeing with me: we've got to let the measurement side of this go loose on encryption to effectively get the right answers. Thank you, Will.
E: And Edward says it looks like we lost the room.
C: Yeah, I think you can hear us, because I enabled my local microphone here, but we'll hopefully get this fixed as soon as possible.
A: Yeah, actually, Al, I did have a couple of comments myself. That's fine. So, in general, I agree with a lot of the approach and sentiments you're expressing. Actually, could we jump back to, like, slide...
A: So, when we're talking about the authentication of the data, the actual load packets: I would certainly agree that it doesn't seem like a great idea to try to add more authentication to those, since it's a pretty minimal attack. I have a couple of questions. You mentioned the three-second timeout here; is that a negotiated timeout in the initial test setup, such that it is just an example time, or is it a fixed time in the document?
E: A #define.
A: Okay.
A: And then I was also wondering: if, for some reason, we get pushback on not having these authenticated, going into what we actually think an attacker could do with this would be good. I mean, you mentioned they could stop the test prematurely.
A: Would there be something we could potentially do at the end, either in a status feedback or in the final message, which would be authenticated, to indicate whether one side thought it sent a stop, or just essentially to detect after the fact that something like a stop bit was set by an attacker? So you could confirm that, yes, we ran this test and there was the number of stops that we expected to see; or there was a mismatch, and so we should throw this result out.
E: Yeah, I think the client can send its own stop bit, and that would either be in the load PDU, if it's an upstream test, or in the feedback, if it's a downstream test. And both the client and the server have the timeouts for ending the test.
H: Hello. Working great. So, I'm Igor Lubashev, and I'm going to talk here about the Explicit Flow Measurements draft that's in working group last call. If you've been following it, it's basically been merged together from two different drafts that explored slightly different techniques to solve a very similar problem; that's why we have quite a list of authors here. Nothing much has changed since the last call, so we have the same version of the draft.
H: We haven't received a whole ton of feedback in last call, maybe because it's perfect; but I'll take this opportunity to reintroduce the problem, show you what we've come up with, and then ask: if you do have some comments to give during the last call, it's an awesome time to give them and to improve things. So, the problem is: network operators need to be able to detect and troubleshoot problems such as loss and latency issues, and to do that...
H: ...it's really best to actually be able to observe the problem, because otherwise the response to a trouble call is: "We looked at a few statistics here and there, and we saw nothing, sorry." That might just invite: "Well, maybe you haven't looked enough; maybe you should look at a few more." So, with protocols like TCP, as a last resort you could pull out Wireshark and try to observe the problem as it's happening...
H: ...if somebody is saying that you have loss in your network and you can't find it. But observing it with encrypted protocols is not going to work well, because the transports are encrypting all the headers, everything that would be useful for this purpose, and they're doing it for a good reason.
H: Transport headers could leak some information that would disclose more than the endpoints wish to disclose, so there is a privacy risk. And the second concern is that on-path devices, trying to be very helpful, need to understand what they're seeing; and once they understand what they're seeing, they don't understand anything different, and may cause problems. As an effect, you have protocol ossification: basically, you just can't do any innovations, can't do any changes. We're all familiar with that.
H: There are other uses for these techniques that are unrelated to encrypted transports. So we're trying to develop techniques that can be used with just a few bits (really a few: one, two, or three) that would be enough to figure out quite a bit of information about any problem that's happening. The advantage of having just...
H: ...a few bits is that, first of all, fewer bits means it's easier to do any sort of privacy and security analysis; you're much less likely to leak stuff inadvertently, especially if your bits are purpose-built and not built for some other purpose. And second, an explicit signal, as opposed to an implied signal from transport headers, means that it's not integral to the operation of the transport, which means you can just turn it off, or you can enable it only selectively when you need it; you can grease them, and that will help against protocol ossification.
H: Some of the prior art: the latency spin bit. After quite a bit of debate in the QUIC working group, it was added to QUIC version 1, and its purpose is to be able to measure round-trip latency.
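The spin bit idea can be sketched in a few lines: one endpoint inverts the bit once per round trip, so a passive on-path observer can estimate RTT from the spacing between bit transitions ("edges") seen in one direction. This toy Python simulation, with made-up edge timestamps, illustrates only the observer-side principle, not QUIC's actual packet handling:

```python
def observer_rtt_estimates(edge_times):
    """Given the times (seconds) at which an on-path observer saw the
    spin bit toggle, each gap between consecutive edges approximates
    one round-trip time."""
    return [round(b - a, 6) for a, b in zip(edge_times, edge_times[1:])]

# With a roughly 50 ms RTT, edges appear about every 50 ms:
edges = [0.000, 0.050, 0.101, 0.149]
print(observer_rtt_estimates(edges))  # [0.05, 0.051, 0.048]
```

The appeal is that this needs only a single unencrypted bit, and an endpoint that does not wish to be measured can simply stop spinning it.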
H: Now, in this draft we have a whole ton of additional bits that are designed to do particular measurements: latency measurements and different kinds of loss measurements. They can be used together in combinations. Next.
H
Some of them... again, the goal is not to read the eye chart, and a table like this is in the draft, but the idea is that we are discussing what the different bits do. We compare their performance in terms of fidelity, versus latency of the measurement, versus how many measurements you can do on a particular flow. So that's about latency. Next.
H
And similarly for loss: there are many different loss metrics you can derive using different bits or their combinations. Again, we have a bunch of analysis comparing the different alternatives: what you can do with just two bits or one bit, the trade-off between, again, fidelity and how quickly you can see loss after it's happened. Do you see the loss shape, or do you see just the approximate average loss per connection round trip, perhaps? Anyway, a lot of different analysis here. Next.
F
Hey Igor. Martin Duke, Google. The square bit in particular seems to have a lot of overlap with 8321, which is...
H
F
Okay, so, like, I think we should probably figure out where we're going to discuss and specify this square bit thing, unless it's just referring to that draft.
H
So it is referring to... I mean, the square bit was not invented here; it is just one of the signals. The purpose of the draft is not to specify bits on the wire; that will be up to the protocols, how they choose to implement it. The purpose is to give you techniques, to analyze techniques, and basically to say: that's what you can do with as many bits as you want to spare. Yeah, I...
F
I mean, 8321, even though it's currently experimental and headed for standards track, I guess it's the same thing: it is not attached to a protocol, and I guess there are other drafts that are instantiating it. But also, interestingly, I think they've gone away from the "N packets" approach to having a time interval.
F
So I don't know; I think we should get our story straight on that, but...
H
F
H
N
Hello? Yep, yeah, just to answer Martin's point: yes, the square bit is introduced also in 8321, but the difference, as you know, is that in 8321, in particular in the proposed-standard document, we focus only on fixed-timer blocks, while the square bit is based on a fixed number of packets. So in case this explicit flow measurement work goes standards track, of course, it will need accurate detail, as we do in 8321.
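The alternate-marking idea being compared here (RFC 8321 with fixed-timer blocks, the square bit with a fixed number of packets) can be sketched as follows. This is a simplified illustration of the fixed-packet-count variant only; the block size, packet stream and loss pattern are invented for the example.

```python
# Sketch of alternate-marking loss measurement: the sender flips a one-bit
# mark every BLOCK packets, and an observer counts packets per marking
# period. A count below BLOCK means packets were lost upstream of the
# observation point.

BLOCK = 64  # square-bit style: fixed number of packets per marking period

def mark(seq):
    """Sender side: mark bit for the seq-th packet (0-based)."""
    return (seq // BLOCK) % 2

def count_losses(observed_marks):
    """Observer side: given the stream of mark bits actually seen, return
    per-period loss estimates (nominal BLOCK minus observed count)."""
    losses, count, current = [], 0, None
    for bit in observed_marks:
        if bit != current:
            if current is not None:
                losses.append(BLOCK - count)
            current, count = bit, 0
        count += 1
    return losses  # the last (possibly incomplete) period is not reported

# Sender emits 3 full periods (192 packets); 5 packets of the second
# period are dropped before reaching the observer.
sent = [mark(i) for i in range(3 * BLOCK)]
dropped = set(range(BLOCK + 10, BLOCK + 15))
seen = [b for i, b in enumerate(sent) if i not in dropped]
print(count_losses(seen))  # -> [0, 5]
```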
F
I just want to clarify that I technically have no dog in this fight, but I think the community should decide the best way to do this, because 8321 had both, and the 8321 bis is down-selecting to time. So, like I said...
H
F
Well, I mean, I don't think it's a question of name clashing. I think, if you're going to use a bit for loss detection, the community should decide how that works, whether it's packets or time, either one; but we should decide. Thanks. Thank you.
A
Next slide. Just a comment on that: does the current document, where it defines the square bit, talk about the time option, or does it only describe the...
H
No, it talks about packets. Yes.
A
A
F
Sorry, I just... yeah, I think that's reasonable, with the caveat that if the 8321 bis is the proposed standard and we really have community consensus that that is the best way to go forward, then we should probably make sure that's clear in any informational document.
H
And yeah, totally, we should discuss it, yeah. Thank you. All right, so this slide is basically talking about this being actually in use. There are a number of industry implementations; for example, Akamai and Orange implemented it and ran this for about a year in production. We may go back... we're...
B
H
Good. So I'll just summarize: a number of implementations from different operators, a number of implementations from researchers; so that's been done. And, just like the last slide, the quick history of it is that we've been running last call since July 6. We just received the last substantive feedback from Marcus about some hidden parts of the delay bits, and just now feedback from Martin. So it looks like we need another revision of it; let's discuss on the list.
C
A
A
O
O
We see it, so you should see it in the presentation mode. Yes, great. So, hello. Yes, this is Responsiveness under Working Conditions. We submitted a -01 version shortly before this meeting. The update from -00 to -01 is listed here: we closed a set of GitHub issues, we merged a few PRs and got a few contributions. In terms of the significant changes, first, Stuart Cheshire added DNS-based service discovery for the network quality measurement, or the responsiveness measurement.
O
That way, if I am on my network, I can basically browse these DNS-based services that are being announced through DNS. I can discover them on my local network, just discover endpoints on my network that allow me to test the responsiveness in my local network. So we hope that this is going to be a very useful addition.
O
We added server-side example configurations in the appendix, so that if people want to deploy a responsiveness measurement endpoint, they can simply take a look at those example configurations. And we did a significant rework of the measurement algorithm and, of course, some wording changes and minor fixes and so on. So I want to double-click on the significant rework of the measurement algorithm.
O
O
The question is whether we would be able to achieve higher goodput by adding more connections, because the BDP of the path is very large, right? And so the whole point of doing this is that we were adding more connections until, again, we reached the maximum goodput, at which point it levels off again. And so now the question becomes: well, did we actually reach the link capacity, yes or no, or do we still need to add more connections? And to learn this...
O
We add yet more connections into the pool until we realize: well, okay, the goodput didn't change, there's no change in goodput, and so we declare saturation, at which point we start the latency probes. We send a set of probes on the load-generating connections and a set of probes on separate connections.
O
So what are the problems with this approach?
O
These latency probes have a tendency to time out, so we may sometimes not even be able to get a measurement. And finally, these kinds of one-shot measurements, happening only at one point in time, have a tendency to be impacted by short-term buffer occupancy variations. We have seen cases on these load-generating connections where there is an effect of what we call synchronized packet loss, where all of these connections get a packet loss at the same time.
O
So these, again, are the drawbacks of this algorithm. What we realized is that the way we can actually solve most of these problems is basically: we keep on trying to reach capacity, but instead of waiting until saturation to start the probing, we start probing right away, and we probe every 100 milliseconds. We probe on separate connections and we probe on the load-generating connections, and we just keep going through the algorithm, continuously probing every 100 milliseconds.
O
We send one probe on the first load-generating connection and one probe on a separate connection. So what does this mean? As you can already see graphically, we have a lot of probes now, and so we get four data sets with this approach. On the separate connections we get the TCP handshake latency, the TLS handshake latency and the HTTP request/response latency; on the load-generating connection, which we call "self", we also get an HTTP request/response latency. And so those data sets are now very large.
O
O
First of all, we take the 90th percentile; that way we filter out those probes that happened at the beginning of the test, when the link was not yet fully loaded. Then we need to weigh those four different numbers, and our goal is to put equal weight on the load-generating and on the separate connections.
O
Now, for the separate connections we get three data points, the TCP handshake latency, the TLS handshake latency and the HTTP latency, and we weigh these in the way we show here in the slides: one sixth for each of the separate-connection data points and one half for the load-generating data point. That way both are weighted equally.
O
This is going to give us a number in terms of seconds, and as we want to express responsiveness in terms of round trips per minute, we basically normalize it to RPM. For those interested, this is the final formula: 60,000 divided by the p90s of the different values, appropriately weighted the way we have described.
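The aggregation just described can be sketched in code: take the 90th percentile of each of the four latency data sets, weight the three separate-connection p90s by one sixth each and the load-generating p90 by one half, then convert to round trips per minute as 60,000 divided by the weighted p90 in milliseconds. The percentile method and the sample values below are invented for illustration; the draft's exact percentile definition may differ.

```python
# Sketch of the responsiveness (RPM) aggregation described above.
# All latency samples are in milliseconds.

def p90(samples):
    """Simple 90th percentile (nearest-rank on the sorted samples)."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(0.9 * len(s)))]

def rpm(tcp_ms, tls_ms, http_sep_ms, http_self_ms):
    # 1/6 weight for each separate-connection data set,
    # 1/2 weight for the load-generating ("self") data set.
    weighted = (p90(tcp_ms) / 6 + p90(tls_ms) / 6 +
                p90(http_sep_ms) / 6 + p90(http_self_ms) / 2)
    return 60_000 / weighted  # 60,000 ms per minute

# Made-up example: ~50 ms handshakes, slower HTTP on loaded connections.
tcp = [40, 45, 50, 55, 60]
tls = [50, 55, 60, 65, 70]
http_sep = [100, 120, 140, 160, 180]
http_self = [150, 200, 250, 300, 350]
print(round(rpm(tcp, tls, http_sep, http_self)))  # -> 265
```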
O
So the advantages of this approach: first, we have a very large sample size, about 150 data points for a 15-second test, which is great; it removes a lot of variance, which avoids all of the issues that we described earlier. We have much fewer timeout issues, because the probing happens right from the start.
O
O
O
That way, any web service could basically expose this responsiveness measurement as a service, and we could simply discover it by hitting the well-known URI to see if there's a JSON config available. Issue number 63: we need to explain the impact of congestion control. It came up in the past, but we haven't yet had the time to address this question, and we want to write a section on how the different congestion control algorithms, like CUBIC, BBR and so on, can affect the responsiveness. Issue number 55...
O
O
Issue number 66, which Will also brought up: we need to allow non-TLS measurements, because if we want to allow, for example, your router to expose the measurement service through DNS-based service discovery, low-end gateways usually don't have the performance to actually fill the link with self-generated TLS traffic. So we want to allow for non-TLS measurements. And finally, the most important one: saturation.
O
So if we look at what I explained earlier in terms of our algorithm: we are trying to reach the maximum goodput. Now, goodput is actually not what we are measuring; we are trying to measure responsiveness, which is the buffer bloat, and so it means we are trying to measure the buffer occupancy.
O
O
So as we go through the algorithm, we start with four connections. And one TCP connection frequently has a limit in terms of how much data it can put in flight; typically this can be around four, six or eight megabytes. For this example, let's pick four megabytes. So we have four connections, each with a maximum of four megabytes of in-flight data.
O
That would give us a maximum of 16 megabytes of in-flight data, which means, as we are on the top graph, we have created those four connections and we are leveling out in terms of goodput. We haven't, however, yet achieved the maximum goodput, which means the buffer occupancy will be zero; there's no buffer bloat happening yet.
O
Now
our
algorithm
decides
to
add
more
connections.
Let's
say
we
add
eight
connections.
Now:
okay,
so
eight
connections
times
four
megabyte
means
16
megabytes.
So
the,
however,
we
are
still
reaching
to
the
good
puts.
We
haven't
yet
reached
this
point
of
inclination
where
we
are
actually
creating
buffer
bloat.
O
Now, as we reach the maximum goodput, we actually start building a queue, and so now the queue starts filling up. But as it's only eight connections, which means 16 megabytes' worth of buffer occupancy, we haven't yet completely filled the buffer; we are only at 16 megabytes on a 64-megabyte buffer. Now our algorithm keeps on adding more connections, so now we are at 12 connections, and the buffer occupancy keeps increasing.
O
Now, at this point in time, we realize: okay, we reached capacity for the goodput, and so we declare saturation and we terminate the test. What does this mean? Well, it means that this is the responsiveness that we measured, but in reality the responsiveness is much, much worse; we haven't even filled the buffer completely yet.
O
So what is the solution to this problem? Well, the solution is: we go through this algorithm, but then we see that the responsiveness is actually still evolving; we haven't leveled out the responsiveness yet. And as we realize that, okay, with 12 connections we reach 48 megabytes of buffer occupancy, we can say: okay, we leveled out at 12 connections, let's add more connections to see if we can push it even higher. So we add more connections, and we realize that the buffer occupancy is still increasing.
O
The only way for us to find out whether we really achieved 64 megabytes, the full buffer occupancy, is to have one more iteration of adding connections; and we realize that by adding more connections, the goodput is not changing, nor is the buffer occupancy changing, and so it means we can declare saturation and we can declare the final responsiveness result.
O
So this is the new algorithm. It means that we not only need to saturate goodput, but we also need to saturate responsiveness, and once goodput and responsiveness stop changing, we declare saturation. So in the draft, for the upcoming version, what we need to change in terms of the algorithm is that we keep adding connections as long as either the goodput increases or the responsiveness is decreasing.
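The revised rule just stated, keep adding connections while either goodput still grows or responsiveness still worsens, can be sketched as a loop. The step size, tolerance and the toy network model below are invented for illustration and are not the draft's parameters.

```python
# Sketch of the revised saturation rule: add load connections until neither
# goodput nor latency (the inverse of responsiveness) changes anymore.

def run_to_saturation(measure, step=4, max_conns=64, tol=0.01):
    """measure(n) -> (goodput, latency_ms) for n concurrent connections.
    Returns (connections, goodput, latency_ms) at declared saturation."""
    n = step
    goodput, latency = measure(n)
    while n + step <= max_conns:
        g2, l2 = measure(n + step)
        goodput_up = g2 > goodput * (1 + tol)            # still gaining goodput?
        responsiveness_down = l2 > latency * (1 + tol)   # queue still growing?
        if not goodput_up and not responsiveness_down:
            break  # neither changed: declare saturation
        n, goodput, latency = n + step, g2, l2
    return n, goodput, latency

# Toy model: goodput caps at 100 (link capacity) from ~7 connections on;
# the bottleneck queue keeps growing until 16 connections fill the buffer.
def toy_measure(n):
    goodput = min(100, n * 15)
    latency = 20 + 10 * max(0, min(n, 16) - 8)  # queueing delay past capacity
    return goodput, latency

print(run_to_saturation(toy_measure))  # -> (16, 100, 100)
```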
O
So this is going to be the new algorithm for the upcoming version. In terms of other news, the open-source Go responsiveness implementation is evolving rapidly, and of course we would like people to try it out. We invite everyone to test it; Will, who is on this call, would be very happy to get pull requests and GitHub issues as well.
O
In terms of the other implementations, Ookla's tests now started measuring load and latency as well.
O
O
So with that, I'm at the end of the presentation, and if there are any questions, I would be very happy to take them.
E
Thanks, Christoph and Will, for your work on this. I think the one thing you're going to want to fix in the draft is the equation for responsiveness. I was about to do something with it, and I noticed that instead of one sixth you've got one third in all the denominators there, except the last one. So I was confused by that, but I see now that you mean one sixth, and that sort of adds up to one with the half-weighted aspect.
E
I think also, when you're reporting capacity, it's based on a lot of connections. I tried it out on a couple of cases; I saw 20 connections and so forth, and, you know, if Matt Mathis were here, he would probably say the same thing: you're...
E
E
I really learned a lot of this from Matt, so I'm just proxying Matt here when I tell you my story. But I think, I mean...
H
E
I think you're on the right track in monitoring the delay as well as the goodput, but there are other factors here too that matter. In our capacity measurement we've actually been looking at the possibility of reducing some of those factors. We found that reordering and duplication happen, and they happen on 5G networks more prevalently than anyplace else, and the truth is those are packets that are delivered, that contribute to capacity.
E
So TCP isn't going to tell you about those; it's going to discard them as it forwards information up the stack. But we can grab those in the UDP measurements, and we end up including the reordered and duplicate packets. Now we're thinking about making that the default, especially for mobile testing. So there's lots of room for improvement in our algorithms here, and I'm glad to keep exchanging ideas with you. Thanks.
D
A
Right, I jumped in the queue, no chair hat on, just to comment. One: earlier you mentioned the issue about using a well-known URI, and looking at that, I think there's some debate about what's in that. Is it just the config? Is it the actual test overall, within kind of HTTP? I mean, there's certainly a sentiment that well-known can be overused.
A
I think this is a decent use of it, but I think it'd be worth dropping a line to Mark Nottingham, who has to review all of those anyway, to see: is this going to be something that would get through the expert review for adding a well-known URI? Any advice there would be good to get. Regarding...
D
A
...the new algorithm, I think everything's good. One of the things that came up in the chat: I think there were questions about, like, why would you not, for example, just measure when the responsiveness starts decreasing? And I think the answer to that is that the responsiveness won't actually decrease until there's enough load.
A
Then one edge case came up to me: are there any scenarios in which the responsiveness only starts decreasing further to the right, such that we could get to 16 connections and our goodput has flattened out, but the responsiveness will only start going up at 20 or 24, and so we could actually stop the test too soon? Is that something we should be concerned about?
O
So, on your first comment: yes, absolutely, you're absolutely right. Unless we hit capacity, responsiveness won't change at all; that's the part on the left. I don't think there's a case where we could hit capacity and only at 20 flows, farther down the road, would responsiveness start changing.
O
M
So we keep a high score of the highest throughput we've seen and the highest latency we've seen, and every time we break that record, we record the new record. Once we've gone for four seconds without setting a new record for either of those things, that means we've added four more connections, we've put more data into the pipe, and neither has changed. And just to back up a little bit, to second what Christoph was saying: as we have more data in flight...
M
M
If we're continuing to add more data in flight, then within four seconds we will have seen the delay go up, and that will cause us to keep testing until the delay stops going up. So that's your answer, Tommy, I think. If we didn't have that four-second window, there would be a risk of a premature exit, but that four seconds, I think, is what makes it work.
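The "high score plus four-second window" logic just described can be sketched directly: keep records for the highest throughput and latency seen, and declare saturation only after a full window passes with no new record. The sample sequence and one-second sampling below are made up for illustration.

```python
# Sketch of the stability-window exit condition: saturation is declared only
# after `window` seconds pass without a new throughput or latency record.

def saturated(samples, window=4.0):
    """samples: list of (t_seconds, throughput, latency) in time order.
    Returns the time at which saturation would be declared, or None."""
    best_tput = best_lat = float("-inf")
    last_record_t = None
    for t, tput, lat in samples:
        if tput > best_tput or lat > best_lat:
            best_tput = max(best_tput, tput)
            best_lat = max(best_lat, lat)
            last_record_t = t  # record broken: restart the window
        elif last_record_t is not None and t - last_record_t >= window:
            return t  # a full window with no new record
    return None

# Throughput ramps up and flattens; latency grows until t=3, then holds.
samples = [(0, 50, 20), (1, 80, 20), (2, 100, 40), (3, 100, 60),
           (4, 100, 60), (5, 100, 60), (6, 100, 60), (7, 100, 60)]
print(saturated(samples))  # records stop at t=3, so declared at t=7
```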
M
Okay, so I'll just say the thing I originally came to the microphone to explain a bit more: Christoph mentioned the DNS service discovery service type. The motivation for that is that I may run the network quality test and get a lousy score.
M
But as an engineer, I want to know why. So we've been talking to home gateway vendors and Wi-Fi access point vendors who actually want to host a test endpoint on their Wi-Fi access points or on the home gateway, so you can eliminate the modem or the DSL from the equation, do a local test, and see: is it my Wi-Fi or is it my cable modem that's causing the problem?
M
P
D
P
Okay, well, just a small comment about this technique to measure bandwidth.
P
I'm not sure how noisy it is, because inside the network you've got some other traffic. I'm not sure if you are measuring just, I don't know, the last mile or something like that, or you are measuring something bigger than that, but I imagine that there is some more traffic, and it could be a little noisy.
P
I mean, the responsiveness is not a straight line like you draw there, so how do you deal with this variation? Because it could be really very, very important when you go through several links. Thank you.
O
Yeah, thanks for your comment. So maybe I didn't introduce that properly at the beginning: what we are measuring here is end-to-end capacity and end-to-end responsiveness, from the client to the server. So it's not necessarily the last mile; it is wherever the bottleneck is for this kind of communication.
L
D
All right, let's move on. Nalini, do you want to present?
Q
So, okay, this is our PDM destination option; it's an IPv6 destination header. And basically what we do: this is an end-to-end measurement.
Q
It's put on at the source, at the end-user client, and we put in a sequence number and timing. The idea is to be able to very quickly separate server time from network time, and the potential users are large enterprises. Okay, so what have we done? We had an early SecDir review, and I'll go through that; we're working on an implementation of this; and we're also testing extension headers across the internet, and I'll talk about that.
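The "separate server time from network time" idea can be sketched with a tiny calculation. This is a simplification: in the actual PDM option (RFC 8250) the deltas are carried as scaled 16-bit fields alongside packet sequence numbers, while the values and field handling below are invented for illustration.

```python
# Sketch of splitting round-trip time using a PDM-style reported delta:
# the responder reports how long it held our packet before replying
# (its processing time); subtracting that from the locally observed round
# trip leaves pure network time.

def split_rtt(t_sent, t_received, server_delta):
    """t_sent / t_received: local clock when request left / reply arrived (s).
    server_delta: responder's reported processing time (s), from its PDM.
    Returns (total_rtt, server_time, network_time)."""
    total = t_received - t_sent
    return total, server_delta, total - server_delta

# Made-up example: 250 ms observed round trip, 180 ms spent in the server.
total, server, network = split_rtt(t_sent=10.000, t_received=10.250,
                                   server_delta=0.180)
print(total, server, network)  # network time is ~70 ms
```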
Q
Q
So this is the SecDir review, and basically they said it wasn't ready, because it's still very early and there are a few things left to do, basically on authentication, authorization and so on; otherwise they think it's pretty good, and so we will continue to work with them. We feel pretty good, because it doesn't look like there are huge amounts to change, and we'll address whatever there is. So that felt good. Okay.
Q
Next: the big thing that we did this time, other than implementation, is look and see whether IPv6 extension headers can actually be used, because if they can't, we're wasting our time in defining this thing; it doesn't matter, encrypted or not, it won't work. So that's what we did: we tested stuff. So, next. And this is the start of the testing.
Q
Q
So I'll show you that. The other thing we did is we did it across a couple of different continents, maybe three or four different continents, and a bunch of different cities. The idea is to see if it's going through the core of the internet, because we want to find out: is it being stopped at the source? Is it being stopped at some transit network? Is it stopped at the destination?
Q
If extension headers are not getting through, then where are they stopped, and ideally, why? So, next. And it was real easy, because PDM, our mod to the kernel, sends PDM with every packet; we can just do a very large FTP, and that's what we did. This one happens to be Toronto to Mumbai, and you can see it was a big old FTP and it successfully transferred. Next.
Q
Please. And you can see our wonderful little PDM extension header right there in the packet trace. Next, please. And in fact, it turned out a lot of these things were fragmented, so we didn't even mean to test the fragmentation header, but it was fragmented, and all those got through just fine as well. Next. So you can see it appeared to go all the way through for a bunch of different sites, a big old FTP going across. Okay.
Q
Next, please. And so we also started doing curls; this one was actually at the hackathon, where we started doing it to...
Q
I think we set up an Apache server in Warsaw with a bunch of junk data in it, so it would create a bunch of fragment headers, and we tested from the hackathon; and here it is, we're doing a curl from the IETF network, successfully. Next. Yeah, so we're going to do a lot more testing, to see kind of where things start, and there's already a number of people who want to work with us.
Q
C
F
Q
Q
R
Hi, Bob Briscoe. Just to say what I said to you when we were chatting in the hackathon, yep: all the vantage points are data centers, essentially; they're the hosting services' data centers, and so nothing's going over a sort of consumer access network.
Q
Q
Q
R
Yeah, because I think that's where you're more likely to find problems. But yeah.
Q
Hey, let's just see, one step at a time. Let's see where we've got problems, because, you know, the other thing too: let's find out what the situation is. Where is it being dropped? Why is it? Because, I tell you, just in our testing at the hackathon we found one bug in a particular router implementation, where the hop-by-hop header just...
Q
It wasn't going out at the source. And so of course the question is what happened when they fixed their bug, and, lo and behold, you know. And now it's super interesting: we're also talking to the FRRouting people. I just had the young man write me back: we'll modify FRRouting to send HBH, the hop-by-hop header. Because what I think is, if we can control all the equipment...
Q
If we can control the equipment, control the endpoints, and know exactly what it is we're testing, I think we have some shot at figuring out what the actual situation is. Yeah, any feedback or comments that anyone has, anything we're forgetting, anything you think we should remember to test: please let us know, we're happy to test it. Yep, we'll have more results next time.
C
N
C
L
Okay, so this is updated. Just to remind you what we're trying to do:
L
In this work, we're trying to look not at how each particular SLO is complied with, but at how the combination of multiple SLOs overall reflects the service as a whole; and we express that as the precision availability of the service that is characterized and constrained by multiple SLOs.
L
So if you look at this figure, you see that there are periods where a particular SLO, a particular metric, is within the acceptable range, but then there are periods when it exceeds the critical threshold. Those periods can be considered service unavailability, whereas when it's within the acceptable range, that's a service availability period. Next slide, please.
L
So this is an update; we already presented it at the virtual meeting for Vienna, remotely.
F
L
We received very detailed and helpful comments from Med, worked together, and Med agreed to join us and continue working on this document. So let's look at what updates we have now, for this meeting, to share with you. Next slide, please.
L
So we clarified the problem statement: basically, what we are trying to solve, what we are addressing. And it's not only about the particular values that the service experiences at a given point in time, but how they relate to the thresholds, the SLOs that are set for a particular metric. Next slide.
L
There was one metric that we missed, so we added "packets since last violated packet". Oh yeah, next slide.
L
And on terminology: if you recall, in the first version we had not yet decided whether to refer to the metrics as "errored" time intervals or "violated". The discussion and the comments we received helped us settle on the "violated" term, so now everything is referred to as violated intervals, severely violated intervals, or violation-free intervals. Next slide.
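The terminology just settled on can be illustrated with a small classifier: an interval is violation-free if every metric is within its SLO, violated if any SLO threshold is exceeded, and severely violated if a critical threshold is exceeded. The metric names, thresholds and samples below are invented for illustration; the draft defines the precise semantics.

```python
# Sketch of classifying measurement intervals against per-metric SLO and
# critical thresholds, using the draft's "violated" terminology.

def classify(metrics, slo, critical):
    """metrics/slo/critical: dicts keyed by metric name (e.g. delay, loss).
    Returns 'violation-free', 'violated', or 'severely violated'."""
    if any(metrics[m] > critical[m] for m in metrics):
        return "severely violated"
    if any(metrics[m] > slo[m] for m in metrics):
        return "violated"
    return "violation-free"

slo = {"delay_ms": 50, "loss_pct": 0.1}        # acceptable range
critical = {"delay_ms": 100, "loss_pct": 1.0}  # critical thresholds

intervals = [{"delay_ms": 30, "loss_pct": 0.0},
             {"delay_ms": 70, "loss_pct": 0.0},
             {"delay_ms": 120, "loss_pct": 2.0}]
print([classify(m, slo, critical) for m in intervals])
# -> ['violation-free', 'violated', 'severely violated']
```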
L
So, as you see, this item can be taken out. There is some more work that we will be doing on this document, and we still have some plans for the future. We would appreciate your comments and suggestions, and please think about joining the work. There is more to...
L
...be done. Next slide, please. So again, we think that, since we merged this work and addressed the comments from Med, the work has matured enough that we would appreciate your consideration for working group adoption.