From YouTube: IETF111-MAPRG-20210730-2130
Description: MAPRG meeting session at IETF 111, 2021-07-30 21:30
https://datatracker.ietf.org/meeting/111/proceedings/
A
All right — so, is there a participant count here? 46, okay; oh yeah, 46 at the top, all right. It's 14:30 Pacific time, so we're ready to start the meeting. I'm Dave Plonka, and Mirja Kühlewind, the co-chair, is also on. This is the Measurement and Analysis for Protocols Research Group at the IETF 111 meeting, online. If you need to contact us, maprg-chairs@ietf.org is up on the screen.
A
Well, I can't — I have to see what it says. It's the Note Well. Okay, so the Note Well for intellectual property is similar to that for the IETF, but please read it in detail if you mean to share any work that might be covered by intellectual property restrictions by you or your company. Slide three — which again I can't see, but I know is the privacy and code-of-conduct portion — so you can find details on slide three of the chairs' intro slides. Can you see them? There are the participants?
A
I don't need to — I mean, I have them up on my phone right next to me, so I guess I can muddle through with that, if you can see them yourselves.
C
The presentation view — top left of the icons on the top row.
A
The goals of the IRTF — I'll let you read those details in your copious free time — but we're not, I'm sorry, we're not a standards development organization, so we don't have some of the same restrictions, and consequently you might see a broader set of topics in the IRTF than you do in IETF working group meetings.
A
Next slide — slide five. Our charter is linked at that link. The group is meant to share nascent — usually nascent — research and measurement work about protocols that are designed and/or operated under RFCs and decisions made in the IETF. So that's what we focus on, and, if you saw our selections, we prefer submissions or contributions that responded to the call for contributions you might have seen on the mailing list.
A
We had one candidate participant that happened not to respond to that. We do that as a sort of form of equity between submitters, so we expect you to respond to the call; we sent that person a personal copy of the call, to which they did not respond, but we would be happy to have them in the future if they'd like. The mailing-list and slides links are up, and the meeting is going on Jabber.
A
So, next slide — we have the IAB workshop slide now. Okay, so the IAB is holding a workshop on Measuring Network Quality for End Users. It's coming up in mid-September, the 14th through the 16th. There's a submission deadline coming right up, on Monday the 2nd; it says midnight anywhere-on-earth, which I believe means any time during that day, the 2nd. So submit to that if you're interested, or watch for it coming up in September.
B
For IAB workshops — these are usually invitation-only, so you need to submit a short position paper in order to participate, but it doesn't really have to be a big burden. We just want to know what your thoughts are and what your interests are, and then we can have you at the workshop, if you want to.
A
Thanks, Mirja. Then let's switch to the agenda slide. We've got a tight schedule, but these have worked well for the past three or four meetings. I'll try to keep us to the times of day listed in the agenda, so I apologize in advance if I have to interrupt you to tell you how much time is left in your segment — but please bear with me and help me keep that time.
A
So it's fair to the later participants: we're going to have "Measuring Shutdowns and More" from Mat Ford up first — and Mat, you're welcome to share your slides if you can. Then we'll switch to Jan Rüth, about QUIC in IPv4, and Nicolas Kuhn is joining us to talk about feedback from using the QUIC 0-RTT BDP extension over satellite communications.
A
Oliver Gasser will come up after that with some Multipath TCP measurement work, and then Jason Livingood and one of his collaborators — who I think is possibly the first author — is joining us on comparative latency-under-load performance of broadband CPE.
E
Right, great. So, thanks very much to the chairs for agreeing to accept this presentation. It's more of an infomercial in the spirit of a lightning talk than perhaps some of the more detailed measurement research that is on the rest of the agenda. But I wanted briefly to use this opportunity to explain some of the rationale and the reality of a new resource that the Internet Society has produced recently, called Internet Society Pulse — and I subsequently realized that...
E
So, the Internet Society has to engage with quite a broad spectrum of stakeholders — folks interested in the evolution of the internet, the governance of the internet, and so on — and really what we wanted to try and do was bridge what we perceived to be a gap between that audience and a lot of really excellent and interesting measurement research, which tends to be the meat of the agenda at meetings like this.
E
So those were the sort of broad, high-level goals for this effort. What we're doing in terms of bringing data and an application to that is essentially trying to curate trusted third-party measurement data — so in most cases we're not really generating any new measurements ourselves.
E
We're trying to forge partnerships with folks that do have data, and "curating" is a nice word to use: essentially presenting the aspects of that data that we at the Internet Society believe are most relevant to the kinds of topics that we want to talk about.
E
And we want to use that data to examine trends over time — so longitudinal studies are very interesting — generating reports, and ultimately being able to tell data-driven stories about the internet: how it's changing, how it's evolving, and so on. And of course, it's a website, so it's available to everyone, everywhere. That's a broad topic, right — internet measurement, and bridging gaps, and all the rest of it — so we're trying to narrow it down a bit, and I've talked a bit about topics that are interesting to the Internet Society.
E
What we're calling focus areas: the two that we launched with, and the two that are on the website now in the most detail, are internet shutdowns and enabling technologies. We'll shortly be launching focus areas on internet centralization and on internet resilience, with a focus on Africa. And then we also hope to launch later this year — probably in Q4 — country and region reports, because those are tools that we think will be particularly useful to policymakers who are interested in how their countries compare with their immediate neighbors or with regional averages.
E
We are doing this entirely thanks to the support and partnership of quite a diverse range of organizations, and we are certainly very keen to add to this list of data partners. So I want to just take a minute to say thanks to these folks, because none of this would really be possible without their help.
E
Briefly, the focus areas are, as I mentioned: internet shutdowns, where we're trying to curate a database of shutdown-related events. On that page you can see a map that shows some visualizations of the intensity — in terms of the number and the duration of shutdown events — where they occur, and how they look on a sort of timeline. You can search by the location, the type of shutdown, and so forth. And over time we're hoping to populate this, essentially, with snapshots of data that evidence the fact that shutdowns occurred, so we're very dependent on partners like OONI, the Center for Applied Internet Data Analysis, and Access Now.
E
We're very much focused here on shutdowns where we can point to evidence: either because the government said they shut down the internet, or other parties said that they were ordered to shut down the internet, or where we can see that there was clearly an outage in BGP, or that traffic to prominent websites, for example, was blocked — that kind of thing. So we're very much trying to provide this sort of data-driven side of when and where shutdowns occurred.
E
And for each of these events, we provide a more detailed page where we try to identify relevant media coverage, related blog posts, and so forth. Possibly of more interest to this audience, we have a focus area on enabling technologies, which is basically anything that we think is really contributing to the future scalability and trustworthiness of the internet. At the moment we have data on IPv6, TLS 1.3, DNSSEC, HTTP/3, and HTTPS.
E
We want to add to that list of technologies over time, so I'm personally quite interested in any feedback on what technologies would be good to add to that list. But, as I say, you can browse that page and look at the data there.
E
It's really to try and convey some of the information around per-country differences in relation to these technologies and how they're evolving, regional differences, and the longitudinal differences — in terms of whether these technologies are rolling out seemingly quickly, or perhaps less quickly, like IPv6.
E
Well, as I mentioned, we're about to launch a focus area on internet centralization, which I know is a topic close to some people's hearts — there have been workshops and such on the topic. We are including analysis of seven different technology markets as part of this, looking at two different metrics of centralization, and we're tracking those over time.
E
There's a lot to unpack in this, and I have no time now to do that, but it's just to sort of whet your appetite for this focus area that will be coming up. It will hopefully be of some interest to folks who are curious: when people talk about internet centralization, how can we break that down, how is it evolving over time, and what does it look like?
E
We have a sort of definition of internet resilience; we're looking to combine a lot of different metrics into an index of resilience, and we're building that out in partnership with AFRINIC, looking initially at Africa — because I think we figured that if we can do anything useful in Africa, given the kind of infrastructure that's deployed there and the kind of data that we have about it, then we can probably do something useful anywhere, frankly. So look out for that later this year; that's particularly intended to help us engage the regulatory community in that region, on that continent.
E
The sort of takeaway message from my talk, really, is that we're very interested in feedback on this tool. As I say, the audience for this is pretty broad — and it's certainly broader than the folks in this room.
E
But, you know, if it's useful to you, if you think it would be useful to you, or if you have data that you think you could contribute, we'd be very keen to hear about that. And on — I think — my last slide, there's a bunch of ways you can get in touch with us: we have a presence on Twitter, and we have an email alias where you can just drop us an email with any comments or questions.
E
The website, as I mentioned, is pulse.internetsociety.org. We produce a monthly newsletter, which is basically a wrap-up of what's new on the site in the last month; you can sign up to that on the website.
A
Thanks a lot, Mat. I didn't see anyone in the queue yet, but in the chat Martin Duke asks a question about your internet shutdown slide — about scope, I guess. He said: by internet shutdowns, do you mean legal authorities, not DoS attacks — or is it any kind?
E
Yes, exactly. I don't have the definition in front of me, but it is — yes — it is not DDoS attacks, it's not ransomware. It's very much the government-mandated shutdown...
A
...that we're looking at there. Okay, okay, all right — thanks a lot, Mat, yeah. If you could stop sharing that, then we could bring up next — yeah — Jan Rüth's presentation.
F
All right, yeah — welcome, everybody. So today I want to show you the deployment of QUIC in IPv4, and also how you could get there — or at least get to HTTP/3 — via HTTP headers and via the DNS.
F
Well — so, we have been monitoring QUIC since 2018, and we've been updating — or at least trying to update — to the drafts whenever they got released. So we have quite a nice picture of how QUIC evolved during the standardization, and of what the deployment on the internet looked like.
F
What you can see here is the deployment in 2019. I split it into two plots because, as you can see already from the y-axis, the deployment is quite volatile depending on which versions you look at. What you can basically see here is that around February 2019 we had roughly 3,000 IPs that announced QUIC support, and you can also see, in the legend and the different colors:
F
There are different versions of QUIC being deployed and tested on the internet. We can also see very specific versions from some implementers — like "mvfst" from Facebook. And what you can also see, if you look at the plot on the right, is that we often have on/off phases, where larger parties activate their QUIC servers for a while and then stop them again — which of course also depends a little bit on when we updated our scanners ourselves.
F
What you can basically see here is that people really started experimenting with a lot of different versions over the year, which basically reflects the standardization of QUIC, where we see a lot of drafts being put out and a lot of different versions actually being experimented with on the internet.
F
What you can see is that we now have more like two million IPs supporting it, and maybe the most interesting part is if you start looking at the RFC 9000 line. This is the point in time where the RFC for QUIC was actually released, and you can see that roughly a month before, we start to see the first QUIC version 1 instances actually being deployed on the internet — even though the RFC wasn't out yet.
F
But after that, we can also see, in the brown dashed area, more QUIC version 1 deployments — but from a different implementer — being rolled out on the internet, and today it already has the largest share of those versions being supported on the internet. There are still some other pre-RFC versions deployed, though. When we look at who's actually deploying QUIC — here are basically the autonomous systems those IPs are coming from — what you can basically see here, in the top four or so:
F
We see the ones we would actually expect to see — the big CDNs: Cloudflare, Google, Akamai, and Fastly — and there's also EdgeCast a little further down.
F
We can also see some regular network service providers, but then, of course, also Amazon, and if you go down that list you find even more — like Facebook — but this is just the top 10. So this is what we find on UDP port 443. But the question is: how would you actually know that you should have been connecting there? From the start, most people actually deployed on 443, but that isn't mandated anywhere.
F
So the question is: if you have a web server running on top of QUIC with HTTP/3, how would you actually get there? There are actually different mechanisms, and one of those mechanisms is the alternative service (Alt-Svc) header in HTTP that you can use: you can basically tell the connecting client that there is a different service — for example via QUIC — available. And what we did here is we grabbed all domains in the .net, .org, and .com zones...
F
...and we requested the root of those sites at www.domain.tld. What you can basically see here in that table is that roughly eight to ten percent of hosts — at least of those that actually answered our HTTP request — are actually using this header. When we look further into those headers, here with the example of the .com zone, you can see the counts of the protocols and ports that are being announced.
F
What you can see here is that most of those that actually use the Alt-Svc header announce some version of HTTP/3 — the "-27" basically means it's draft 27, and so on. So, as you can see, the header is actively being used on the internet to signal QUIC support today.
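To make the mechanism concrete: an Alt-Svc response header (RFC 7838) carries a comma-separated list of alternatives such as `h3-27=":443"; ma=86400`. A minimal sketch of the kind of parsing a scan like this performs — the header values below are illustrative, not drawn from the measured data:

```python
def parse_alt_svc(value):
    """Parse an HTTP Alt-Svc header value (RFC 7838) into
    (protocol, host, port) tuples; an empty host means
    'same host as the origin'."""
    services = []
    for entry in value.split(","):
        entry = entry.strip()
        if entry == "clear":
            return []  # origin invalidates all advertised alternatives
        # first element is protocol-id="authority"; parameters (ma=, persist=) ignored here
        first = entry.split(";")[0].strip()
        proto, _, authority = first.partition("=")
        authority = authority.strip('"')
        host, _, port = authority.rpartition(":")
        services.append((proto, host, int(port)))
    return services

# A value like those counted in the .com zone measurements:
print(parse_alt_svc('h3-27=":443"; ma=86400, h3-Q050=":443"; ma=86400'))
# → [('h3-27', '', 443), ('h3-Q050', '', 443)]
```

Counting the first tuple element over all scanned hosts yields exactly the per-protocol/port table the talk describes.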
F
The record is, I think, a year old now, and we did something quite similar: we took the domains from the zone files, and we requested the HTTPS DNS record for those domains. What you can see in this larger table is basically the same information as before. In the second row, you can see the number of domains that actually give us a record, which is quite a lot —
F
I would say two to three percent, given that it's only a year old. And from those, you can see that most actually also give us an ALPN — which is, sorry, the protocol identifier and version. What you can also see here is that most actually claim to support HTTP/2, and roughly — depending on the zone file — 55 to roughly 60 percent show some version of HTTP/3, which has QUIC underlying it. We actually found one ALPN that claims "h3" in the .com zone.
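For comparison with the Alt-Svc approach, the HTTPS DNS resource record carries the same signal in its `alpn` service parameter. A small sketch, assuming records in standard presentation form — the example record is made up:

```python
def alpns_from_https_record(rdata):
    """Extract ALPN protocol IDs from the presentation form of a DNS
    HTTPS (SVCB-style) record, e.g. '1 . alpn="h3,h2" port=443'."""
    for token in rdata.split()[2:]:          # skip SvcPriority and TargetName
        key, _, val = token.partition("=")
        if key == "alpn":
            return val.strip('"').split(",")
    return []

print(alpns_from_https_record('1 . alpn="h3-29,h2" port=443'))  # → ['h3-29', 'h2']
```

Tallying those ALPN lists over all queried domains reproduces the kind of per-protocol counts shown in the table.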
F
If you look into this data, you will find that one of the drivers is actually Cloudflare, which has quite a large share of sites where it announces these HTTPS records. Yes — so this already brings me to the end. What have we seen? QUIC has been developed and experimented with on the internet quite extensively.
F
Early on, we mainly found experiments on the public internet from Google and Akamai, and now we see a lot more parties involved. Regarding the different mechanisms for discovering these services: we found that the Alt-Svc header in HTTP is fairly well deployed — eight to ten percent — and most of those actually show some version of HTTP/3. There is also already some support for the DNS HTTPS record. And that's it — thank you very much.
F
Dmitri — I forgot the last name — from LiteSpeed contacted us. Yeah, exactly: his IPs weren't being picked up by our scanners, and he suggested that we should use longer connection IDs. As far as my tests go — well, it was a year ago now — as far as I remember, I think we didn't get some Google servers when we used shorter connection IDs, but I'm not 100% sure.
F
That's also, I think, before we changed it, because there were some changes from 2019 to 2020 — there were some new drafts coming out. I think I changed the scanner to use only eight bytes, because I thought: why spend all those bytes? But that showed somewhat worse results, so I switched to 18, but—
A
Guys, I'm going to interrupt you — could you take this offline, or to the — sure — to the chat? Thank you so much; just because we only have about five minutes of slack to deal with here. And if you're interested: I see Lucas Pardue added a comment about a Cloudflare blog post on the HTTPS record that is probably pertinent here. So — thank you, thank you so much, Jan, for bringing that. Can we get Nicolas's slides up?
A
Yes — can you hear me? Yeah, we can hear you. Okay, you—
H
Got it — go. Okay, thank you. Thank you, everyone, for giving us the opportunity to present the work we have been doing on basically trying to test the 0-RTT BDP extension for QUIC over public SATCOM access.
H
Basically, there is too much content on these slides and I only have 10 minutes — the main idea was to have slides with lots of content so that anyone can just read them later on. The main point of this slide is that we are working on QUIC over satellite links, and our satellite links have a very high delay and a non-negligible capacity.
H
So we have a very large bandwidth-delay product, making it hard for congestion control to actually reach the available capacity. We are always interested in anyone who has the same issues that we have; we think there are a lot of cases where the BDP is high, and we think our approaches could help them. Basically, what our extension does is that, during a previous connection, you record the RTT and the bandwidth-delay product of the connection, and you share them when you want to resume a connection.
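To illustrate why a geostationary path stresses congestion control, here is the bandwidth-delay product arithmetic; the 50 Mbit/s and 600 ms figures are illustrative assumptions, not the testbed's exact numbers:

```python
import math

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

geo = bdp_bytes(50e6, 0.600)    # geostationary satellite: ~600 ms RTT -> 3.75 MB
terr = bdp_bytes(50e6, 0.030)   # terrestrial path: ~30 ms RTT -> ~188 KB

# Slow start roughly doubles the window each RTT from a ~10-MSS (~14.6 KB)
# initial window, so filling the GEO pipe costs many long round trips:
rtts = math.ceil(math.log2(geo / 14600))
print(f"GEO BDP {geo/1e6:.2f} MB; ~{rtts} RTTs (~{rtts * 0.6:.1f} s) of slow start")
```

Jumping straight to a previously learned congestion window, as the BDP extension allows, is what removes those start-up round trips.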
H
Basically, it's all about exploiting the characteristics of previous connections when you want to resume a new connection. We think this could be useful for other use cases besides ours.
H
You could optimize client requests, you could do a safe jump in the congestion window, and you could also share transport information across multiple connections. Before going into the results of the 0-RTT extension, I think it's important to quickly recall some results. These are all the details of the experiments we have been running; there is a link to the platform that has been exploited. We have been using the picoquic implementation with BBR, and basically we change the RTT and we download different file sizes for different —
H
Bottleneck-link capacities. I'm not sure if it's clear on the slide on your end — I realized that on my screen it was, and on the screen I'm seeing now it isn't — so I may not spend too much time there. The main point we wanted to make with this slide — because I'm working in the geostationary-satellite world, it's important —
H
It is important for us to recall that sometimes, when you have very low capacity, the RTT does not have much impact on the file you're transferring.
H
However, we have to be honest here: when you have a short file, such as the gray plot — if you have a one-megabyte file — increasing the RTT severely impacts the link utilization.
H
So this is what I said earlier on: we have a draft that specifies an extension to QUIC, which is called 0-RTT BDP. What it does is that, during the previous connection, the client RTT, the current congestion window, and the client IP are stored, and then they can be re-exploited for the next connection. So there are basically two possibilities, two ways this can be done.
H
So basically, the draft that we have describes lots of different solutions for how to do that in QUIC, and we are comparing two of them here in our experiments: one is using the BDP frame extension, and the other one is using local storage at the server. Both are available in picoquic — one was implemented by Viveris (Franklin and David) and the other one by Christian.
H
The difference with the local-storage, server-side solution is that basically nothing is sent to the client — the client cannot read the BDP frame and cannot reject it — but the server can still use the previously stored congestion window and RTT information.
H
I wanted to thank everyone for all the work on qlog, because that was very useful for us in implementing, and in checking that what we have been doing was working. So this is just an illustration, and just an opportunity to thank all the work on qlog — basically, this is a qlog screenshot.
H
There are different approaches to how the server can use the parameters — this will be discussed later — and it could be seen as dangerous to directly exploit the previously measured parameters. So, to verify that, we have access to a satellite broadband system.
H
So I have a picoquic server close behind my satellite terminal, on the right of the figure. For the satellite access network, I'm using a geostationary access with a "Pro 25" offer — and then there's the satellite ISP network, which I don't know anything about. And then I have a picoquic client instance in a data center, where we have virtual machines on the public internet with the OVH provider, and we upload 500 KB or one meg.
H
What we can see from the results — if we look at the orange curve here — is that, on average, without 0-RTT it takes us 11 seconds to upload 500 KB; it takes less than eight seconds with 0-RTT, and less than five seconds with the 0-RTT BDP extension. The difference between the 0-RTT and the 0-RTT BDP extension —
H
It depends more on how the information from the BDP frame is exploited: we can see that if we want to be safe, we can get less than eight seconds, and if we don't want to be safe, we can get less than five. There is still some optimization required here; we just wanted to highlight the best we could do.
H
If we didn't want to be fair, and just wanted to take the risk of breaking the internet — which we didn't — then... There may be some questions about the ongoing tests: basically, I've only run tests in the upload direction. That was strange, because the traffic was blocked in the download direction, and I didn't know why. I don't know if it was from the server — we have looked at that a lot — or if it was my ISP network that was blocking it; we didn't know.
H
The gain with the 0-RTT BDP extension would be less important in a terrestrial network, but we could not check that, because my home ISP was blocking the traffic in both directions. Anyway, it is safe to deploy as it is, because we did not break the internet. Basically, the draft proposes three methods; two of them are implemented in picoquic now and available for any cross-testing.
H
We are looking for implementers, and we would be very happy for anyone who wants to run more tests and experiment with our implementation — we would be very happy to discuss that.
A
Thanks, Nicolas. I put myself in the queue because I didn't see any others — I just have a quick question. This reminds me of AIX, where they used to cache the MTU to specific destinations in the kernel; you could see them, and there was an issue about how long you kept it — timeouts. I was just curious — I'm not familiar with this work — whether there are recommendations for how long, and whether it's kept locally or advertised back across the QUIC connection.
H
We do not provide a recommendation on the duration in the draft, but we recommend that anyone who wants to reuse the information runs a safety check, to see that the performance — the metrics that were measured earlier — are still valid. Basically, the network may change, and you may have measured something that is not true anymore, so you have to check that before actually using the previously stored values. This is detailed in the draft.
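A hypothetical sketch of such a safety check — the function name, the 25% tolerance, and the halving policy are my assumptions for illustration, not the draft's normative text:

```python
def usable_saved_params(saved_rtt, saved_cwnd, measured_rtt, tolerance=0.25):
    """Decide whether previously stored path parameters may be reused.
    In the spirit of the draft's recommendation: only trust the saved
    congestion window if the freshly measured handshake RTT is close
    to the RTT that was recorded alongside it."""
    if abs(measured_rtt - saved_rtt) > tolerance * saved_rtt:
        return None                # path likely changed: fall back to slow start
    return saved_cwnd // 2         # conservatively jump to half the saved window

print(usable_saved_params(0.600, 3_750_000, 0.610))  # same GEO path -> 1875000
print(usable_saved_params(0.600, 3_750_000, 0.030))  # now terrestrial -> None
```

The second call shows the case the answer warns about: the network changed, so the stored values must be discarded rather than reused.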
A
Okay, thanks so much — we're already about a minute over. Vidhi, if you have a comment, I'll leave a second to do it; if you have a question, can you give it to Nicolas outside the session? Thanks. So, thanks a lot, Nicolas — thank you, interesting — and thanks for giving us your next steps, so we can catch it in the working group. Let's switch to Oliver's presentation — we're just past halfway through our time here, and we've got two talks coming up. Hi, Oliver.
I
I work together with researchers from IIIT Delhi, as well as the University of [inaudible]. So, what is Multipath TCP? Multipath TCP is a multipath extension to TCP: it basically allows establishing multiple connections between two TCP endpoints. It was originally proposed in RFC 6824, which standardized the now so-called MPTCP version 0, and in RFC 8684, which was standardized in March 2020.
I
That was about MPTCP version 1 — so there are two different versions of MPTCP. What are some of the benefits of MPTCP over regular TCP? Well, higher aggregated throughput, improved resilience, and it can provide seamless mobility in handover situations.
I
MPTCP has been used for quite some time by industry — for example, in iOS devices, in the Siri app by Apple, as well as by Korea Telecom, to improve the throughput when combining LTE and Wi-Fi interfaces. Recently, MPTCP version 1 has also made its way into the mainline Linux kernel, so everything newer than Linux kernel version 5.6 supports MPTCP version 1.
I
Yet we have found no recent internet-wide study of MPTCP. So that's basically what we did, and we had two goals in that study. We wanted to understand how many servers actually support MPTCP — because that's kind of a prerequisite for clients to make use of it as well — and we also wanted to understand the potential impact that middleboxes — devices within the network that are kind of tinkering with the connection — could have on MPTCP connections.
I
Each of the two endpoints declares its support using the MP_CAPABLE TCP option — TCP extension, actually. And, importantly, as part of this MP_CAPABLE option, the client as well as the server both send their own 64-bit random key, and that's important, because we check the actual randomness of this key to identify potential middlebox interference. We used the ZMap scanner to run an IPv4 internet-wide scan; we also did a scan for IPv6 as well, but of course not internet-wide.
I
So when we send a regular MP_CAPABLE SYN packet, what we get back is a SYN/ACK packet with the MP_CAPABLE option from the host that we're targeting, and we can retrieve the end host's MPTCP key. We classify those which respond with such an MP_CAPABLE flag and a key as potentially MPTCP-capable — and I will come to it in a minute, why it's just "potentially" MPTCP-capable, and why there cannot be 100% certainty that these are really capable hosts.
I
So the question now arises: these hosts that we find — because that's quite a large number, especially in IPv4 — do they truly support MPTCP? In order to understand that, we looked at what kinds of middlebox interference can happen when establishing an MPTCP connection. The first, most basic case is that a middlebox just drops the MPTCP connection.
I
The second case would be that the middlebox mirrors the scanner's key — so the key that we send is basically mirrored back by the middlebox and not chosen at random. The third thing that can happen is that the middlebox acts as a real proxy, using its own random key in the response. And the fourth case would be a middlebox that just acts as a pass-through, which is kind of difficult to detect.
I
Right. So, in order to understand whether those hosts actually responded with their own MPTCP key, and whether that key was chosen at random —
I
As you see, there is a large peak on the left side of the key distribution, and on the right side, for IPv6, there are more hosts actually choosing their keys at random — but we still have a couple of outliers on the left side. And those on the left side are actually returning the key that we are sending, so they're basically mirroring our key, and that's why we see that spike.
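The mirroring check can be sketched as follows; the keys are made-up examples, and the set-bit threshold is a crude stand-in for the paper's actual randomness analysis:

```python
SENT_KEY = 0x0123456789ABCDEF  # the 64-bit key our hypothetical scanner put in MP_CAPABLE

def classify_response(returned_key):
    """Rough classification of an MP_CAPABLE SYN/ACK, as in the scan:
    a mirrored key betrays a middlebox echoing our option back; a key
    with almost no set bits is suspiciously non-random; otherwise the
    host is (only) 'potentially capable'."""
    if returned_key == SENT_KEY:
        return "mirroring middlebox"
    if bin(returned_key).count("1") < 8:   # a uniform 64-bit key averages ~32 set bits
        return "suspicious key"
    return "potentially capable"

print(classify_response(0x0123456789ABCDEF))  # mirroring middlebox
print(classify_response(0x1))                 # suspicious key
print(classify_response(0x9F86D081884C7D65))  # potentially capable
```

This is the first filter; as the talk goes on to explain, tracebox-style path probing is still needed to separate truly capable hosts from proxies.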
A
Oliver, just about three and a half minutes left, so you can decide how you want to spend your time. Yeah.
J
Thanks. All right.
I
So we can then identify the impact of these middleboxes from —
I
This already gives us kind of a good indication of how many middleboxes there are, using the technique of analyzing the sender's key. But we also went a little bit further: we used a tool called tracebox, whose goal is basically to identify middlebox interference along a certain connection. We then classified hosts into three different categories. They either truly supported MPTCP, which means that there was no middlebox interference, with regard to MPTCP, towards the target that supposedly supports MPTCP.
I
The second case would be that we find middlebox interference with regard to MPTCP, and third, we also had a couple of hosts which were unresponsive. What you can see here is that the number of truly capable hosts decreased significantly. If you remember, initially we had about 200,000 in IPv4, and now we have a couple of thousand in IPv4 and just a few dozen in IPv6.
I
Now, we also recently started to check for MPTCP version 1 support. As you can see on the left side, the handshake is slightly different, so we needed to adapt for that as well. For MPTCP version 1 support there are two interesting findings. The first is that we again have a very high share of mirroring middleboxes, middleboxes just replaying the MP_CAPABLE extension that we sent, without useful content. That's the first observation, and the second one is that MPTCP version 1 has a much lower actual deployment compared to MPTCP version 0, yeah.
So that's basically all. We have a paper which was published recently at the IFIP Networking 2021 conference, and there's a lot more in the paper if you are interested: we also did some passive data analysis, and we have a website, mptcp.io, where we publish regular measurement results, so feel free to check that out. That's all from my side, thanks.
A
G
The valence of these different choices that middleboxes can make is actually quite interesting, because it depends on the topology. If you're terminating the connection, it's fine to generate your own keys and all that, because if the path gets diverted around you, then you're kind of doomed anyway. But in a case where something is truly transparent, and all the sequence numbers and such go from one side to the other without being modified, then it's generally better to just pass on the server's values.
I
A
Sure, all right. All right, thanks so much, Oliver, for bringing that work to us.
K
Can you hear me okay? Thank you, it looks good. So, I'm Jason, and my colleague Shabnam will also present, and I'd like to thank her for being up super early in the morning in Istanbul. We will jump right in. I think I'm becoming that guy, like, I'm crazy: broadband performance is not just about speed. Working latency, or latency under load, is really critical to end-user quality of experience, and so we've been focused on this for some time.
K
So, the background here, and Shabnam will go into some of our results in a moment: we had a really fortuitous set of events that came together just prior to the onset of the pandemic. In January of 2020 we rolled out a CPE-based measurement system to millions and millions of our cable modems, and that includes a latency-under-load test as well as a whole suite of other tests. We started running about 700,000 tests per day, so one really nice part is that this was in place prior to the pandemic.
K
AQM is a baked-in, mandatory feature of DOCSIS 3.1, but we had one particular variant of a cable modem, which we called the XB6, that didn't have AQM working right, and so it was turned off, while the other variant had it turned on. From a measurement standpoint that was really wonderful, because we had this great control group with AQM turned on.
K
We had an experiment group with it turned off, and so we were able, at some point, to then turn it on on the other set. But we basically had two variants with the exact same hardware specification, just from two different manufacturers, so a very interesting apples-to-apples comparison was possible. So it was great then, in a way, from a measurement standpoint, I suppose, to see everyone move to
K
You
know
work
from
home
and
learn
from
home
with
all
of
the
attendant
network
demand
that
happened
at
that
time
really
created
a
perfect
laboratory
for
us
to
study
these
things,
and
so
we
had
you
know
many
many
tests
run
and
on
the
next
slide.
Shebnum.
If
you
can
speak
to
this.
D
So these graphs show the cumulative distribution function for the working latency (latency under load), displayed in milliseconds on the x-axis. Here we have mean and maximum values, but we have a new platform that includes other percentiles and also the transaction rates, to estimate the packet loss as well. What we can see here is that when AQM was disabled, we really had cases where we can see bufferbloat, and obviously that would affect the quality of experience for many latency-sensitive applications.
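A CDF like the ones on these slides is straightforward to compute from raw latency samples. Here is an illustrative sketch (not the production code) that also derives the kind of extra percentiles the new platform reports alongside mean and maximum:

```python
def latency_cdf(samples_ms):
    """Empirical CDF of latency-under-load samples.

    Returns (points, percentiles): points are (latency_ms,
    cumulative_fraction) pairs suitable for plotting, and percentiles
    summarize the distribution beyond mean/max.
    """
    xs = sorted(samples_ms)
    n = len(xs)
    points = [(x, (i + 1) / n) for i, x in enumerate(xs)]

    def pct(p):
        # nearest-rank percentile (one of several common definitions)
        return xs[min(n - 1, int(p / 100 * n))]

    return points, {"p50": pct(50), "p90": pct(90), "p99": pct(99)}
```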
D
So our main goal was to assess the latency in our access networks, between the cable modem and the CMTS, and optimize the active queue management settings to improve the latency and jitter. But we also made sure that the throughput and the packet loss requirements were met. So we tested and monitored all these quality-of-service levels, and we also collected gateway metrics, customer calls, tickets, and truck rolls, to confirm that the latency and jitter improvements we deployed did not create any other quality-of-experience issue.
D
For these graphs we used bidirectional latency under load: in both downstream and upstream we had TCP loads, and we can also see the impact of downstream TCP load on the upstream TCP acknowledgements.
D
So we can see the downstream load increases the latency for the upstream traffic. Before we deployed our latency-under-load (working latency) platform,
D
we validated the test parameters, because the speed case is more widely known, but the test parameters that are optimal for a speed test may not be optimal for latency: the TCP data may not fill up the queue, even though it may fill up the full speed-tier rate.
D
So it's very important to have an accurate latency measurement within a short duration, and it's important to have a common, accurately defined way, a standard approach. Another challenge is that we started to measure the latency for higher speed-tier rates; we are trying upstream rates at one gigabit per second or more, and these high-speed tiers have other challenges when we want an accurate working-latency measurement.
D
So this is why it's important to define a common platform that can work for different speed tiers, network architectures, and also different configurations.
K
Certainly our recommendations are that ISPs and equipment makers should continue to deploy AQM. But it also struck us, as we went through this, and especially as we were working on the measurement platform itself, that while there are some standards around things like speed tests, when you look at how people have implemented this in the wild, whether it's SamKnows, Ookla, fast.com, or whatever,
K
These
are
all
you
know
wildly
different,
and
so
it
makes
it
very
difficult
to
compare
across
tools
because
it's
unclear
sort
of
what
span
standard
or
specs
they
use
what
variables
and
so
on.
So
it
seems
like
there's
an
opportunity
for
some
standardization
there
and
then,
of
course,
that
makes
possible
eventually
data
sharing
and
comparison.
K
So I think that's it. There's of course a longer paper that you can click on there, the second link, and you can read more about the measurement system itself in the two top links. So, happy to take any questions. Thanks, guys, we have Jonathan.
M
Okay, so I wanted to ask, to confirm, exactly which AQM is in use at each end of the connection, both at the CPE and at the head end, and whether ECN is enabled in each of those.
D
So, for the cable modem, the AQM is the DOCSIS-PIE defined in the DOCSIS 3.1 specifications, so both cable modems have the same implementation as defined in the specification.
D
We also have some CMTSs at the head end that have their own AQM, so they have their own proprietary approaches, not per the specifications. We also optimize their values, so it depends on the cable modem versus CMTS models, but the main part of the AQM is the same. And there was another question, I think.
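For context, DOCSIS-PIE (RFC 8034) is the AQM the DOCSIS 3.1 specification mandates in cable modems; its core is the PIE drop-probability controller from RFC 8033. A heavily simplified, single-step sketch of that controller follows; the constants are illustrative defaults, not the DOCSIS-tuned values:

```python
def pie_update(p, qdelay_ms, qdelay_old_ms,
               target_ms=15.0, alpha=0.125, beta=1.25):
    """One control-interval update of the PIE drop probability.

    p rises when queuing delay is above target or trending up, and
    falls when delay is below target and shrinking. Real PIE (and
    DOCSIS-PIE) adds auto-tuning, burst allowance, and more.
    """
    p += (alpha * (qdelay_ms - target_ms)
          + beta * (qdelay_ms - qdelay_old_ms)) / 1000.0
    return min(max(p, 0.0), 1.0)  # probabilities stay in [0, 1]
```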
M
Yes, it was whether you had ECN enabled.
D
A
Yes. Okay, we're about at the end of time. Let's take Al's question, thanks Jonathan, and then we'll wrap up the session.
L
Thanks, both of you, for sharing this data with us. My question has to do with the test stream that you used, defining, you know, working load. I think that's going to influence the results a bit, and maybe you can describe it briefly. Thank you.
D
The current platform that we are using uses iPerf3 for the TCP load, and then netperf for the UDP ping latency measurements.
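The measurement idea behind that answer, a bulk TCP load running while a lightweight probe measures round-trip time, can be sketched as a parser plus a summary. The `ping`-style output parsing here is an assumption for illustration only, since the speaker said the actual probes use netperf:

```python
import re

def parse_rtts(probe_output):
    """Pull per-packet RTTs (ms) from ping-style output lines such as
    '64 bytes from 192.0.2.1: icmp_seq=1 ttl=57 time=12.3 ms'."""
    return [float(t) for t in re.findall(r"time=([\d.]+) ms", probe_output)]

def working_latency(idle_rtts, loaded_rtts):
    """Report idle latency next to latency under load; the gap between
    the two is what the AQM work in this talk aims to shrink."""
    mean = lambda xs: sum(xs) / len(xs)
    return {"idle_ms": mean(idle_rtts), "loaded_ms": mean(loaded_rtts)}
```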
D
K
And that does point out, I think, as well, that we were sort of experimenting a little bit with what we arrived at, and I don't think there's a lot of great knowledge out there about what the right thing is at scale.
A
All right, thanks, Al and all.
A
Thanks, yeah, thanks Shabnam and Jason for bringing that work to us, and thanks again to all the contributors. That's our whirlwind meeting at the end of an IETF week. Have a good weekend, folks, and join us next time.