From YouTube: IETF114 ANRW 20220726 1400
Masks are mandatory, so I've been asked to explicitly remind people of the masking policy at IETF this year, across the board. If you are attending in person, you are required to wear a regionally approved mask, so N95, KF-94 and the rest of the certified equivalents. I understand the IETF is supplying masks as well if anybody needs one, but please ensure that you are wearing a mask at all times, even if you are asking a question of presenters. The exception, of course, being if you are presenting: you are not required to wear a mask in person.
Note Well: standard stuff. The IETF follows the intellectual property rights disclosure rules as listed on this slide, and I believe you will see this in most presentations, so please make sure you're aware of it. If any IRTF contribution is covered by patents or patent applications, you must disclose that fact or not participate in the discussion. The IRTF expects you to file such IPR disclosures in a timely manner; I'm hoping you'd do so in a period measured in days or weeks, not months. The IRTF prefers that the most liberal licensing terms possible are made available for IRTF-stream documents, and the definitive information is in RFC 5378.
Moving on to audio and video: of course, the IRTF routinely makes recordings of online and in-person meetings available online. If you're participating in person and choose not to wear a red "do not photograph" lanyard, then you consent by default to being recorded.
And if you participate online and turn on your camera and/or microphone, then you consent to appear in such recordings. On privacy and the code of conduct: as a participant in any IRTF activity, you acknowledge that written, audio, video and photographic recordings of you might be made public, and personal information you provide to the IRTF will be handled in accordance with its policy.
Yes, I'm told I'm breaking up left and right, so hold on while I mute and unmute.
Is this any better? Hoping it works; let's find out. As a participant or attendee, you agree to work respectfully with other participants, and please contact the ombudsteam if you have any questions or concerns about this. Of course, the code of conduct is in RFC 7154 and the anti-harassment procedures in RFC 7776.
A recap of the goals of the IRTF: it's a body focused on longer-term research issues related to the Internet, while the parallel organization, which is the IETF, focuses on shorter-term issues of engineering and standards-making. The IRTF conducts research; it is not a standards development organization, but of course it's important to help bridge the two sides.
So while the IRTF can publish informational and experimental documents in the RFC series, its primary goal is to promote the development of research collaboration and teamwork in exploring research issues related to Internet protocols, applications, architecture and technology. The primer on the IRTF for IETF participants, of course, is RFC 7418.
So, moving on to the fun stuff: welcome to ANRW, everybody. I'm really glad you could make it, both those who are in person and those attending remotely; it's lovely to have people join. I want to give special thanks to the program committee this year in particular, because it was a very competitive season.
We had four or five premier venues all running at about the same time, so a particular thanks to the people who volunteered for this committee, because everyone, I know, had multiple commitments around the same time. On a personal level, someone who couldn't be with us is my co-chair, TJ, and to be honest, TJ deserves so much of the credit for actually putting this together. Without him,
I honestly couldn't say where this would be. So TJ, if you're in the room, thank you; and if you're not in the room, everybody should know that TJ did a marvelous job here.
Also, thanks to the review task force, Ethan and Johanna. They did a marvelous job in looking over reviews, making sure they were in order: positive, constructive and so on. So thank you both for that. And, of course, thanks to the sponsors of ANRW specifically, which are Akamai and Comcast.
So, logistics and links: the program, the paper PDFs and so on are all available online. I'm breaking up once again, one sec.
Okay, I'm going to try and wrap this up quickly so I don't continue to break up. I'm presenting the program overview in reverse order. This afternoon we have a set of invited talks to broach a new potential area of investigation for the IRTF, around formal methods and verification in protocols and development. So please come back this afternoon, because I anticipate these will be a fabulous set of talks. But in this session we have a keynote and then the four papers that were accepted for publication.
I want to first introduce Lucas Pardue, who has joined from Cloudflare, and he's going to give this interesting talk called Layer Four and Three Quarters. I work with Lucas on a regular basis, and he and I have frequent conversations, and the questions that come up are actually very interesting.
The layering model in particular has done very well, as we all know, for the evolution and sustainability of the Internet, but I think increasingly, in many domains, we are starting to see that the exchange of information, or signaling, or cooperation between layers is increasingly important or relevant. I saw Lucas give this talk a few weeks ago, and I know he was really excited to give it at this venue. So Lucas, if you are around, I am going to hand off to you as soon as I load the slides, one sec.
I can hear you just fine. Okay, welcome. The slides I'm not seeing yet. And for the... my sharing, here we go.
Okay, that was submitted last night; maybe we've just crossed paths on that. Okay, it wasn't approved. Let me email you a copy.
Colin, if you're there, I'm tempted to switch the order of events and do the keynote as a finisher. The time-series keynote.
There we go, I've got control of these slides, excellent. Thank you very much, everyone, for your patience, and thank you to the chairs for inviting me to do this talk. I was super excited to come and do this in a venue with a live audience and to be there in the room. Unfortunately, I came down ill, so I'm going to preface this whole caboodle of delays with: I'm ill, so be kind to me.
So I'll be quick, because we spent some time trying to get to this point. I'm Lucas Pardue, an engineer at Cloudflare on the protocols team, working on technologies like QUIC and TLS, and HTTP/2 and HTTP/3, these kinds of layer-7-and-downwards protocols. I kind of take the view that the layering model isn't brilliant for some of the kind of work that my team does. It helps, but it's also an impedance sometimes, because we're often talking across layers, about problems that exist somewhere in the ether.
So this talk is kind of a tongue-in-cheek way to communicate to people the kind of problems that we face. Briefly, just in case you don't get the reference in the talk title: it refers to Platform Nine and Three Quarters, which comes from the Harry Potter universe. If you came just to get Harry Potter memes or jokes, you'll be pretty disappointed, mainly because today's talk is focusing on layer four and three quarters. That's mainly because our layering model doesn't go up to nine.
It's restricted to seven. This is a familiar model that we're used to; some people might call it a layer cake. We know that the cake's a lie and we should just ignore it, at least that's my opinion anyway, and the reason is because it's detached from reality. It's a work of fiction. You can pretty much do anything with it that you'd like, and you'll be
fine. Sorry, I'll just go back here; I went to the next slide and missed it myself. Obviously I've got a bit of brain fog here. Yeah, I'll just flip back and forth between these two slides. What's the difference? I've flipped it upside down. It's fairly noticeable; there's hardly anything that would have changed here, and that's kind of funny, because it reminded me of something else. Some of you might know, I'm Welsh. I grew up in Cardiff, the capital of Wales.
D
They
did
the
layers
in
this
order
reminded
me
of
something
and
that's
cardiff,
central
train
station,
and
you
might
say
what
like.
Why
is
that
the
case?
And
if
you
look
closely,
it's
probably
too
small
to
see
here,
but
if
you
look
at
the
platform
numbers
they're
in
the
kind
of
central
column
of
the
the
image
here
they
go
from
one
through
to
eight.
I
see
jonathan
in
the
chat
mention
layer.
D
Eight,
we're
not
gonna
talk
about
that
one
today,
but
maybe
some
of
the
other
things
that
you
see.
If
you
very
quickly
look
at
that
image-
and
I
think
yeah
beyond
the
the
ordering
that
we
have
here,
the
interesting
fact
is
well
not
fact.
So.
The
interesting
observation
that
I
had
is
that
we
could
analogize
kind
of
transport
or
internet
protocols
in
some
way
to
kind
of
train
stations
and
trains.
So bear with me on this analogy for a bit. I'd say it's indicative of communication stacks, insofar as, hundreds of years ago, people you'll never get to meet had some lofty goals in improving the movement of things from A to B, and they made well-intentioned design decisions that laid the foundation for generations to come.
Those train tracks are pretty good structured guides for the movement of things, and the vessels that move on those tracks can carry various kinds of loads and can adapt to the changing needs of consumers. But sometimes those plans don't play out the way they'd hoped, or the needs change in such a way that the tracks and the foundations aren't quite right. And because train tracks and stations are physical things,
it can be hard just to rip them out and swap something in. That takes time and planning and dedication, and maybe you're stuck with what you designed, and that's it. And that comes back to our layer cake. Sometimes we have these layers and they don't really serve much purpose to how people use the internet today. So for me, I just kind of removed the existence of these layers from my mind. I could just take layer five out, throw it in the bin, and everything's fine. You might say, well, you can't do that in real life, but actually that's something Cardiff Central train station already did. They did this in the 60s, well ahead of OSI; they were really paving the way, to use another pun.
D
And
like
or
ignoring
unused
parts,
what
about,
if
you
want
to
extend
things
beyond
the
initial
design,
intent
card
central
train
stations
like
ahead
of
us
again
here
they
needed
to
add
another
platform
for
carrying
different
kinds
of
traffic,
so
there
is
actually
a
platform
zero.
At
carter,
central
I
used
to
catch
a
train
there,
myself.
D
What
about
other
things,
they've
also
crammed
in
several
lines
between
platforms
I
mean,
like
I
don't
know
who
like
in
terms
of
end
users,
that's
supposed
to
serve,
but
it
it
says
different
kinds
of
traffic
yeah.
I
don't
know
what
that's
about
I'm
sure
the
train
nerds
in
the
room
probably
have
a
very
well
justified
explanation
for
me.
You
might
be
laughing.
You might be very confused by all of this, but what I'm attempting to illustrate is pretty straightforward: there are only really two and three quarters of layers of things that we need to care about in this room today, for this talk. Those familiar with Harry Potter will be familiar with the obvious thing to do in this circumstance: you just put those two things together and run at them as fast as you can to see where the boundary will take you. So, trying to return back to reality somewhat.
D
In
my
view,
this
these
are
the
only
layers
that
came,
and
this
is
the
boundary
between
them.
You
might
balk
at
layer
3
being
removed.
Surely
you
know
the
internet
protocol
and
the
internet
are
important.
I
would
tend
to
agree
a
lot
of
the
the
the
work
at
cloudflare
does
is
about
helping
to
build
it
in
better
internet,
and
we
tried
to
do
that.
D
So
it's
not
completely
opaque
to
us
or
or
not
there,
but
when
it
comes
to
a
focus
at
the
internet
or
you
know
the
web
and
the
way
that
people
and
humans
interact
with
that.
I
don't
really
want
to
spend
any
mental
energy
on
ip
details
and
that's
because
end
users
don't
really
care
they.
They
don't
access
services
via
ip.
The
direction
of
travel
is
that
they
they
should
be
becoming
well.
The
the
world
is
probably
becoming
less
reliant
on
client
ips
anyway.
D
We're
seeing
this
in
the
work
in
the
itf
in
terms
of
oblivious
protocols
for
various
different
application
protocols
on
the
top
of
them,
where
we
we
can.
D
The
traditional
use
of
client
ip
is
a
vector
for
authorization
or
authentication
is
kind
of
not
not
so
good
in
terms
of
privacy
and
so
on.
So
we
won't
dig
into
that
topic
today.
It's
a
very
rich
idea,
maybe
some
of
the
other
talks
will,
I
I
don't
know
but
yeah.
This
is
one
of
the
few
reasons
I
think
layer
3
can
be
ignored
safely.
D
So
I
just
want
to
move
on
from
that
on
to
to
some
of
the
other
reasons,
I
think
this
is
important.
It's
because
it's
what
I
spend
most
of
my
day,
job
on
and
the
team
I
work
in
does
I
talked
about
some
of
the
protocols
we
support
on
the
edge
and
that
our
team
is
responsible
for
the
servicing
and
there's
other
things
there
like
websocket
and
grpc.
D
If
you're
not
familiar
with
those,
I
won't
be
going
into
much
much
detail
on
them,
but
they
help
give
a
picture
that
many
people
think
it's
the
web
and
that
http
is
a
protocol
that
powers
the
web,
but
there's
other
the
services
that
build
upon
these.
They
might
have
user
agents
or
end
users,
but
they
could
be
api
traffic,
not
necessarily
just
people
browsing
page
page
to
page
and
the
reason
that
you
know
a
company
like
cloudflare
it
does
what
it
does.
D
It
provides
intermediation
between
clients
and
origins
and
that
can
provide
speed,
reliability
and
improvements,
but
also
provides
scale
to
clients
from
a
distributed
network,
but
also
a
scale
in
a
different
term
to
the
origins
for
people
who
operate
these
things.
It
could
be
difficult
to
keep
up
with
the
latest
trends
in
technology,
so
there's
a
long
tale
of
cheap,
shared
hosting
or
vps
services
that
have
no
hopes
of
kind
of
getting
their
stacks
to
support
these
latest
and
greatest
technologies
and
and
the
by
having
an
edge
in
front
of
them.
you can enable that a bit more quickly, or provide the kind of serverless edge compute platform that some of you might be familiar with. And beyond the layer 7 protocols I've mentioned, the team's also responsible for a product called Spectrum, which is more at layer 4: things like TCP and UDP.
We do a few layering violations to help provide some added value on top of those, and these are layering violations in the good sense, to provide some value for people. We provide some specialized support for other protocols that maybe sit between,
you know, layers four and seven. You see here on the right-hand side SSH, RDP and Minecraft, which got me interested. I didn't know much about Minecraft at the time when we took on ownership of the Spectrum product, and if you look, there's a little asterisk that says Minecraft Java Edition is supported, but Minecraft Bedrock Edition is not. My curious brain made me want to understand the difference there. Again, that's a whole talk in its own right. If you're not familiar with Minecraft, go visit these links; it's quite interesting what the differences are. It basically boils down to TCP versus UDP, but I won't digress here too much. The interesting thing is there are Wireshark dissectors. Those previous links have pages upon pages of schema and definition of the protocol language. They're not at the IETF, but the amount of detail there is quite astonishing if you're not familiar. I was thinking Quake was the only supported game protocol in Wireshark, but no, there are indeed dissectors. But anyway.
That's kind of an ambling, abstract way of talking about this topic. If I've lost you, I'm sorry; if you want to tune in again, great, because we're going to get into some nitty-gritty details right now and talk in more specific terms here. So let's talk about HTTP.
This is a layer 7 protocol. Sometimes we forget what the acronyms actually stand for. The question would be: does this mean the hypertext transport protocol? Sometimes I hear people say that. I will quote Mark Nottingham from IETF 99, who said that Roy Fielding knows whenever you call it the hypertext transport protocol, because it means the hypertext transfer protocol.
We might be splitting hairs to say, well, what's the difference? What is a transport protocol? I was speaking recently with Marwin, even discussing whether UDP is really a transport protocol or not. But again, I don't want to digress. To help frame some of this discussion and why I think it matters,
I want to talk about the view from Cloudflare's edge, and some of the kinds of traffic profiles that we see from real users contacting our edge on a daily basis. To get into that, first of all, we offer something called radar.cloudflare.com.
D
There's
a
website
containing
you
know
up
to
up-to-date
statistics
about
various
things
that
our
ed
sees.
This
is
just
a
snippet
of
hp
and
tls
traffic
taken
from
april,
so
it's
a
little
bit
out
of
date.
But
if
you,
if
you
were
to
check
that
today,
you'd
see
some
maybe
similar
numbers,
maybe
different
numbers
so
to
go
and
take
a
look
but
yeah.
D
Just
the
the
high
level
summary
of
this
view
is
that
it's
quite
clear
that
most
of
the
traffic
that
our
edge
sees
is
encrypted
hb2
and
hp3
99
percent
of
the
traffic.
According
to
this,
this
view
at
that
time
was
https.
So
that's
encrypted.
Only
nine
percent
of
the
traffic
was
hp,
1.x,
so
1.1
we'll
call
that
just
for
convenience.
D
So
when
you
combine
those
two
figures
together,
that's
that's
a
tiny
tiny
proportion
of
requests
to
cloudflare
are
being
made
over
plain
text,
hb
1.1,
I
think
sometimes
when
when
we
think
about
http
in
the
web,
this
is
presumption
about
how
it
worked,
maybe
a
decade
or
so
ago,
and
that's
very
different
to
today.
So
when
we're
thinking
about
how
to
design
and
scale
services.
C
D
Like
an
older
view
of
the
protocol,
those
assumptions
don't
hold
up
anymore
and
I
think
well.
What
I
will
try
and
illustrate
through
the
next
few
slides
is
that
the
tooling
and
the
methodology
we
have
is
maybe
not
entirely
fit
for
the
purpose
of
of
how
these
layer,
seven
protocols
have
evolved
and
are
changing.
D
And
to
maybe
help
illustrate
that
sometime,
it
will
in
some
way
very
briefly
I'd
like
to
think
or
encourage
people
to
think
about
how
these
protocols
look
on
the
wire.
So
the
astute
amongst
you
will
understand
maybe
that
I've
flipped
the
layer
kick
upside
down
now
so
from
the
bottom.
We
have
layer,
seven
in
terms
of
this
http
semantic
if
you're
not
familiar
with
the
protocol.
It's
a
request
response
protocol
where
clients
send
request
messages
and
server
sends
response
messages
back
to
each
other.
These
are
kind
of
an
abstract
thing.
D
They
have
requirements
and
constraints,
but
they
don't
have
a
strict
serialization.
That's
the
duty
of
the
http
version
itself
to
describe
how
those
abstract
semantics
map
onto
the
wire.
So
on
the
left.
You
know
we've
got
1.1
where
we
take
this
this
message
and
we'll
convert
it
into
an
ascii
serialized
form,
and
we
put
it
into
a
tls
record
because
we
only
care
about
encrypted
traffic
here.
D
That
would
then
span
multiple
tcp
segments
in
the
middle
column,
weekends
have
the
same
hp,
request,
response
message
semantics,
but
instead
of
ascii
these
map
into
the
binary
frames.
So
we
have
this
splitting
up
of
headers,
so
the
kind
of
the
metadata
for
the
message
and
the
the
data
frame,
which
carries
the
message
content,
but
again
those
get
mapped
into
a
tls
record
and
sent
over
tcp
segments
very
in
a
very,
very
similar
way.
D
On
the
right
hand,
side
is
probably
the
most
different
aspect
here
with
this
is
this
is
quick
the
quick
transport
with
messages
getting
mapped
into
frames
very
similar
to
hb2,
but
before
they
go
down
into
to
anything
below
them,
they
get
mapped
into
a
quick
stream,
which
is
a
transport
primitive.
That's
available
that
solves
all
of
the
the
head
of
line
blocking
staffing,
quick
that
we're
not
going
to
talk
about
today,
but
those
map
into
quick
packets
and
they
get
sent
in
udp
datagrams.
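To make the wire mappings concrete, here is a minimal illustrative sketch, not taken from any implementation, of the same abstract request in two of those serializations: the HTTP/1.1 ASCII form, and the fixed 9-octet frame header that prefixes every HTTP/2 frame (the HPACK-encoded header payload is elided; the field layout follows the published specs, while the helper names are invented here).

```python
import struct

def h1_serialize(method, path, authority):
    # HTTP/1.1: the abstract request becomes an ASCII request line plus
    # header fields, terminated by a blank line.
    return (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {authority}\r\n\r\n").encode("ascii")

def h2_frame_header(length, frame_type, flags, stream_id):
    # HTTP/2: every frame starts with a fixed 9-octet header:
    # 24-bit payload length, 8-bit type, 8-bit flags,
    # then a reserved bit plus a 31-bit stream identifier.
    return struct.pack("!BHBBI",
                       (length >> 16) & 0xFF, length & 0xFFFF,
                       frame_type, flags, stream_id & 0x7FFFFFFF)

ascii_req = h1_serialize("GET", "/", "example.com")
# A HEADERS frame (type 0x1) with END_STREAM|END_HEADERS flags (0x1|0x4)
# on stream 1; a real sender would follow this header with HPACK payload.
headers_frame = h2_frame_header(0, 0x1, 0x5, 1)
```

In both cases the bytes then go into a TLS record and out over TCP; only the shape of the serialization differs.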
So, keeping in mind that this is how it looks, let's look at some statistics very quickly. Radar offers this high-level global view of traffic trends. This is an interactive map, so I'd encourage you to go visit and play around; maybe look at your country or where you reside, interesting things like that. Dark orange is where the share of HTTP/3 traffic is highest, going to lighter for lowest. Maybe that distribution surprises you. I'm sure it's changed today due to traffic trends, etc.
D
Maybe
it
doesn't.
Let
me
know
in
the
chat
back
in
may
2021
quick,
the
transport
protocol
launches
our
c9000
cloudflare
has
been
supporting
this
for
a
long
time.
We
we
did
a
blog
post
at
the
time
it's
hard
to
see
here,
but
the
traffic
show
at
the
time
was
twelve
point
four
percent
or
call
it
twelve
if
we're
rounding
down
and
in
the
meantime,
hp.
Three
draft
has
finally
been
published
this
year
and
we're
kind
of
keen
to
take
a
long-term
view
of
the
usage
just
as
a
headline
statistic.
D
In
that
time
of
the
last
year,
we've
seen
the
traffic
global
requests.
Sorry,
the
the
share
of
hp3
traffic
on
a
global
scale
rise
from
about
12
percent
to
over
25
percent,
which
is
you
know,
a
doubling
which
is
pretty
decent,
I'd,
say,
and
but
in
some
countries
that's
even
higher.
So
I
do
encourage
you
to
go.
D
Take
a
look
and
have
a
deep
dive
or
visit
that
blog
post,
where
we'll
dig
into
some
long-term
trends
over
time
to
show
maybe
the
the
wax
and
wane
of
traffic
profiles
over
the
last
year.
I'll
summarize
them
very
briefly
here,
but
to
look
at
this.
D
What
we
have
is
3
in
the
blue
that
you
can
see
shortly
after
the
rfc
9000
came
out,
hp,
3
traffic
for
us
overtook,
hp,
1.1,
and
it's
just
continued
to
grow
ever
since
at
the
top
http
hb2
is,
has
kind
of
been
bumbling
along
at
the
same
rate
and
one
point
one:
is
it's
still
there,
but
it
is
very
small
percent
just
to
describe
these
briefly
if
I
didn't
cover
that
these
are
measured
in
in
requests
per
second,
that
our
global
edge
sees
for
human
or
likely
human
traffic.
D
D
If we break those requests down into different user agents, we know that support doesn't roll out uniformly, and that different work needs to be done, different priorities, etc. Just looking at these three browsers, Chrome, Firefox and Safari, we can see different kinds of rollouts happening there. Say, Chrome is now at around 60 percent.
Firefox is kind of similar today, or back in May at least, but you can see there's been some gradual ramp-up or rollout. Some of this is affected by Cloudflare customers, who control whether they offer HTTP/3 at the edge, but by measuring everything globally and then looking at these more specific details, we can get a feel for how the different user agents are also rolling out support, maybe experimentally or by default, etc.
You can see on the bottom left there Safari is slightly behind Chrome and Firefox. I understand these numbers might be changing soon, as Safari rolls out more widespread support for HTTP/3 by default; that remains to be seen, so stay tuned. This is how it looks in terms of global requests over time; I'll just gloss over this one slightly.
If we take Chrome out of the picture, you can see more clearly the different kinds of changes that are happening for these browsers. The blog post goes into more detail on some of the observations here, so I won't. But I would like to focus on this slide a bit more, because this looks at bots, you know, good bots and bad bots.
D
We
we
might
spend
a
whole
session
on
that
one,
but
these
typically
bots
related
to
search
engines
or
social
media
entities
who
like
to
access
the
internet
to
do
stuff.
So
you
can
see
here
I
mean
the
headline:
is
that
there's
hardly
any
all
zero
hp3
usage?
They're
very
much?
D
You
know
in
a
1.1
or
2
world,
you
can
look
at
some
of
the
the
public
information
about
how
these
bots
decide
what
version
to
pick
it's
pretty
opaque
and
obtuse.
I
struggle
to
get
any
more
answers
or
insight
into
that
I'd
love
for
people
to
to
have
a
chat.
If,
if,
if
anyone
knows
anyone,
but
aside
yeah,
it's
it's
an
interesting
in
respect
to
hp3
usage,
is
pretty
high
for
users
and
low
for
bots.
D
So
with
that
all
in
mind,
binary
framing,
as
I
mentioned,
for
hp,
2
and
hp
3,
it
was
a
solution
to
a
problem
to
let
us
mitigate
some
stuff,
I'm
not
going
to
talk
about,
but
mainly
about
performance
and
head
of
blind
blocking
issues
that
can
affect
web
page
loading,
and
it
does
that
job.
So
it's
a
solution
to
all
our
problems.
D
Well,
text
based
http,
hb,
1.1
or
or
other
older
versions
have
a
pedigree
providing
quick.
Some
recent
examples
include
this
idea
of
request
smuggling
an
attack,
that's
kind
of
researched
lately
I'll
go
into
the
details
here
for
brevity,
but
despite
all
of
the
baggage,
it's
pretty
well
exercised
and
understood
by
technically
minded
folks,
whoever
care
to
look
into
this
stuff
around
hp
performance
and
behaviors
and
quicks
etcetera.
D
Yeah,
like
I
said,
hp,
2
and
3
constitute
the
majority
of
web
traffic.
As
far
as
we
see.
So,
how
can
we
best
characterize
the
quotes
of
those
specific
hp
versions?
There's
a
bunch
of
tools:
hd
spec,
h3,
spec,
h2
load,
there's
chrome
net
logs,
there's
q
logs,
which
is
a
specification
and
that
we're
we're
working
on
in
the
quick
working
group,
along
with
some
tooling,
to
support
that.
There's,
probably
other
things.
I'm
not
familiar
with.
Maybe you have a favorite, I don't know. These aren't so much analyzing the protocol as analyzing conformance, or implementation behavior specific to protocols: whether they implement the MUSTs or SHOULDs or MAYs, etc., and how those things interact with each other. They're all pretty good, useful tools.
D
I
use
them
myself,
but
in
general
I
would
say
in
my
opinion,
the
general
coverage
of
h2h3
as
an
internet
community
seems
lacking,
given
that
the
the
widespread
support
for
this
protocol
when
things
go
wrong,
people
come
to
me
to
ask
for
help
and
I'm
always
happy
to
give
it
to
them.
But
I
I
think
the
scaling
of
that
knowledge
and
the
debacability
is
something
that
we
could
do
better,
and
this
talk
is
part
of
that
activity.
So
I
want
to
like
use
some
examples
to
illustrate
this.
D
I
don't
want
to
pick
on
chrome
solely,
but
most
people
might
have
seen
these
kinds
of
error
messages
hands
up.
If
you
have
this,
this
often
just
gets
shown
or
something
like
this
when
anything
goes
wrong.
So
you
know,
if
you've
seen
this
message,
how
did
you
solve
it?
Did
you
just
hit
reload?
Probably
and
then
it
went
away,
but
if
you
look
online
and
you
search
for
these
things,
you
hit
interesting
websites
that
give
you
interesting
recommendations
like
flush,
the
speedy
pockets,
which
you
know.
D
There's another website that recommends turning your antivirus off and on again; I don't know if that's funny or not, it's a bit scary in my mind. And then we get to QUIC, the latest and greatest. One of the recommendations, near the top of a Google search, is that if you're stuck after all of these different methods that don't actually help anything, then contact Google customer support. I don't know if any of you have ever tried to contact Google customer support, but you would maybe have mixed results,
D
I
would
say,
but
I
I
actually
did
I
had
a
problem.
I
went
online.
I
logged
a
bug
after
speaking
to
some
folks
and
they
responded
very
quickly
and
it
was
great,
but
unfortunately
we
all
have
priorities
in
our
work,
and
so
that
bug
is
remained
open
and
sits
there.
It
could
potentially
help
in
in
a
few
different
ways,
but
we're
all
busy
all
get
prioritized.
I
understand
so.
In
the
meantime,
we
have
to
look
for
other
solutions
or
other
ways
to
look
at
problems
and
beyond,
just
chrome.
D
I
want
to
focus
on
a
specific
example
of
this
kind
of
thing
where
the
spec
says
what
we
must
and
may
do,
and
then
what
in
reality
actually
happens.
D
So
this
is
the
idea
of
of
taking
a
a
request
message
and
serializing
it
to
the
wire
and
what
the
hp3
spec
says
is
that
you
must
have
this
full
set
of
pseudo
headers,
which
articulate
say
for
the
request,
the
method,
the
path,
the
host
and
and
so
on,
but
every
request
needs
a
method,
so
this
could
be
get
post
head,
various
verbs
that
can
be
included
in
there
that
apply
to
all
versions
of
http.
But if you don't include that method, then the request is malformed, so it must be treated as a stream error. That means resetting the stream with a specific error code, and you may serve a response before that happens, before resetting the stream. So what I did was create a malformed client that did this on purpose. This is literally, if you think of old-school HTTP, the first thing that you would see in the request line: the method.
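The rule just described can be sketched as a tiny validator. This is an illustrative sketch rather than any shipping server's code: the required pseudo-header set and the H3_MESSAGE_ERROR code (0x010e) come from RFC 9114, while the function and variable names here are invented for the example.

```python
# H3_MESSAGE_ERROR (0x010e) is the RFC 9114 error code for a malformed message.
H3_MESSAGE_ERROR = 0x010E

# Pseudo-headers every HTTP/3 request must carry (":authority" has more
# nuanced rules, so it is left out of this sketch).
REQUIRED_PSEUDO_HEADERS = {":method", ":scheme", ":path"}

def validate_request(headers):
    """Check a decoded header list; headers is a list of (name, value) pairs."""
    names = {name for name, _ in headers}
    if REQUIRED_PSEUDO_HEADERS - names:
        # Malformed request: treat it as a stream error, i.e. reset the
        # request stream with H3_MESSAGE_ERROR. A response (e.g. a 400)
        # MAY be served before the reset.
        return False, ("reset_stream", H3_MESSAGE_ERROR)
    return True, ("process", None)

# A request with no :method, like the malformed client described above:
print(validate_request([(":scheme", "https"), (":path", "/")]))
# → (False, ('reset_stream', 270))
```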
So it's the first thing that could ever go wrong. I tested that client, an HTTP/3 client, against various implementations that operate on the internet, to see what happens. To briefly summarize: four of these returned a Bad Request.
One returned 405 Method Not Allowed, which I thought was quite amusing, because there was no method. Another sent RESET_STREAM, which is kind of aligned with what the spec says, but with a different error code. And some just closed the connection outright, which is pretty terminal, given that these connections can hold or articulate many requests. But I want to stress that none of this is bad behavior; they all rejected it, and ultimately in HTTP that's generally good enough.
D
I
don't
think
anyone
is
there
as
the
protocol
police,
making
sure
that
you
know.
If,
if
you
receive
the
wrong
error
code,
you
you
take
action,
but
sometimes
these
things
can
can
result
in
systematic
errors
as
they
propagate
through.
So
it's
not
always
clear.
D
If
this
is
okay,
or
if
this
is
bad,
maybe
as
a
community,
we
need
to
to
take
a
look
and
and
see
if
there's
anything
more,
we
can
do,
and
sometimes
this
is
not
sometimes
it's
the
equilibrium,
that's
good
enough,
something
else
that
happened
different
to
that
a
while
back
netflix
did
some
work
about
hb2.
D
They
were
looking
to
implement
it
and
then
looked
at
the
protocol
and
then
compared
that
to
implementations
and
were
able
to
identify
a
number
of
attack
vectors
that
could
result
in
denial
of
service
on
hb2
implementations
and
there's
a
large
community
effort,
responsible
disclosure
to
kind
of
create
proof
of
concepts
of
these
attacks
and
and
integrate
fixes,
and
the
community
did
a
great
job
of
that.
But
you
know
if
you're
writing.
A
h2
implementation
today
is
that
information
is
accessible
to
you.
You've
come
after
the
whole
train
of
of
implementers.
Maybe you've read about these things, but you don't understand them. How can we make sure that those test cases still apply to diverse HTTP/2 implementations? There's code, and you could run it, which is great, but is it sufficient? As to why this is important, I'll just pick on one of these: one of the attacks was a ping flood. This is an HTTP/2 PING frame you can see here.
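For reference, the frame in question is tiny and trivially cheap for an attacker to generate. A minimal sketch of its wire encoding, following the HTTP/2 spec (type 0x6, exactly eight octets of opaque data, always on stream 0; the helper name is invented for this example):

```python
import struct

def ping_frame(opaque_data: bytes, ack: bool = False) -> bytes:
    # PING is the fixed 9-octet HTTP/2 frame header (24-bit length, type,
    # flags, stream id 0) followed by exactly 8 octets of opaque payload.
    assert len(opaque_data) == 8
    flags = 0x1 if ack else 0x0          # the ACK flag
    return struct.pack("!BHBBI", 0, 8, 0x6, flags, 0) + opaque_data

frame = ping_frame(b"\x00" * 8)
# 17 bytes on the wire: a flood of these is cheap to send, and a peer that
# queues a PING ACK for each one can be driven toward denial of service.
```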
D
The description is that an attacker sends continual pings that could cause build-up and a DoS. And the reason this is interesting is because Cloudflare implemented mitigations for the H2 DoS attacks that affected our specific implementation; we weren't affected by them all, but we were affected by some, and one was the ping flood. But if we put that on the stack for a moment: gRPC is a layer-7 protocol that does stuff, and it has this thing called BDP estimation, and this is a direct quote from their website.
D
That says: the idea is simple and powerful. Every time you receive some data, send out an H2 PING frame, then look at the response, do some calculation, and estimate the BDP. Which is pretty cool, you know. But what it results in is a ping on every DATA frame, which is very much like a ping flood, and that introduces an interesting interaction between some implementations. And the fun thing is that Rust has a library that implements HTTP/2 called hyper, and it has a feature called adaptive window.
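The quoted idea can be sketched roughly like this (a simplification for illustration only, not the actual gRPC or hyper code; the class and method names here are invented):

```python
import time

class BdpEstimator:
    """Rough sketch of PING-based BDP estimation over HTTP/2.

    Send a PING when data arrives, count the bytes received until the
    PING acknowledgement returns, and treat those bytes as a lower
    bound on the bandwidth-delay product. One PING per burst of data
    is exactly the pattern that can look like a ping flood to a server.
    """

    def __init__(self):
        self.ping_outstanding = False
        self.ping_sent_at = 0.0
        self.bytes_since_ping = 0
        self.bdp_estimate = 65535  # start from the default H2 window

    def on_data(self, nbytes, send_ping):
        """Called when a DATA frame arrives; send_ping emits a PING frame."""
        if not self.ping_outstanding:
            self.ping_outstanding = True
            self.ping_sent_at = time.monotonic()
            self.bytes_since_ping = 0
            send_ping()
        self.bytes_since_ping += nbytes

    def on_ping_ack(self):
        """Called when the PING ACK returns; updates the BDP estimate."""
        rtt = max(time.monotonic() - self.ping_sent_at, 1e-9)
        self.ping_outstanding = False
        bandwidth = self.bytes_since_ping / rtt  # bytes/second sample
        # Bytes delivered within one RTT bound the BDP from below.
        if self.bytes_since_ping > self.bdp_estimate:
            self.bdp_estimate = self.bytes_since_ping
        return self.bdp_estimate, bandwidth
```

The fix discussed later in the talk amounts to rate-limiting how often `on_data` actually emits a PING, rather than pinging on every data burst.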
D
That was directly influenced by the gRPC BDP behavior. So we had a situation where the Rust hyper client would speak to the Cloudflare edge and get closed; the DoS mitigation would close the connections because it would detect an attack, and that wasn't great. But, you know, we were able to detect these things and understand them by analyzing them in things like Wireshark or other tooling, and work with the community to implement the fix, and the fix here was to not be as aggressive in that pinging.
D
The outcome of that activity is that it was actually friendlier to the server; the client was friendlier to the servers it was speaking to, and the BDP estimate actually got better too. Everything got better, so that was a net win. There's a different problem that we had: we launched something called speed.cloudflare.com a while back, and with this, people test how fast, in terms of throughput, their upload or download is, and what their ping and jitter are like.
D
Shortly after we launched that, we had reports that people on very high-speed Internet access were seeing somewhat deflated upload figures compared to other such speed tests that behaved or acted differently. So we did some investigation here, and what we found is that there was a difference between HTTP/1.1 and HTTP/2 upload performance. That ruled out anything lower-layer: both using TCP, both using IP, everything else pretty similar. So the difference there was solely between 1.1 and 2, and throughout this investigation:
D
What we found is that it was due to HTTP/2 flow control, and this is quite a basic thing. It was there, hidden in plain sight: the server was just not giving enough flow-control credit to the client, causing these deflated figures. So my colleague did some work on this, looking at different buffer sizes and different window sizes, and we settled upon an auto-tuning solution, pretty similar to what TCP does already.
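A minimal sketch of that kind of receive-window auto-tuning (an illustration under assumptions, not Cloudflare's actual code): grow the advertised window whenever the sender drains it within roughly one round trip, much like TCP receive-window auto-tuning.

```python
def autotune_window(current_window, bytes_consumed, elapsed, rtt,
                    max_window=16 * 1024 * 1024):
    """Sketch of HTTP/2 receive-window auto-tuning.

    If the peer delivered a whole window's worth of data in about one
    RTT, the advertised window is the bottleneck, so double it (up to
    a cap). Returns the window size to advertise via WINDOW_UPDATE.
    All thresholds here are illustrative.
    """
    if bytes_consumed >= current_window and elapsed <= 2 * rtt:
        return min(current_window * 2, max_window)
    return current_window

# A 64 KiB window drained within one 50 ms RTT gets doubled to 128 KiB:
print(autotune_window(65536, 65536, 0.05, 0.05))  # 131072
```

With a fixed small window, an uploader on a long path can never have more than one window of data in flight per RTT, which is exactly the deflated-upload symptom described above.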
D
But at layer seven. And by deploying this, we were able to see immediate improvements, which was again a very good win. But it really makes me question why such a basic problem was there for so long without anyone detecting it, which makes me wonder, you know: what are we measuring when we do anything like this?
D
Are we measuring layer 4, layer 7, the whole thing? And what's the methodology for detecting bottlenecks or performance impediments beyond the ideal? Like I say here, layers below HTTP will affect its performance, but, you know, even if your lower layers perform well, they aren't indicative of upper-layer success. This kind of environment, and the observations we'd seen, were our motivation for making a submission to the IAB Measuring Network Quality workshop.
D
That happened last year. And we'd say, you know, on top of just the throughput stuff, latency and jitter are fundamental problems.
D
The HTTP protocols are very latency-sensitive, or responsiveness-sensitive, and that's why we're super happy that the work in IPPM on responsiveness under working conditions has been adopted now and is making good progress on testing the whole stack, including HTTP/2 implementations. These are the protocols that real people use when they use web services, and we need to be able to characterize them and understand where there are bottlenecks or possibilities for performance improvement.
D
But beyond all of that, to quote Harry Potter, life isn't fair: it isn't just streams and requests and responses. There's a whole evolution of things happening to HTTP, turning it into more of a substrate. It's always been a substrate for things, but through the definition of HTTP Datagrams and the Capsule Protocol, which has been happening in the MASQUE working group, and that work is now done, just awaiting publication,
D
we're able to provide even more extreme use cases, to use a word. So "UDP 443 could be the future of everything" is what I'm trying to indicate here. On top of reliable stream data we have QUIC DATAGRAM frames, which allow for the use of unreliable datagrams in HTTP; in combination with the Capsule Protocol, we're able to do new things like proxying UDP over HTTP, which is what MASQUE has been working on. They're now focusing more on proxying IP over HTTP, which is interesting too.
D
We also have the WebTransport working group and the Media over QUIC working group, all trying to find new and exciting ways to use these protocols, which again makes me even more interested in the kinds of edge cases or specific problems that might occur with them. And we need to empower everybody who might use these protocols to be able to understand what's going wrong, not just the developers who are working on them right now, or the experts who'll be running those implementations or edge services for the next few years.
D
This is an example that takes it to the extreme: multiple hops of proxies that tunnel QUIC over QUIC. In terms of the layer cake, that is even more complicated, and I won't go into it now, because my brain hurts, but I want to come back to layer four-and-three-quarters in a moment. This is the conclusion of the talk, you'll be happy to hear: the intersection, the Venn diagram, between layer seven and layer four would obviously include other layers, except that they don't really exist in practice.
D
So this is where the quirks are. But I have many remaining questions. I can tell you that there'll be things there that will go wrong, and we might fix some, but there'll be more. It's unknown unknowns, maybe, to use that horrible analogy. But what I want to focus on is, you know: how can we get better, as an Internet community, at finding these things? Where are they? How do we find them?
D
How can we get better at fixing them in a faster, more robust way? And how can we get better at documenting them? Do any of these things need protocol changes? I think probably not, but we don't know until we find them. Yeah, that's it. That's my call to action. We need to scale this up beyond just the IETF people in this conference venue this week. So with that I'll conclude my talk. Thank you very much for the invitation; it's been great to share my mind with people.
A
Lucas, thank you very much for giving the talk. This was just as entertaining the second time around, for me. Look, I know we're over time, but I would like to entertain a couple of questions.
A
I don't see anybody in the chat queue, and I have one or two that I could ask, but I'm looking at the room, so anybody in the room who wishes to ask? And while people are thinking about it, let me ask you this. Having seen this now for the second time, and having had a few conversations with you, I feel like there are three buckets of things happening: there's this notion of measuring issues that have cross-layer components, diagnosing problems, and signaling
A
somehow between layers. Do you have a sense, I mean, from your perspective, working on the ground: is there a priority order to these? That, I think, is the first question. The second is: do you have a sense whether we're talking about a new set of standards here that needs to be evolved, or is it something that maybe the formal methods and verification community can attack, as we might discuss in this afternoon's session?
D
Those are all excellent questions. I would say the best way I can answer is that typically, the way most HTTP servers work is that they write request logs, which are a very high-level view of the response status code that was sent when a request came in, with some information about it: maybe some headers, the user agent, things like this. On the client side, you just get the result, maybe that error code I showed in Chrome; you can enable additional debug tooling.
D
So you could open dev tools and maybe capture a HAR file, which does give a bit more information about the request headers and response headers that were sent, but they don't go into any of the framing, like this eponymous layer four-and-three-quarters. To do that, you'd need to maybe enable the Chrome netlog, which would capture stuff, but then you're getting into almost per-packet tracing, and that is voluminous.
D
It's large, it's hard to read through quickly, and when it goes wrong, often these things are multiple requests all in a single connection. So if one request fails, it could be due to something unrelated that failed in that connection, due to maybe an implementation bug, or an implementation choice where the specification says implementations must decide; and often they will decide, but they won't document why that decision was made.
D
It might be there in the code, in a comment, but, you know, we're not all experts in every language, nor do we know where to look for these things. So just capturing these things is hard. It's a hard problem to solve because of the data that's required.
D
I know Facebook, as an example, uses qlog in production and does capture a lot of these kinds of events in real time, but I think that's beyond the capabilities of a lot of deployments. So it could be that we need to think about how to do logging on demand, or sampling, or tracing of things. But again, it's tricky. I don't have that many answers here; I have ways that I know how to work myself, but those aren't necessarily applicable to everybody.
D
I don't think this is something that can happen in the specifications themselves, but maybe, you know, we do a good job with automated testing, say for interop in the QUIC working group, but that focuses very much on the transport layer. Maybe there's more room to integrate some of those tools I talked about earlier into more of an automated kind of view.
A
C
D
Thank you for the comment. I would say, please go and see the FCC proposal, I can't remember the right name here, but, you know, these requests for comments that were made; this was a public consultation. I think mom will probably correct me if I'm wrong here, but yeah.
D
A
Lucas, thank you very much. I think there's George Michaelson in the queue; I noticed at the last minute. I'm hoping he can reach out to you to ask his question, if that question stands, but I'd like to move on in the meantime. Let's see, looking for the next presenter.
E
Yeah, can you hear me, guys? Hi, good morning. My name is Shafiqul, and I'm going to present "Is it really necessary to go beyond a fairness metric for next-generation congestion control?"
E
Yeah, sorry for the delay. Traditionally, the evaluation of a congestion control mechanism encompasses an examination of fairness. This is done by calculating the throughputs of competing flows from experiments or simulations, and this data vector is fed to the equation right here, and a single value, Jain's fairness index, is calculated. Here n is the total number of flows and x_i is the throughput of the i-th connection.
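The index just described can be computed in a few lines (an illustration, not code from the talk):

```python
def jains_fairness_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Equals 1.0 when all n flows get identical throughput, and
    approaches 1/n as a single flow captures all the capacity.
    """
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq)

print(jains_fairness_index([10, 10, 10, 10]))  # 1.0, perfectly fair
print(jains_fairness_index([30, 10]))          # 0.8
```

Note that the index collapses an entire experiment into one number, which is part of the criticism the talk develops next.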
E
And this is also used to evaluate whether a new congestion control mechanism is fit for deployment on the Internet; this is done by evaluating fairness when a new mechanism competes with the prevalent congestion control mechanisms, traditionally Reno, more recently CUBIC. Fifteen years ago, Bob Briscoe introduced the term "flow rate fairness" in his CCR paper, and he argues that fairness should be defined in relation to cost per economic entity, not per flow.
E
Yep, so the question is: does such a fairness test indeed provide good reasoning about the deployment of a new congestion control mechanism? In a recent HotNets paper, Ware et al. suggested using the harm concept, that is, how harmful a new congestion control algorithm is to prevalent congestion control algorithms.
E
So they suggested that throughput alone should not be used as a performance metric; developers should also focus on various performance metrics, such as delay, loss and flow completion time.
E
So the paper suggested many ways of calculating harm. The one that addresses fitness for deployment, from the harm paper, suggests that if the harm done by a new congestion control algorithm alpha to a widely deployed congestion control algorithm beta is comparable to, or less than, the harm done when beta competes against beta, we should consider it acceptable to deploy. So, based on this:
E
In the first test, we use flow alpha, a new congestion control mechanism, competing with flow beta, which is the baseline congestion control algorithm. In the second test, two baseline flows, beta1 and beta2, compete with each other, and we map from a flow to specific measurements as M: flow to metric value.
E
Anyway, so what we did: we ran two experiments, where experiment one has this alpha flow versus the beta flow, and the fairness metric MF is calculated from M(alpha) and M(beta); and from experiment two, where we have the beta1 and beta2 flows competing, we took M(beta) from experiment 1 and M(beta1) from experiment 2 and calculated the harm metric from these.
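One common formulation of a throughput harm calculation, in the spirit of the HotNets harm paper, is sketched below. This is an illustration only; sign and normalization conventions vary, and the talk's normalized variant (where negative values mean alpha harms beta) differs from this basic form.

```python
def harm(baseline, observed, more_is_better=True):
    """Harm done to flow beta, relative to a baseline measurement.

    baseline: beta's metric when competing against another beta flow
    observed: beta's metric when competing against the new flow alpha
    For 'more is better' metrics (throughput), harm = (b - o) / b;
    for 'less is better' metrics (delay, loss), harm = (o - b) / o.
    0 means no harm; larger values mean alpha hurts beta more.
    """
    if more_is_better:
        return (baseline - observed) / baseline
    return (observed - baseline) / observed

# beta achieved 50 Mbit/s against another beta, but only 30 Mbit/s
# against alpha, so alpha inflicted 40% throughput harm:
print(harm(50.0, 30.0))  # 0.4
```

The deployment test described above then compares this value against the harm beta inflicts on itself.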
E
A negative value corresponds to alpha causing much more harm to beta, positive values represent beta harming alpha when they compete, and zero means no harm.
E
So, our measurement setup: we ran experiments in the TEACUP physical testbed, where we varied the link capacity over 10, 25, 50, 75 and 100 megabits per second; the round-trip time was varied over 10, 20, 50 and 100 milliseconds; and the queue size was set to half the BDP and the full BDP for each bandwidth and delay case.
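For reference, the queue sizing above follows directly from the bandwidth-delay product; a small sketch (illustrative figures):

```python
def bdp_bytes(capacity_mbps, rtt_ms):
    """Bandwidth-delay product in bytes for a given link."""
    return int(capacity_mbps * 1e6 / 8 * rtt_ms / 1e3)

# The largest case in the testbed: 100 Mbit/s at 100 ms RTT.
bdp = bdp_bytes(100, 100)
print(bdp)       # 1250000 bytes, the full-BDP queue
print(bdp // 2)  # 625000 bytes, the half-BDP queue
```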
E
We chose four different congestion control mechanisms based on their level of aggression and the congestion signal they use. The first one is Reno, which is very traditional and a loss-based mechanism; then we chose CUBIC, which is more aggressive than Reno and also a loss-based mechanism; then BBR, which is more aggressive than Reno and CUBIC; and Vegas, which is the least aggressive here and primarily a delay-based mechanism.
E
The results we show in three categories: we investigate the harm metric for CUBIC-versus-Reno and CUBIC-versus-BBR flows; we investigate how the harm metric behaves when we run a loss-based flow against a delay-based flow; and we share experience on whether the deployability of new congestion control mechanisms could be judged by a harm-based approach.
E
The heat maps here show the whole parameter space. We found the interesting cases when the RTT is above 50 milliseconds and capacity is greater than 75 megabits per second, where CUBIC doesn't fall back to linear, TCP-like growth; we call this from now on the high-BDP scenario. In the low-BDP scenario, we found no difference in the fairness and harm, because CUBIC behaved similarly to Reno.
E
A
E
Yeah, so this shows the harm metric for different alpha-versus-beta pairs, and then we look at the relative harm and fairness distribution. We normalize the MF and MH (fairness and harm) values for the varied alpha-CUBIC and beta-Reno pairs, and here it can be seen that with the MF throughput fairness metric it's not obvious to see the differences; but if you look at the MH throughput metric, the impact of CUBIC on Reno is quite visible, whereas with the throughput fairness metric it wasn't.
E
And then we ran similar tests with the BBR-versus-CUBIC pair. The impact of RTT is still there, but between MF and MH there is a pronounced gap, and this is because fairness quantifies the measurement as equal capacity sharing. In the case of Reno competing with Reno,
E
it provides kind of better fairness; when CUBIC competes with CUBIC, it's not, because CUBIC is a complex and modal algorithm, so that's why there is a gap here. And then we ran similar tests and looked at the normalized MF and MH values for Vegas as alpha competing against a Reno beta flow, across the entire parameter space, and the MF values skew slightly to the left.
E
Here we take M(beta) from experiment one and M(beta1) from experiment two to calculate the harm metric. And then we look at the absolute fairness and throughput-harm comparison between alpha CUBIC versus beta Reno and alpha CUBIC versus beta BBR in the high-BDP scenarios. We show the raw harm values here, and it can be seen that BBR captures more resources against CUBIC than CUBIC captures against Reno in all cases; more specifically, it shows that BBR captures 1.6 times more resources in 50 percent of the cases.
E
So, to conclude, and this is an old slide: we applied the harm concept to data produced from experiments with competing pairs of various TCP variants. We looked at four different TCP variants based on their level of aggression as well as different feedback types. We presented a linear representation of harm to better assess differences between the metrics. We also illustrated the efficacy of the harm-based approach using the experiment results that we showed. In future, we plan to investigate the efficacy of harm using other performance metrics, such as loss.
A
I'm going to ask you just a quick question here: I think harm is gaining some traction, which is lovely to see.
A
In addition, so the analytical tools are developing, and, you know, at scale, small things tend to show up that don't show up at small scale or on paper. It's more of an open, unfair question, I realize.
E
That's a very good question to think about. I mean, as I mentioned, this requires many experiments and data to test it, and harm itself is kind of complex, because it's not easy to calculate, and there are many, many approaches presented in the paper to do it. So we chose the one that we found fitting for figuring out whether an algorithm can be safely deployed or not. So if you want to do it in real-life experiments, we need more experiments and more data.
E
A
Thank you very much, Shafiqul, very good. Next up, Jan Yvang.
A
B
E
B
I know, but this is because, you know, with Reno, if the round-trip time changes drastically, you know, two flows with different round-trip times, the bandwidth share is completely different. But on the other hand, in the case of CUBIC, the congestion window growth doesn't depend on the round-trip time very much. So that means it's really difficult to make a comparison between them. Can you think about comparing CUBIC and Reno, you know, for fairness? That's from my point of view.
A
I'm going to interrupt here and say: that's a fabulous question, it's worth thinking about, Shafiqul, and I would encourage you to interact offline. Okay.
A
F
The network spans four continents with 11 cities: four in Europe, three in North America, three in Asia, one in Australia, and these cities are connected via leased layer-2 services from three different providers for redundancy; it's Telia, Cogent and Equinix providing these layer-2 networks.
F
Yeah, so our research shows, or before we started researching, we knew, that network problems were quite frequent. We analyzed two years of data and found more than seven hundred thousand packet-loss incidents. For this number: every time we lost a single packet during a one-minute measurement, we counted it as a packet-loss incident. So even this large number of incidents is actually well within the service-level agreement for these services.
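The incident definition used here, any one-minute measurement interval with at least one lost packet, can be expressed as a small sketch (illustrative only, not the operators' tooling):

```python
def count_loss_incidents(per_minute_losses):
    """Count one-minute intervals with at least one lost packet,
    matching the incident definition described above.

    per_minute_losses: iterable of lost-packet counts, one entry
    per one-minute measurement interval.
    """
    return sum(1 for lost in per_minute_losses if lost > 0)

# Six minutes of measurements; two intervals saw any loss at all:
print(count_loss_incidents([0, 0, 3, 0, 1, 0]))  # 2
```

Counting intervals rather than packets is what lets a link stay well within its SLA while still producing hundreds of thousands of incidents over two years.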
F
But of course, not all of these turn into support cases, mostly because of routing protocols that protected against outages. But altogether, during that time period, the operators saw 2855 customer support cases that were related to network quality, and we found that out of these, only around 35 percent actually received a proper response from customer service regarding what the actual problem was. And this is far too low a number, so that's why this project was initiated.
F
BFD is usually implemented on the interface cards, so it doesn't stress the CPU in the router, and in this case it was actually already implemented and in use for the higher-level protocols. It actually provides a very simple measurement: it sends packets, and it triggers an event if packets are not received, and this improves the failover times for higher protocols like IS-IS.
F
Afterwards, we also set up layer-2 and layer-3 packet-loss measurements using UDP ping. This required quite a lot of virtual machines to set up, and quite high CPU usage, compared to the BFD protocol, where data was very easy to generate, easy to process, and easy to store in a centralized location.
F
First, we created a tool to visualize all the data. This data was very useful both for the network operations center and for customer support, but it required a lot of training and a lot of manual pattern recognition to recognize what data is actually wrong, what happened, what was the cause of these customer complaints; and this is obviously a case for machine learning pattern recognition. So then we initiated a machine learning program to try to use this data to automatically classify outages.
F
The first step in this process was to do an in-depth analysis of all the customer cases where customers complained about outages. We identified the root cause of all the complaints, and identified what sort of data was most useful for finding the root cause.
F
From this list, you can see that outages in multiple layer-2 providers at the same time top the list, and the reason for that is that an outage in just a single provider usually didn't cause any customer complaints, because the routing protocols are quite good at rerouting in those cases. We can also see that most of these outages were caused by layer-2 issues.
F
The data points in this case are, of course, multi-dimensional, with lots of data inputs, and the process is run multiple times to create multiple classes.
F
So what we found was that a two-stage machine learning process was the optimal solution for this case. In the first stage, we identified the most common outages using BFD data only. BFD data is very easy to collect, as mentioned, and it has very little impact on any equipment or any network, and for many real-life implementations,
F
this might actually be good enough. Even for smaller operators with few resources, collecting BFD trap data and then analyzing it would be a good idea. But of course, BFD being a layer-2 protocol, it could not detect all types of outages or root causes.
F
So then we added the UDP ping data, both the layer-2 and layer-3 UDP ping data, and we managed to do quite a good classification of all the other causes as well. With this data, it's much harder to collect: it requires a lot of virtual machines, and it requires a lot of CPU processing to actually collect the data, while the entire machine learning process to classify a case is fast; it took less than one second to classify a case.
F
So there were no errors detected anywhere, and then we came up with the results on the right. They're not quite as good, and the reason for this is that some of these root causes have very similar symptoms: it's very hard, just by looking at the symptoms, to see the difference between equipment maintenance and equipment failure, for instance.
F
A
So I will note very quickly, so that we can move on: machine learning in a network capacity is still proving itself; there's a bit of a proving ground here.
A
F
Yeah, well, yeah. That question is:
F
the nice thing about our setup here is that it has very little impact on the actual network. There are no automatic decisions made based on machine learning; there are very few configuration changes to the equipment. All these protocols, all the events created by the protocols, were already in use: they were used by the IS-IS protocol and the other protocols to make decisions about fast rerouting.
F
Yeah, and also, the main output of this was information for the network operations center and the customer support centers, to provide manual
C
F
help in troubleshooting things, and to inform customers: when somebody calls in, you can tell them immediately, "Yes, we see the problem, we know more or less what it is." That's a much, much better customer experience than if you say, "Yeah, we don't know what happened; we'll let you know once we know something." That's not a very good answer.
G
So the first thing we want to say is that geosynchronous satellite communication networks are becoming interesting for providing satellite broadband connectivity in many 5G and 6G use cases that use satellites for backhauling purposes, or hybrid schemes, etc. And in parallel, we're witnessing the transport layer going through some breakthroughs: we're seeing QUIC being standardized, deployed and developed further, and we're also seeing modern congestion control being worked on, mainly with BBR congestion control.
G
So within this ecosystem, we're seeing more and more QUIC traffic on the web, and we expect more QUIC traffic going through satellite networks. So this is where we stand.
G
So why is this important? There are some challenges that satellite networks introduce for the transport layer: we have mainly long propagation delay, we have propagation errors as well, and we also likely have bandwidth asymmetry on the link. And, well, TCP traffic is usually optimized for satellite links using the performance-enhancing proxies that you might know, called PEPs.
G
However, unless some network-assisted solution is introduced, such as MASQUE, QUIC cannot use PEPs, because QUIC headers are encrypted.
G
So the problem here is that many studies have looked into TCP with PEPs, and it has been shown that TCP with PEPs greatly outperforms QUIC right now, even with QUIC's fast handshake; QUIC really falls behind. So there's been rising interest in looking into improving QUIC for this satcom use case.
G
So in this context, we're going to look into the general performance of BBR over satcom, with QUIC, of course, and we're also going to look into aspects such as fairness, and into the items that I mentioned, like packet loss and bandwidth asymmetry, and also how the choice of QUIC implementation makes a difference here.
G
So now, for some background: these are the three main properties that I mentioned about satellite links. We have a long round-trip time, which implies longer protocol feedback, which affects the transport layer in many different aspects; and it also implies a higher bandwidth-delay product, which in turn implies larger buffers.
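As a rough illustration of how large that bandwidth-delay product gets (assumed figures; a geostationary round trip is commonly on the order of 600 ms):

```python
def geo_bdp_bytes(capacity_mbps=50, rtt_ms=600):
    """BDP for a GEO satellite link, in bytes.

    The defaults (50 Mbit/s, 600 ms RTT) are illustrative assumptions,
    not measurements from the talk; GEO RTTs of roughly 500-700 ms are
    typical because of the ~36,000 km orbital altitude.
    """
    return int(capacity_mbps * 1e6 / 8 * rtt_ms / 1e3)

print(geo_bdp_bytes())  # 3750000 bytes must be in flight to fill the link
```

Compared to a 50 ms terrestrial path at the same rate (312,500 bytes), the sender needs more than ten times as much data in flight, hence the larger buffers and the slow protocol feedback discussed here.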
G
Then we also have propagation errors and, finally, the bandwidth asymmetry that I mentioned. This bandwidth asymmetry can be especially problematic if the uplink gets congested and ACKs cannot reach the sender smoothly; this can really limit the forward throughput performance.
G
So, to quickly talk about BBR: what BBR does is measure the bottleneck bandwidth and round-trip time on the link, in order to find an optimal sending rate that tries to maximize the use of the link while keeping the path RTT as low as possible. And, well, BBR has been shown to do really well in many use cases when compared to CUBIC, especially when there are losses on the path, etc. However, there are some issues that have been found with BBR that relate mainly to fairness.
G
So after this, there was an update to BBR, BBRv2, that tried to fix these issues: it tried to make BBR a little less aggressive while keeping performance high, and this was done by making BBR react to packet loss and explicit congestion notification, among other improvements.
G
So now we're going to show you the experimental setup that we built. Yeah, so this here is what we built: we built a testbed based on TEACUP.
G
We have two pairs of hosts to generate QUIC traffic, and we have a router that performs the link emulation, and the experiments are controlled using what we call the TEACUP controller. The TEACUP platform, developed by CAIA, allows us to design experiments and automate tests really easily, and we took it and extended it to be able to work with QUIC and to extract statistics as well as qlog trace data.
G
We used two QUIC implementations. These are ngtcp2, which we chose because it is the one that allows us to experiment with BBRv2 in its latest version, and picoquic, which has been shown in some studies to perform quite well over satcom, so this is why we also wanted to look into it. So yeah, that's it.
G
So now, to jump to our results: we set up four different experimental scenarios. The first of these is the single-flow bulk download.
G
Here we start the download of a very large file from the client, we let the download run for two minutes, and then we measure goodput on the link. And yeah, this is what we saw. So first of all, here we show you our goodput results in the satellite and terrestrial scenarios, for different buffer sizes, with the two implementations that I talked about. So, clearly:
G
If we look into congestion control differences, we also see how, for ngtcp2, CUBIC gains some advantage at higher buffer sizes, while the BBR versions remain stable across buffer sizes; and we can see how BBRv2 gives us slightly lower goodput values, which can be expected due to its less aggressive nature.
G
We first saw how ngtcp2 with CUBIC was losing a lot of performance, especially in the long-RTT satellite scenario; and also, when we increased the packet loss even further, to one percent, we saw BBRv2 actually falling behind BBRv1 a lot, and we only see this in the satellite scenario. So we believe that this mixture of long RTT and high packet loss is problematic for BBRv2 here.
G
Even when there's congestion on the uplink. And so we looked into this, and we saw that picoquic is actually using a different ACK policy, where it tries to send fewer ACKs, acknowledgement packets, and it tries to optimize this for the satellite link. So this is why we stress the relevance of these strategies.
G
So this is what we saw: we saw CUBIC maintaining really good fairness, even with 64 parallel flows. However, our results showed that BBRv2 and BBRv1 flows competing against themselves drop a lot in fairness.
G
When we increase the number of flows, especially with BBRv1. We do see that BBRv2 does a bit better, but the fairness values here are still quite poor compared to CUBIC. Then we also compared congestion control algorithms against each other, with parallel flows using different congestion control algorithms, and here what we mainly want to highlight is that BBRv2 with CUBIC behaves quite fairly, which shows that even with the long RTT, BBRv2 has managed to improve the fairness issues that BBRv1 had towards CUBIC.
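The fairness numbers discussed here are conventionally computed with Jain's fairness index, which is 1.0 when all flows get an equal share and approaches 1/n when one flow dominates. A small sketch (the throughput vectors are made-up examples, not measured values):

```python
# Jain's fairness index over a set of per-flow throughputs:
# (sum x_i)^2 / (n * sum x_i^2). Equal shares give 1.0; a single
# dominant flow among n drives the index toward 1/n.
def jain_index(throughputs: list) -> float:
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(jain_index([10, 10, 10, 10]))  # equal shares -> 1.0
print(jain_index([37, 1, 1, 1]))     # one dominant flow -> well below 1.0
```

This is the kind of metric under which CUBIC's self-fairness looks good at 64 flows while BBRv1 flows competing against themselves score much lower.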
G
So we set up a four-flow scenario where the flows start at different times and all run for three minutes, and this is what we saw. Here we show you the goodput over time, over five minutes, of four different flows that start at 0, 40, 80 and 120 seconds, and we also show you the RTT measured there in the graph. As you can see, the CUBIC latecomers are quite slow, and because of the satellite RTT they take really long to converge to a fair share.
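A back-of-the-envelope sketch of why CUBIC latecomers converge so slowly on a long-RTT path, using the RFC 8312 window function and its default constants. The BDP of 2500 segments and the 600 ms RTT are assumed example values, not the paper's measured figures:

```python
# Two effects slow a CUBIC latecomer on a ~600 ms path:
#  - slow start costs one RTT per window doubling, and
#  - after a loss, RFC 8312's W_cubic(t) = C*(t-K)^3 + W_max takes
#    K = cbrt(W_max * (1 - beta) / C) seconds to climb back to W_max.
# Constants follow RFC 8312 defaults; window sizes are illustrative.
import math

C = 0.4      # cubic scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative decrease factor (RFC 8312 default)

def time_to_recover(w_max_segments: float) -> float:
    """Seconds until W_cubic(t) returns to W_max after a loss event."""
    return (w_max_segments * (1 - BETA) / C) ** (1 / 3)

def slow_start_rtts(target_segments: float, init: float = 10) -> int:
    """RTT rounds to reach the target window, doubling each round."""
    return math.ceil(math.log2(target_segments / init))

rtts = slow_start_rtts(2500)  # assumed satellite BDP of ~2500 segments
print(f"{time_to_recover(2500):.1f} s to refill after a loss, "
      f"{rtts} RTTs (~{rtts * 0.6:.1f} s) of slow start")
```

Every one of those rounds costs a full satellite RTT, so a flow joining at 120 seconds can still be far from its fair share when the experiment ends, which matches the CUBIC plots described here.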
G
So when we use BBR here, we see how the latecomers join the link faster and get a fair share faster than with CUBIC. And, as you can see here, it's also pretty notable that with BBRv2 these latecomers are less aggressive, while BBRv1 latecomers actually overtake the previous flows and make the previous flows drop in performance quite a lot.
G
However, we did see an issue here with BBRv2, where, after the rest of the flows finish, BBRv2 fails to recover the available bandwidth. We have seen that this is a consequence of the long RTT, because we repeated these experiments with a lower RTT and it doesn't happen. So we believe this might be either some problem with BBRv2 for long RTTs, or perhaps some implementation issue with ngtcp2 and BBRv2.
G
So, yeah, I'm going to jump to the discussion. There was one last scenario to look into; if you're interested in it, you can read the paper or we can talk about it later, but I want to be a bit quicker here. So, to give a general idea of what we can conclude from this study:
G
We
have
seen
that
bbr
can
provide
better
performance
under
los
links
and,
while
bbr1
is
the
one
that's
giving
us,
the
best
performance
bbr2
is
doing
a
great
job
at
improving
fairness
towards
kubic,
for
better
coexistence
on
the
internet
and
better
behavior.
However,
there
are
some
problems
that
bbr2
has
to
face
for
these
satellite
networks
and
yeah.
We
believe
this
needs
to
be
further
looked
into.
G
We also really saw how this problem with bandwidth asymmetry and ACK frames on the return path can be an issue, because we were seeing ngtcp2 having problems dealing with it. This is why we stress the need to look into ACK policies, satellite-optimized ACK policies: strategies that send fewer ACKs and try to avoid these issues.
G
So, to summarize: even though BBR seems like a good candidate for some reasons, there is still some work to do here, and, as I said, bandwidth asymmetry is proving to be a great challenge for satellite links. We were also surprised to see picoquic doing so well with its satellite optimizations, and we believe that looking into the strategies it uses might be a clue for QUIC over satcom in the future.
A
I think what I would like to do is just move on, but I encourage you, please, to check the chat channel, because I think you've had some good engagement and a couple of thoughtful questions. There are certainly a couple of things in here that are counter-intuitive, and I think the observation about ACK patterns is particularly novel.
A
So if people in the QUIC working group are not in the room, you might want to reach out to them; Lucas, for example, who just gave the keynote, might have something to offer there. Okay. Okay! Thank you. Thank you very much, this was great.
A
Moving on to our last short presentation. Let's see.
H
Yes, yeah, hi everyone, thanks for joining this presentation. I'm Lucie Nervous, a PhD student at the University of Alberta, and for this project I worked with my supervisor, Dr. Paul Lu. The project is about prioritized forward error correction for HTTP.
H
The thing is that when the HTTP client asks for a web page, there is a list of web resources that need to be downloaded, but not all resources are the same. For example, we have images that can be rendered incrementally at the client side, so we can use the multi-streaming, or multiplexing, of QUIC to avoid the head-of-line blocking problem at the application layer in the case of packet loss. But there are also other types of web resources, like HTML, CSS and JavaScript, that need to be downloaded completely before getting used.
H
So we call them high-priority resources, and in this project we have the idea of using forward error correction for these high-priority resources over HTTP/3, because in the case of packet loss the client needs to wait for the retransmission of the lost parts, as a function of the RTT. But by using forward error correction we can recover sooner and we can reduce page load time.
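The recovery idea can be illustrated with a toy one-dimensional XOR code. The scheme described in this talk is two-dimensional XOR over QUIC frames; this sketch only shows the core mechanism, how one parity block lets the receiver rebuild one lost block without waiting an RTT for a retransmission:

```python
# Toy 1-D XOR FEC: one parity block per group of equal-length source
# blocks. The receiver can rebuild a single missing block by XOR-ing
# the parity with the blocks that did arrive. Illustration only; the
# talk's actual scheme is 2-D XOR over QUIC frames.
def make_parity(blocks):
    """XOR all blocks together into one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Fill in at most one missing block (marked None) via the parity."""
    missing = [i for i, b in enumerate(received) if b is None]
    assert len(missing) <= 1, "1-D XOR repairs only one loss per group"
    if missing:
        received[missing[0]] = make_parity(
            [b for b in received if b is not None] + [parity])
    return received

src = [b"html", b"css!", b"js__"]
parity = make_parity(src)
print(recover([b"html", None, b"js__"], parity))  # the lost b"css!" is rebuilt
```

Recovery here is pure local computation at the receiver, which is why the repair can complete in milliseconds instead of the ~100 ms retransmission round trip quoted later in the talk.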
H
For the paper that we submitted, we had the numbers over a UDP-based reliable transfer, the UDT protocol, and we used two-dimensional XOR-based forward error correction. But after we got positive reviews from the workshop, we were encouraged to implement this idea over QUIC, and we have chosen ngtcp2 as one of the open-source implementations out there, because it includes the recent extensions to QUIC, like the Extensible Prioritization Scheme, that we need for this project. And to further simplify the task:
H
We pre-computed the repair data and wrote it to a file at the server side. So in the scenario where we want to use forward error correction for high-priority resources, the client needs to ask for that repair data as a web resource as well, to be able to use it for recovering lost QUIC frames. And for the forward error correction part:
H
Here are some early results, with the RTT set to 100 milliseconds and 10 percent packet loss, and these two diagrams show the arrival order of QUIC frames at the HTTP client, without forward error correction and with forward error correction. As the workload, we had a very simple page with only one high-priority resource, the green one. We then computed the repair data for this high-priority resource and got a 1.8-kilobyte file, the yellow one, and we also had another web resource:
H
The low-priority incremental resource, the pink one, with 26.5 kilobytes. The upper diagram shows what happens at the client side when we do not use forward error correction for the high-priority resource.
H
We have the QUIC frames for the high-priority resource, the green ones, and because we have packet loss in the network, one of those QUIC frames got lost. So we need to wait 103 milliseconds to receive the retransmission of that lost QUIC frame, and we could finish the whole job at 320 milliseconds. But with forward error correction:
H
The client needs to ask for the high-priority resource, the repair data, and also the pink resource, and the thing is that we could recover the two lost frames of the green high-priority resource at seven milliseconds, using two redundant-data QUIC frames. Whereas if we had waited for the retransmission, we would have needed to wait 109 milliseconds; and we could finish the whole job at 312.
H
And the improvement is 7 milliseconds versus 103 milliseconds without forward error correction. As a summary, we said that we can use resource prioritization at the server side to apply forward error correction only to high-priority resources, and this way we can reduce page load time. As future work, we are trying to improve our implementation over QUIC, and we need to study the congestion-control impacts of using forward error correction.
A
H
We chose this high loss percentage to see the impact on this small-size high-priority resource, because we had only seven QUIC frames here and we wanted to make sure that at least one of them got lost. But of course, if we have a large file, we can reduce it to, for example, one percent, and you could see the impact. That's okay, yeah.
A
Great, thank you very much. I'm only saying so because, it's, I don't know that anyone has seen loss rates really that high, but I see the reasoning. Thank you very much for the presentation. Thank you. Let me check the chat channel quickly.
A
I think we are in the clear. Thanks, Nasheen, and thanks everybody for attending. Please.