From YouTube: IETF113-QUIC-20220322-0900
Description
QUIC meeting session at IETF113
2022/03/22 0900
https://datatracker.ietf.org/meeting/113/proceedings/
C
Okay, good morning, everybody. This is the first in-person QUIC meeting I've chaired, so bear with me; I have no excuse for first-day technology woes. Apparently we have a bit of an AV issue with the screen behind me, which would be running Meetecho, you know, a visualization of what's happening, but don't worry, we'll live on. In case anyone in the room hasn't yet done it:
C
There was a QR code to scan on the way in that covers the blue sheets and allows you to participate, so we'll be running the queue via the website slash application that that QR code loads. So if you would like to get onto the microphone while you're in the room, please use that tool. Similarly, remote participants will be familiar with using the Meetecho tool to participate, in terms of sharing your microphone, entering the queue, and so on and so forth.
C
So without further ado, let's get on with some administrivia and some chair slides I'm sharing. How do I forward the slide? There we go: the Note Well. So, if you're not familiar with the Note Well, please note well the Note Well. We are on our second day, so I hope some of you will have seen this yesterday.
C
David, thank you very much. We've covered the drill; these slides are taken from the last time around, so some of the outcomes might change, but it doesn't look that way to me. So again, I would hope some of you are familiar, but if not, you can always use your intuition.
C
Oops, wrong link to the agenda there. There we go, the obvious error: we are at IETF 113. Please ignore that link, that's my fault, but the rough order of events is listed here anyway.
C
We're going to do some chair updates, trying to keep it brief. You know, as we've completed QUIC version 1, there are people maybe not following the working group so closely, so we just want to give you an idea of what's happened since last time. Then we'll get into the adopted working group items: we'll cover version negotiation, QUIC load balancing, QUIC v2, then on to multipath and qlog, and then we have one additional item to talk about, 0-RTT BDP.
C
The HTTP/3 and QPACK drafts entered AUTH48 after about a year of purgatory, so we're making some progress there. The datagram draft entered AUTH48 yesterday, just as we were on the train to somewhere. So we'll be following up with an update to the list about how we're going to manage AUTH48 comments around this time, but it should be nothing out of the ordinary in terms of our working group's GitHub flow of issues and resolutions.
C
So look out for that. On the ops drafts: the applicability and manageability drafts completed IETF Last Call, and we're at zero GitHub issues right now. So thank you to all the reviewers from the various areas, and to the editors for responding to those issues; I think we're in a good place. We will be working with our AD to progress onto the next stage. And the GREASE bit document is due a shepherd write-up from me, so that's on me, and that'll be coming soon. So just look out for that one.
C
Related work: we had MASQUE yesterday, which was, you know, good. We have WebTransport on Thursday, which I hope will be good, and there is a Media over QUIC BoF session on Wednesday morning, so tomorrow at this time, that you might want to look out for. I won't go into any more details there.
C
So I have two slides on general working group business that I would like to talk about. The first one should be fairly straightforward, but for those who maybe don't know: the Datatracker holds both our charter and our list of milestones. The Datatracker allows, generally speaking, milestones to have dates or no dates. You can't mix them, but you can go inside and toggle between the two. Currently, the QUIC working group uses milestones with dates, and all of the milestones are along the lines of "submit draft so-and-so to the IESG".
C
If anyone looks at those, they're all past due. We're also missing some milestones for the documents we've adopted since that happened; we can blame COVID for that. But I think what's kind of fun, from my perspective, is that no one seems to have complained about this, and we're making good progress. Spencer's in the queue, okay.
C
This slide, okay: as far as we're concerned, we're keeping active and everything's going along okay. None of this is unique to the QUIC working group, as far as I can tell. I wanted to do some data-driven analysis, so I scraped the Datatracker API and gathered some information. There wasn't a thread on the working group chairs mailing list, as far as I could tell, at the point in time I did this.
C
Oh, sorry: there were 500 milestones, and 400 of those were late, so that's about 80 percent, and the median value was two and a half years. So my interpretation is they're pretty pointless. Having goals in general is good, and we should strive to keep on top of the work that's happening here and keep it moving along, and we do see that in the QUIC working group.
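As an aside for readers checking the arithmetic: the kind of summary being quoted (roughly 80 percent of dated milestones late, median lateness about two and a half years) is easy to recompute from milestone due dates pulled from the Datatracker. A minimal sketch with made-up sample numbers, not the actual Datatracker data:

```python
from statistics import median

def milestone_stats(delays_in_days):
    """Summarize milestone lateness.

    delays_in_days: days past the due date for each milestone
    (zero or negative means it was met on time).
    """
    late = [d for d in delays_in_days if d > 0]
    pct_late = 100.0 * len(late) / len(delays_in_days)
    median_late_years = median(late) / 365.0
    return pct_late, median_late_years

# Toy data standing in for scraped Datatracker milestone records.
sample = [-30, 0, 400, 900, 912]
pct, med_years = milestone_stats(sample)
print(f"{pct:.0f}% late, median lateness {med_years:.1f} years")
```

The real analysis would populate the list from the Datatracker API's milestone records rather than a hard-coded sample.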
C
The proposal would be to switch away from these dated milestones, which add, you know, some work for us to decide what the date should be, and then what it should be when we don't hit it for various reasons. The order of presentation of that list would instead be the anticipated sequence of our documents.
D
Yeah, thank you. Spencer Dawkins. I applaud the discussions that are happening everywhere in the IETF about making sure we don't lie to ourselves.
D
The people in the IETF will adjust, but I would suggest you all be very clear in what you're doing and why. And I would ask the working group to consider, and their chairs to consider, dropping a liaison message to the people from other SDOs who've already interacted with the QUIC working group, explaining to them what is happening and why, that we all think it's a good thing, and that this has been the reality for the last...
D
However long the QUIC working group has been in existence, anyway, and now they can tell what we're actually doing. But it seems to me that, without that, having the dates suddenly disappear completely would send the message that we have no idea how long this is going to take, which may be true, and that you might as well go ahead and try to do the stuff that's in scope for this working group yourself, in another SDO. And that would be...
D
That would be a train wreck. So, like I say, I know you all will do the right thing, whatever that is, but that would be my input into your consideration to do the right thing. Thank you.
E
David Schinazi, QUIC enthusiast. Sure, this is between the chairs and the AD. Please just decide; let's move on, we trust you.
C
Okay, thank you. Okay, and I'll just try to hammer this through. For anyone that may have been working with the QUIC drafts while we were still in development, before RFC 9000, or before we got the instructions to the IANA team to construct some tables: we had something called the temporary IANA table, which is a wiki page on the base drafts repo.
C
This was intended to capture use of QUIC and HTTP/3 extension points before those tables were created, so that we could just coordinate amongst the community and avoid nasty collisions. It covered stuff like QUIC versions, transport parameters, settings, frame types, error codes and so on. It's there, and we thank the people who have taken the effort to go and register their values there.
C
So we just wanted to give people a heads-up of what's going to be happening next: effectively archiving that temporary table, locking it, and then assisting the various responsible parties in following due process to get everything into the full official tables. So there's no action immediately required from anyone, but just keep an eye out on the list, or for any direct contact that might be coming from the chairs, and please assist us in doing the right thing.
E
Thank you, Lucas. My name is David Schinazi, I'm a QUIC enthusiast, and let's talk about QUIC version negotiation. Next slide, please. So, I've had this slide from, I forget how many years now, and I just keep adding lines, and it's starting to not fit. But conceptually we've been doing VN for a while: it used to be in Google QUIC, it got added to IETF QUIC, we split it out into its own draft.
E
We redesigned it as a working group, and we've kind of landed on something that everyone likes, or rather that no one dislikes, and over the course of the last almost-year we've done quite a bit of editorial work, where the document was quite lacking. But we've gotten it into what I think is decent shape between the editors. Next slide, please.
E
Skip that one, thank you. We had one question which is in a way editorial, but kind of fundamentally changes quite a bit how we see things, like a lot of the text. Ekr and I didn't quite agree on it, so we thought: that's great, we can bikeshed this with the working group. It's on the definition of the term "compatible version negotiation" from the draft.
E
The way I personally had been thinking about it, when you use the same version that the client offered, that would just be using that version, and compatible version negotiation would be when you use that feature to upgrade from one version to another. Ekr is coming at it more from, like, the TLS side of things, where (thanks, Martin... wow, lost my train of thought, it's early here) when you actually stick to that version, that's also compatible version negotiation.
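The distinction being debated can be made concrete with a toy server-side selector; the version numbers and the compatibility table below are illustrative stand-ins, not values from the draft. Under Ekr's framing, staying on the client's offered version is simply the self-compatible case:

```python
# Hypothetical compatibility map. 0x00000001 is QUIC v1; the other
# codepoint is a placeholder standing in for QUIC v2.
COMPATIBLE_WITH = {
    0x00000001: {0x00000001, 0x6b3343cf},
    0x6b3343cf: {0x6b3343cf, 0x00000001},
}

def select_version(offered, server_preference):
    """Pick the first server-preferred version compatible with the
    client's offered version. Staying on `offered` itself still counts
    as compatible version negotiation: a version is always compatible
    with itself."""
    for v in server_preference:
        if v in COMPATIBLE_WITH.get(offered, {offered}):
            return v
    return offered  # no compatible upgrade available; keep the offer
```

For example, a server preferring the v2 placeholder would upgrade a v1 offer, while a v1-only server would simply stay on v1, and both outcomes fall under the same "compatible" umbrella.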
E
And Ekr, if you want to add something, hop on; but otherwise, yeah, I'm opening it up to the floor. Does anyone care?
F
The other thing was: a version is always compatible with itself. That's the other way to think about it.
E
All right, proposal: we actually do what Ekr wants. If you don't like that, please come to the mic now. And I'm the Jabber scribe, so can someone just double-check that there's not too much happening in the Jabber?
E
So that was the last issue. We're going to do some editorial work on this to clean it up; Ekr and I are meeting next week to do that, but we're pretty much done. We have some reasonable amount of implementation experience. We don't have too much deployment with compatible VN right now, but we think this is ready. And kind of the question on where we proceed is:
E
Do we want to tie the progression and timeline of this draft to what we're doing in QUIC v2, which is kind of the first way we have to really exercise compatible VN? Or do we just want to move this forward, you know, do working group last call and then park it? What do people think, especially the chairs?
G
Oh, okay, sorry. Martin Duke, Google, and v2 author. I think v2 is waiting for you, so it will not be a blocking thing; as well, I'll talk about that in, like, the next presentation.
E
Then, barring that: we're going to do a bit of editorial work in the coming week, and after that I think we probably want to request working group last call. Then, okay.
E
Oh, sorry, there's MT back in the queue.
F
Yeah, so, having implemented both of these, I think that technically they are good. I think that the version negotiation draft is a bit rough editorially, and so I'll have to get a little bit of time to see what you and Ekr manage to put together next week. I think there are some ordering things and some terminology stuff that's a little bit shaky, but it's technically sound. I've implemented it, I've implemented v2, they both interop with, I think, three or four other implementations, and I think we're deploying it. So, whoops.
E
Cool, a quick question, MT: you had a few very good editorial issues that you filed on the draft. We resolved these, like, right before the draft deadline and submitted a new one that hopefully addresses those. Are you referring to the one before? Or do you think it's still a bit rough with the latest one?
F
We read it last week. I'll have...
Let me take another look, but if the editorial state is as it was when I last saw it, I'd like to see that work done before working group last call; there's a good chance that somebody could get messed up in that process.
E
Sounds good. When you take a look at that, please file an issue with specifics, if you have them. Thanks. Yeah, cool. Ekr's next in the queue.
G
Next slide. So, the closely related v2 draft. Just for any of you who are not paying attention, this has a few purposes. It is not adding any features to QUIC. It is not fixing anything about QUIC version 1.
G
It is an effort to grease the version field, to have a target for this VN thing that David and Ekr have been working on, and also, while we have the QUIC brain trust together, to kind of develop a template for what a new version should look like and give people an example to follow when they do QUIC versions that actually add value. Next slide.
G
Oh, actually, before I get to this: there isn't much in the way of open issues. There's mainly just an editorial thing about what goes in the VN draft versus what goes in the v2 draft, so David and I are playing hot potato on a couple of issues. But I think these can go together, probably, to working group last call pretty soon. We have pretty good interop with v2. There's one issue that might be contentious; we had a little...
G
We had a few people who felt strongly discuss it offline, and ALPN is the one thing. So after a bit of wrangling, we've decided that v2 should use the h3 ALPN rather than an "h3-something", and of course the same thing for DoQ and for MoQ, or "mock", or whatever it's called.
G
So the draft currently says that, yes, all these ALPNs apply. There are a few reasons for this. Number one, since we have compatible version negotiation, there's not a lot of cost to messing up, to using sort of the incorrect version. And certainly, in this case, we don't anticipate a lot of v2-only servers and clients out there.
G
As a practical matter, some implementations would be complicated by having to revise the ALPN in use. As I talked about a couple of IETFs ago, if we have lots of versions and lots of ALPNs, then the registry sort of explodes, and that's kind of a bad thing to do to the ALPN registry just on behalf of QUIC. And finally, there's a deployment issue.
G
Obviously a lot of h3 things are tied pretty closely to the QUIC implementation, but in principle, if you have an abstract QUIC implementation with an API, the application should hand it an ALPN to use in the connection. And if you're trying to roll out new QUIC versions, or deprecate old QUIC versions in a QUIC implementation, then you would have to change all the applications, and that sounds unhappy. So for all those reasons, this is where we've landed, and I'd like to open the floor.
J
Waiting to see if audio is going through... all right. So, to be clear, I'm not coming to the queue because I have a problem with it; I think this is the right call. I think this is the wrong draft: this needs to be in version negotiation.
G
Okay. I will say that, in terms of our forging consensus, we sort of punted the issue of incompatible versions. I think this logic should apply to incompatible versions, but I'm not sure that's a universal sentiment. The current sort of pattern that we're developing here is that, if you are doing a new version draft, you should essentially inventory all the existing ALPNs and say whether they work.
G
Conversely, if you're proposing, if you're registering a new QUIC ALPN, you should really inventory all the existing QUIC versions and sort of say whether they work. If that ultimately becomes unmanageable, we can have a registry, but I don't think we're anywhere near that point yet.
E
David Schinazi. What to do when they're not compatible is already in the VN draft, Mike, or did I misunderstand what you were saying? Let's talk.
G
Yeah, I'm not sure how to write that down either. Like, I have some good boilerplate, I think, in the v2 draft that says "this is the same, so it should be fine", and I'm going to add something about DoQ, since DoQ has essentially shipped.
E
Oh, I'm already here, great. I want to add to a few of these things. So, first off, I don't really believe that there is a notion of whether this application-layer protocol works over this transport: any specification of an application-layer protocol will tell you what transport it runs on, and if it's another transport, it just doesn't work, period, unless you have some reason to believe it does, and we're not gonna...
E
But no, so, going into this a bit more: this is quite a bit of a mess, and the fundamental reason why this is such a mess is that (and obviously this is obvious in hindsight; it wasn't at the time) we really messed up Alt-Svc. When we did Alt-Svc, we used the ALPN tokens, whereas that's not what we should have used. We should have used a combination of ALPN and the entire transport stack underneath, because when you go to another service, that is the information you need to know.
E
I need to know what QUIC version I send an Initial at. And if we are just talking about ALPN here, it makes sense to reuse the h3 ALPN for QUIC v1 and QUIC v2, because the ALPN makes sense only in the scope of an underlying version whose handshake you're doing, and that way you can go up to the next thing. But when you're doing Alt-Svc, all of this completely falls apart.
G
Yeah, right, okay. So Lucas and I have proposed a draft which essentially adds an Alt-Svc parameter that tells you what the QUIC version is, which will solve the problem for h3, if people support it. We will also extend that to SVCB once, you know, we get a couple of draft versions under our belt. So this is sort of our long-term solution to solve the problem that you're describing; whether or not people will take it up is out of our control, but that's what we have.
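A rough sketch of what such an Alt-Svc parameter could look like on the wire; the parameter name `quicv` and its comma-separated hex syntax are assumptions for illustration, not the actual syntax of the proposed draft:

```python
def parse_alt_svc_versions(header_value):
    """Extract a hypothetical `quicv` parameter from an Alt-Svc entry,
    e.g. h3=":443"; quicv="1,6b3343cf", and return the advertised QUIC
    version numbers as integers. Returns [] when no version parameter
    is present, which models today's Alt-Svc behavior."""
    versions = []
    for part in header_value.split(";"):
        part = part.strip()
        if part.startswith("quicv="):
            value = part[len("quicv="):].strip('"')
            versions = [int(v, 16) for v in value.split(",") if v]
    return versions

print(parse_alt_svc_versions('h3=":443"; quicv="1,6b3343cf"'))
```

The point of the parameter is exactly Ekr's complaint: the client learns which version to send its Initial at, instead of inferring it from the ALPN token alone.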
E
So I see the queue is closed, so I'm gonna go sit down and let MT and Ekr chime in, but I think this is really important. I really believe that we cannot move QUIC v2 forward and kick this can down the road.
E
I personally believe that we need to solve this and stop, like, just writing more things. There's no rush whatsoever to deploying QUIC v2; no one's waiting on it. And I would like to see this solved, because if we ship QUIC v2 and then we realize that your draft doesn't work out, then we've made yet a bigger mess for ourselves.
F
Just waiting on audio. So, I put it in the chat, and I'll probably read this out: anything that's compatible with QUIC, both on a version negotiation basis and a feature basis, is probably okay to use the same ALPN.
F
I think we'll have to word that a little bit more precisely. I think you need a strict superset of the features that the protocol was using, which means that, for instance, if you define a new version of QUIC that only does streams and doesn't do datagrams, it would still be okay for HTTP, probably, maybe, depending on all this WebSocket stuff. But that's the sort of idea that we would want to write down.
F
I don't think it works if you lose features in the transport, and I don't think it works if you lose compatible version upgrades, because then you have performance problems. And that's part of the reason, I think that's primarily why, we're talking about the Alt-Svc thing: because if we were just doing ALPNs and didn't care too much about it, we could do version negotiation right in that case. So I think that's what everyone else here and on the chat has been saying.
E
Very quickly: the QUIC version negotiation draft isn't specific to any QUIC version, obviously, so it operates at the level of the QUIC invariants.
F
That's true, but we also include a whole bunch of advice in there about what to do with Retries, and what to do with a bunch of other things; 0-RTT, I think it was, as well. So I think probably what we can do is put another one of those subheadings under that section and say what you do with ALPN in the case that you have it. I think that's the right thing to do here. I don't think this is a property of v2; I think this is a property of VN.
G
And I guess the question is what we have to write down right now. I think we've reached agreement that v2, specifically, will be fine if we just use the ALPN, and we're establishing a pattern that people might follow unless there's a reason not to. So maybe we can trust future standards writers to say...
F
I think, like with Alt-Svc, we can only do the best that we can with the information that we have at hand and the experience that we have. I suspect we do know something here, and that's based on the implementation experience that we have, which suggests that you can use the same ALPN under the sort of narrow constraints that I described. And then maybe outside of that it's for the reader: you know, work it out for yourself.
C
Okay, we're kind of out of time for this, but we'll let Ekr respond. Well, speak quickly.
I
Yeah, so I'm not sure how quickly I'm going to speak, since things have kind of backed up for a while. So, a few points. First, I think worrying about this is probably a little premature, but let's not... the situation is largely going to be, I think, okay, in the sense that, you know, trying to run a situation where you actually have this much confusion about, (a), the properties of the underlying protocol and, (b), the properties of the ALPN support seems like probably not a great idea, and we'll probably not try to create one. You know what application you're running; it's not like your ALPN is switching between DoQ and h3, right.
I
Now, I think, David, you're right that this draft is agnostic about whether it's TLS or not. But it's hard to believe that any future thing we design isn't going to need some ALPN-ish construct, and so, even if that's carried in some entirely different way in the transport negotiation in the protocol, it's going to have to go somewhere. You know, ALPN was added to TLS for a reason, so I think it has to appear. I agree this has to appear, if anywhere, in the VN draft, and not in v2.
I
I think what has to appear is that any exterior protocol negotiation signal you do has to be done from scratch when you do an incompatible VN. Which is to say: suppose, for instance, that if you offered v1 you would have offered ALPN A, and when you offered v2 you offered ALPNs A and B. Well, when you get a VN that forces you back to v1, you have to offer A, and not A and B, because otherwise the attacker is able to force you into the posture of choosing a specific ALPN offer just by doing a VN.
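Ekr's "from scratch" rule can be sketched as a toy helper: the ALPN offer is a pure function of the version being dialed, recomputed after any incompatible Version Negotiation rather than carried over. The version numbers and ALPN tokens here are made up for illustration:

```python
# Illustrative mapping from QUIC version to the ALPNs a client offers.
# 0x00000001 is QUIC v1; the other codepoint is a v2 stand-in.
ALPNS_FOR_VERSION = {
    0x00000001: ["a"],        # with v1, only protocol A is offered
    0x6b3343cf: ["a", "b"],   # with v2, protocols A and B are offered
}

def alpn_offer(version):
    """Compute the ALPN offer from scratch for the version being dialed.
    After an incompatible VN steers the client to another version, this
    must be re-run; reusing the old offer would let an attacker steer
    the ALPN choice just by injecting a VN packet."""
    return list(ALPNS_FOR_VERSION[version])

first_offer = alpn_offer(0x6b3343cf)   # client first dials v2: ["a", "b"]
retry_offer = alpn_offer(0x00000001)   # VN forces v1: ["a"], not ["a", "b"]
```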
I
So I think, and I haven't thought about it, it's entirely possible there are other things like that that have that property, and in every single case they have to start from scratch. Now, I think it does kind of imply that things are done from scratch, but I think it really has to be explicitly stated.
G
Thanks, Ekr. I think the concern that I'm pulling out of this... I mean, I think we're moving forward with v2 using the v1 ALPNs, but there's David's concern about maybe holding something up until this HTTPS-bis draft matures a little bit. I'm not sure what would be held, exactly: the VN draft or the v2 draft? I was hoping to ship both of them, like, soon, I think. Sorry, I should have responded: I don't think it's necessary to wait for that. I think the change that I'm supposed to make to the VN draft is straightforward, if any change is needed; I'll file a bug to, like, check for the change, because I don't remember if we have any tests about this. And I think the v2 thing is fine. I don't think we need to do the Alt-Svc thing, frankly, at all, but certainly not before we ship this.
G
All right, now for something completely different: QUIC-LB. Oops.
G
Okay, so, when last we met, this thing had exploded, through some scope creep, into this multiplicity of config options. There were, like, three different algorithms, and each algorithm had a completely different implementation and completely different parameters with different limits; and there were not a lot of implementations, and we'd not done any interop. The second problem is still a problem, but the first one has improved. Next slide.
G
Okay, so now I got rid of all that, and there's just the one thing. Basically, every connection ID looks like this, and it can be encrypted or not. And if it's the specific magic length where it can be a single-block encryption/decryption, because it's 17 bytes, then it is; otherwise you use this multi-pass thing. So this is much cleaner conceptually.
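The length-based dispatch being described might be sketched like this; the cipher functions are stubs passed in by the caller, since the actual AES usage and the multi-pass construction were still under crypto review at this point:

```python
SINGLE_BLOCK_CID_LEN = 17  # 1 config byte + a 16-byte (one-AES-block) payload

def encrypt_cid(first_byte: bytes, payload: bytes,
                single_block_encrypt, multi_pass_encrypt) -> bytes:
    """Dispatch between the two QUIC-LB encryption modes by length.
    `single_block_encrypt` stands in for one block-cipher operation over
    exactly 16 bytes; `multi_pass_encrypt` stands in for the draft's
    multi-pass construction used at every other length."""
    if len(first_byte) + len(payload) == SINGLE_BLOCK_CID_LEN:
        return first_byte + single_block_encrypt(payload)
    return first_byte + multi_pass_encrypt(payload)
```

With real AES in place of the stubs, the "magic length" branch is the roughly 10 lines of extra code being referred to: one block operation, no padding, no extra passes.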
G
I think it's a lot easier to understand, and it's much less code, having implemented it both ways. Next slide. So these were, from last time, sort of the path forward I proposed for this. We did get a crypto review, and we did make some changes. We just got another crypto review, which was a little less positive about what we had done, and so Christian and I are working through that. They kind of want us to do more passes, which seems like a problem.
G
So we're exploring how to fix that. I was going to delete the block cipher, but there was some pushback, and I think the new system, where there's just a magic length at which you use a block cipher, cleans up a lot of complexity. It's like 10 lines of code, so I think we'll just leave it in. And I sent an email a few weeks ago about splitting the draft.
G
So, to remind you guys: there's this load balancer thing, where we encode the server ID in any connection ID, and then there's this mostly unrelated thing about offloading Retry to some sort of service or hardware thing or whatever. That is loosely under the theme of middlebox coordination, but otherwise there's no real relationship at all. And I think there was pretty strong support on the list to split those, and I've already got a PR for that.
G
So, unless somebody comes up to the mic and says this is a terrible idea, I'm probably going to push that commit shortly after this. And then, from an editorial standpoint: there is, again, some more crypto review stuff to do, but the design is getting pretty close to being done. I think what we really need is some interop and some deployment experience.
G
I think Google has the intent of deploying this in the near-to-mid-term, I'll say, so that will at least give us some experience trying to manage and handle all the configuration and all that. I've gotten some bites from a financial company about their implementation. So, maybe... I would have thought by now we'd have more servers that supported this, but that seems not to have happened.
G
So if you have a server implementation, I'd really appreciate some effort in this space. I don't think we're a mile away from last call, but it needs a little more maturation before we get there.
I
Hi. So, yes, I am just catching up to speed on this document. I did read the Inria review, and it was quite concerning.
I guess, you know, I don't think we should invent our own version of FFX in this working group, and so either we should remove that section or replace it with FFX. But, like, it took an enormously long time to get FFX right, and a lot of real cryptographers, and so I think it was, like, regrettable.
I
think
I
think
it
was
like
regrettable.
I
The extent to which we innovated cryptographically in doing QUIC header encryption is about the limit of what I'm willing to see the IETF do without, like, a lot more cryptographer review than we seem to have.
G
I mean, the concern about FFX is twelve passes, which seems like a lot.
I
Cheaper, but, like, I mean... sorry, I cut you off, go ahead.
G
Like, this is not a well-formed idea right now, but I'm thinking about possibly having some sort of configuration option where you kind of can turn the knob on how well you're doing here, in terms of, like, the number of passes or something. But that's...
I've gotta go a couple of rounds with Christian and the crypto reviewers to figure out what we're doing there, but I understand your concern. Yeah.
I
I mean, I guess, just generally: the IETF used to do this kind of thing, and used to freelance a lot on the crypto, and we've really moved towards having, as much as possible, validated crypto that we have, like, high confidence in. And so, to not have that...
I
That seems like a regression, and if we can't solve this problem, then we should throw up our hands and say we can't solve it, and, like, wait for the cryptographers to solve it for us. And, you know, it is possible there's something that can solve this problem, given the more limited design space than general FPE. But I think for us to write into a Proposed Standard document something which we have this little confidence in seems unfortunate.
G
Okay, I mean, I'm certainly open to the cryptographers proposing something in the, like, sub-16-byte plaintext space that meets the constraints that we have. I'm not married to this particular way of working, and I understand the concerns; it's just that this has not been forthcoming to this point, and I've not completely digested this latest review. So I need to huddle with Christian and figure out what we're gonna do about it.
L
Yeah, I mean, I'm kind of... I have a lot of sympathy for what Ekr said, okay, as in: we should not invent stuff. And I actually insisted on having these crypto reviews, to make sure that we are not off the deep end.
L
I would much prefer to have something that is completely standard, and, yeah, I mean, if we are okay with twelve passes, let's do twelve passes. But I must say that we will have to discuss with, you know, folks to understand exactly what the implications are, and if the answer is "don't do that", well, we won't do it, or we should not do it.
M
Yeah, hello, everybody. I'm Mirja Kühlewind, I'm presenting for the author group here, and I hope my co-authors will just jump into the queue anytime they want to say something. And we go to the next slide, yeah.
M
So, the multipath draft was adopted earlier this year, and we submitted the -00 version. The -00 version already had a couple of editorial changes and clarifications and stuff compared to the individual draft, but no design changes, I believe. And then we submitted a new version just at the submission deadline, which also mainly had editorial changes, but to a larger amount: like, we added some kind of overview section to make it easier to read.
M
We had a couple of clarifications about how to use, or, like, what's the relation to, existing transport parameters, and there were some clarifications about timeouts and so on. So this is technical stuff, but it was clarification of stuff that was missing, and not, like, supposed to be any kind of new design change or anything like that.
M
Yeah
look
at
the
div
yourself.
If
you
want
to
know
what
the
details
are.
So,
let's,
let's
jump
into
the
open
issues
next
slide.
M
M
M
This
is
an
issue
that
has
been
around
for
a
while,
as
I
just
said
when
we,
when
we
took
the
existing
multipath
drafts
and
and
merged
them
into
this
draft,
we
really
tried
to
concentrate
on
the
bare
minimum,
and
so
this
part
was
existent
in
at
least
the
alibaba
draft,
but
we
didn't
take
it
over
and
the
function
we're
talking
about
here
is
to
indicate,
to
the
other
end
that
one
of
the
paths
or
multiple
of
the
paths
actually
should
not
be
used
at
this
point
of
time.
M
So
you
just
keep
them
as
a
backup
and
that's
a
very
typical
scenario.
You
have,
like
this
handover
scenario
where
you
open
one
path
on
your
wi-fi
and
one
on
your
cellular,
and
as
long
as
you
have
good
wi-fi
connectivity,
you
don't
want
to
use
the
cellular
because
it's
more
expensive
but
you,
the
client,
have
this
knowledge;
you're
requesting
data
from
the
server
and
the
server
doesn't
know
that.
M
So
the
question
really
is:
do
you
want
to
add
it
back?
As
I
said
it
was
there
in
the
alibaba
draft,
there
was
a
frame
called
path
status.
We
don't
have
that
frame
anymore.
That
path
status
frame
had
like
a
lot
of
information,
not
only
this
information,
but
this
was
like
one
piece
of
the
information
in
there
and
it's
also
a
function
that
is
part
of
multipath,
quick,
for
example,
because
there's
a
bit
in
multipath
quic.
So
when
multipath
quic,
sorry,
multipath
tcp,
when
multipath
tcp
was
designed,
they
also
decided.
This
is
an
important
function.
M
They
want
to
support
it.
So
any
opinions
should
we
re-add
it,
there's
no
pr
yet
but
like,
I
think-
and
we
don't
have
to
discuss
about
how
we
want
to
add
it,
just
like
getting
some
feedback.
If
people
think
it's
useful.
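To make the function being discussed concrete, here is a purely hypothetical sketch of what such a standby signal could look like. There is no PR yet, so everything here is invented for illustration: the frame name echoes the old alibaba PATH_STATUS frame and MPTCP's MP_PRIO backup bit, and the field layout and helper names are assumptions, not anything from the draft.

```python
# Hypothetical sketch only: frame layout and names are invented, mirroring
# MPTCP's MP_PRIO backup bit — one "available/standby" flag per path,
# sent by the endpoint that has the knowledge (e.g. the client on
# Wi-Fi + cellular telling the server not to use cellular yet).

AVAILABLE, STANDBY = 0, 1

def encode_path_status(path_id, status, seq):
    """Encode a toy PATH_STATUS frame as a (type, path_id, seq, status) tuple."""
    assert status in (AVAILABLE, STANDBY)
    return ("PATH_STATUS", path_id, seq, status)

def apply_path_status(scheduler, frame):
    """Receiver side: a standby path stays validated but is not scheduled."""
    _, path_id, seq, status = frame
    # A sequence number lets the receiver ignore reordered, stale updates.
    if seq > scheduler.get("seq", {}).get(path_id, -1):
        scheduler.setdefault("seq", {})[path_id] = seq
        scheduler.setdefault("standby", set())
        if status == STANDBY:
            scheduler["standby"].add(path_id)
        else:
            scheduler["standby"].discard(path_id)

# The client marks its cellular path (path 1) as a backup.
sched = {}
apply_path_status(sched, encode_path_status(path_id=1, status=STANDBY, seq=0))
assert sched["standby"] == {1}
```

The sequence number is one way to get the "most recent signal wins" behavior that an explicit status frame needs; whether the real design uses one is an open question.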
C
D
Yeah
this
this
is
spencer
dawkins.
So
I
think
that
this
is
really
basic
and
I
think
that
almost
everybody
that
has
a
cell
phone
that
has
wi-fi
connectivity,
we'll
find
that
you
know
we'll
find
this
late
useful
sooner
or
later.
So
I
think
this
is.
I
think
this
is
really
basic.
D
D
So
you
know
that's
just
kind
of
where
I
am
I'm
still
thinking
my
way,
through
kind
of
all,
of
the
different
reasons
why
people
do
multipath,
but
on
this
one
it
seems
like
you
know,
it
seems
like
this
is
also
something
where
I
know
something
about
my
end
that
I
want
you
to
know
and
that
you
can't
guess
you
know
you.
The
server
can't
know
that
I
had
this
preference
and
it
seems
like
it's
harder
to
do.
You
know
it's
hard.
D
You
know
not
having
this
will
make
things
harder
in
the
base
protocol.
This
is
in
the
base,
multi-path
extension,
but
that
would
be
my
theory.
Thank
you.
N
All
right,
tommy,
pauly
apple,
so
on
this
you
know,
I
think,
you
probably
could
have
other
implicit
ways
to
tell
the
server
that
I
don't
want
to
use
this
path.
Yet
by
saying,
oh,
I
haven't
been
actually
sending
anything
from
the
client
on
this
path.
N
You
could
do
things
like
that,
but
that
would
be
relatively
complicated
and
you
know
I
think,
a
lot
of
the
cases
where
we've
seen
multipath
deployed
so
far,
at
least
in
our
experience,
is
when
you
have
very
specific
applications
that
are
using
it,
where
you
have
a
lot
of
coordination
between
the
client
and
the
server,
but
hopefully
with
multipath
quick
as
quick
is
more
and
more
ubiquitous
on
the
web.
We're
just
going
to
start
using
this
with
random
servers
that
we
don't
have
that
coordination
with.
N
The
fact
that
there
is
something
equivalent
within
mptcp,
I
think,
is
a
very
strong
argument
for
saying
we
should
have
at
least
parity
with
it,
and
I
don't
think
we
should
necessarily
go
beyond
it,
and
so
I
would
encourage
us
to
stay
minimal
and
not
add
a
huge
bunch
of
complexity
or
too
much
extensibility
right
now.
There
just
you
know,
use
whatever
the
simplest
bit
we
need
to
send,
but
let's
do
something
to
at
least
maintain
parity
with
mptcp.
M
Yeah,
if
I
remember
correctly,
this
mp
prior
option
was
designed
to
be
extendable
and
and
provide
more
features,
but
at
the
end
they
only
specified
this
one
bit
in
this
initial
version,
so
yeah
jun,
fay,
please.
C
O
O
So
that's
why
we
think
it
is
actually
important
for
us
to
have
some
explicit
way
to.
For
example,
if
we
see
the
data
is
consumed,
a
lot
on
one
path
and
we
can
explicitly
send
a
signal
to
set
up
setup
path
as
standby,
so
I
actually
agree
with
what
mira
just
said.
I
think
it's
important
to
have
this
functionality.
P
Thank
you,
markus
in
the
room:
markus
amend,
deutsche
telekom.
As
you
know,
mirja
I
already
participated
in
the
discussion
at
github
and
I
fully
support
this
idea.
The
only
question
I
have
is
one
bit
enough
at
the
end,
or
do
we
need
more
bits
in
the
multipath
dccp
I
will
present
on
on
friday
during
the
tsvwg
slot.
P
M
Yeah,
as
I
said,
there's
no
pr
yet
so
we
will
propose
a
solution.
I
hear
definite
agreement
that
we
want
at
least
this
one
bit.
So,
let's
see
where
we
go.
Thank
you.
Q
The
advanced
feature
regarding
to
the
scheduling
you
basically
signal
the
scheduling
preference
of
your
paths.
It's
not
a
zero
one
decision.
It's
actually
you
can
signal
how
much
traffic
you
want
to
go
to
a
specific
path,
so
I
think
it
should
be
left
out
of
the
base
draft.
It
should
be
put
into
another
extension
draft.
M
M
Yeah,
so
this
issue
is
an
old
one:
in
rfc
9000
only
the
clients
can
migrate
and
there
are
good
reasons
for
it,
because
it
makes
the
whole
thing
much
simpler
and
that's
also
the
main
use
case.
However,
if
I'm
right
and
I
might
be
wrong,
I
believe
the
reasons
for
having
this
restriction
is
not
valid
for
multipath
anymore,
because
in
a
multipath
case,
you're
not
closing
the
old
path,
you're,
just
opening
a
second
path.
So
if
the
second
path
fails
to
some
extent,
you
still
have
the
old
one.
So
I
don't.
M
I
didn't
find
a
good
reason
to
actually
keep
this
restriction.
It's
a
change
to
it
would
be
a
change
to
what
the
base
version
one
spec
says,
and
we
try
to
keep
those
changes
minimal,
but
if
we
don't
have
a
good
reason
restricting
the
multipath
functionality
in
a
way
that
doesn't
allow
for
certain
use
cases.
This
doesn't
seem
to
be
useful
for
me,
so
I
wanted
to
have
some
feedback.
If
people
think
this
restriction
is
still
useful
or
if
people
are
open
to
actually
release
that
restriction.
J
So
what
I
had
envisioned
for
server
initiated
paths
was
basically
a
frame
that
looks
a
lot
like
the
server
preferred
address
transport
parameter
where
the
server
asks
the
client.
Please
try
to
reach
me
on
this
ip
address
and
then
the
server
and
the
client
is
the
one
who
sends
the
first
packet
to
open
things
up
with
an
app.
M
So
yeah
that
I
totally
agree,
but
I
think
that
would
actually
be
an
extension
that
we
kind
of
didn't
consider
of
the
base
functionality.
And
the
point
is:
if
the
server
tries
to
open
a
path
and
you
fail,
then
you
know
nothing
happened.
You
just
fail
in
in
the
multipath
case,
because
you
still
have
the
old
path.
So,
yes,
you
know
in
many
scenarios
it
might
not
be
useful,
but
there
might
be
use
cases
where
it's
useful
and
it's
just
like
not
necessary
to
have
this
restriction.
From
my
point
of
view,.
H
I
I
guess
I
would
I
mean
I'm
sort
of
not
not
approached
that
necessarily
but,
like
my
assumption,
would
be
in
the
peer-to-peer
use
case
you're
using
ice
anyway
or
something
like
it
and
therefore
like
and
of
course
the
situation
went
different
like
are
we
ever
going
to
use?
Are
we
ever
any
I
mean?
Are
we
ever
going
to
do?
Quick
multi-path,
like
I
mean,
there's
a
peer-to-peer
case
where
you
like,
just
like
randomly
probe
new
paths
with
quic
rather
than
with
ice.
M
I
I
mean
I
mean
realistically
no
real
realistically,
like
realistically,
you
like
need
ice
to
like
punch
the
holes
in
the
nets
and
do
all
kinds
of
other
crap
like
like
that's.
That's
why
all
of
that
stuff
lives
above
ice.
M
I
R
So
hi
brian
trammell,
google.
I
basically
came
up
to
this
or
put
myself
in
the
queue
to
say
something
very
much
like
what
mike
bishop
said.
I
I
think
what
I'm
hearing
is
the
right
way
to
do.
R
I
think
it's
it's
and
like
releasing
this
restriction
in
the
initial
multipath
extension
sort
of
points
in
that
direction,
but
I
think
you
need
to
be
very
explicit
about
what
the
what
this
is
going
to
get
you
in
the
base
case
and
then
point
to
future
work
to
actually
so
like
you
need
the
like
exercise
or
on
the
ice
stuff
and
then
very
excited
about
mike's
server
preferred
address.
So
thanks.
S
Details
details,
so
I
would
be
fairly
hesitant
to
release
this
in
the
multipath
extension.
I
joined
the
queue
just
to
say
that
at
the
time
when
we
were
doing
client
initiated
migration
in
the
original
quick
spec,
there
were
an
awful
lot
of
fairly
thorny
problems
that
we
were
able
to
punt
on
by
saying
that
the
server
can't
do
this,
and
so
I
think
just
my
initial
reaction
to
this
slide
was,
like.
Oh
yeah.
Sure,
like
you
know,
try
to
open
a
new
path.
S
It'll
be
fine
if
it
doesn't
work
nobody's
upset,
but
it
might
be
worth
going
back
and
digging
up
some
of
those
discussions,
because
there
were
a
lot
of
fairly
painful
things,
and
so
I
think
my
personal
preference
would
be
to
keep
the
restriction
here
and
it
have
a
completely
separate
document
that
opens
that
up
like
personally.
I
think
it
would
be
super
useful
I'd,
love
to
see
that
for
peer-to-peer
use
cases,
but
it
seems
like
that
would
be
best
as
a
different
document.
M
So
this
is
exactly
the
point
like
I
know
we
had
like
a
whole
lot
of
discussions,
but,
like
my
memory,
wasn't
good
enough
to
figure
out
what
we
discussed,
and
so
I
tried
to
to
read
the
draft
and
figure
out
what
the
restriction
was,
and
I
think
the
main
restriction
was
exactly
these
nat
use
cases
and
the
risk
of
failure,
and
I
don't
think
that's
a
reason
to
keep
it
here
in
multipath.
M
So
if
there
have
been
more
reasons-
and
somebody
remembers
better
what
the
discussion
was
or
maybe
have
to
dig
through
github
or
whatever,
then
then
we
should
pick
them
out
and
we
should
discuss
them.
But
I
didn't
find
any
other
reasons
and
if
we
don't
have
a
good
reason
then
like
I
don't
want
to
keep
this
restriction
but
like
if
you,
if
you,
if
you
remember
better
than
I
do
then
just
send
an
email
to
the
list
or
to
me
that
would
be
helpful.
D
This
is
spencer
dawkins.
If
I
was
understanding
what
eric
was
suggesting
and
I'm
asking
both
eric
and
maria
would,
would
it
be
possible
for
this
extension
draft
to
be
silent
about
this,
and
we
could
have
the
conversation
about
what
what
could
happen
in
the
base
quick
protocol
is,
that
is
that
kind
of
what
eric
was
suggesting.
C
Okay,
I
think
eric's
a
hangover
from
when
he
joined
before.
Let's
go
to
jana
now,
please,
yes,.
T
I'm
waiting,
okay,
I
think
I'm
in
so
the
one
one
thing
I
might
note
is
that
simply
removing
the
restriction
doesn't
really
tell
us
much
about
what
the
problems
might
be.
It's
about
what
you
want
to
do
after
that.
T
But
let's
see
what
exactly
I
mean
whatever
it
is
that
we
end
up
doing
from
the
server
in
terms
of
initiating
new
paths
and
so
on
and
so
forth
will
make
it
more
interesting
for
us
to
understand
the
the
interactions
with
various
things
and
as
eric
was
pointing
out,
there
were
a
lot
of
little
corners
which
we
walked
down
when
we
were
doing
the
client
initiated
migrations
and
be
interesting
to
go
down
those
paths
again,
once
you
have
a
mechanism
here
to
speak
of.
M
Yeah,
I
agree,
but
it
sounds
like
people
are
interested
to
actually
have
that
discussion.
So
that's
good.
L
Yeah
we
we
have
had
that
discussion
a
number
of
times
in
the
author's
list
and
on
github,
and
I
I
really
disagree
with
miria
on
that.
One
where
I
come
from,
is
that
the
multipath
extension
has
been
designed
to
be
as
compatible
as
possible
with
the
existing
quick
v1,
and
if
we
do
a
departure
from
the
quick
v1
restriction
that
we
only
start
from
the
client
that
departure
has
a
cascade
of
implications,
an
example
would
be,
for
example,
who
validates
the
path,
right.
L
M
Yeah,
I
mean,
as
you
said,
we're
disagreeing
a
little
bit
here,
because
we
do
change
things
in
the
in
the
base
back
right
and
we
we
ask
we
to
keep
it
minimal.
But
if
there's
no
good
reason
to
keep
it,
then
we
should
remove
it
and
there.
L
L
M
So
you're
saying
even
if
the
server
opens
the
connection,
we
still
need
the
restriction
that
first,
the
client
needs
to
send
data.
L
Yeah
and
and
basically,
if
you
don't
do
that
cascade
of
thing,
it
doesn't
work.
I
really
wish
we
dropped
that
issue
for
now
and
we
punt
it
to
an
explicit
extension
that
that's
what
mike
said,
which
is
like
bishop,
which
is
basically
to
exchange
the
address,
et
cetera
et
cetera.
Do
the
full
shebang?
Don't
don't
do
this,
one
tiny
thing
on
the
side
and
don't
hide
it
in
the
main
draft.
M
C
Okay,
let's,
let's
get
harold
in
the
room
to
speak
thanks
chris.
U
M
M
Okay,
so
I
just
I
just
note
this-
I
don't
think
we
have
to
have
the
discussion
right
now.
This
is
an
open
issue.
We
have
where
there
was
a
proposal
to
also
have
some
kind
of
effectively
zero
rtt
behavior
for
new
paths,
because
currently,
when
you
open
a
new
path,
you
have
path
validation.
So
it
takes
a
whole
round
trip
time
until
you
can
send
data
on
the
new
path
and
the
question
was:
do
we
want
to
make
more
than
that?
C
We
have
three
people
in
the
queue.
Please,
please
keep
it
short
yanmei.
V
Please
everyone:
this
is
yummy
from
alibaba
for
this
issue.
We
can
include
some
token
mechanisms
to
help
endpoints
validate
the
peer's
address
quickly
and
surely
this
will
bring
complexity
for
creating
new
paths
and
we
need
to
consider
the
security
problems
very
carefully
so
at
first
we
want
to
make
sure
that
whether
people
need
this
or
not,
because
in
quick
we
want
the
points
must
to
pass
validation
first
before
sending
non-profit
package
during
migration.
V
R
Brian
hi
brian
trammell,
google.
I
I
would
just
point
out
that
a
answer
to
this
question
what
I
think
caused
the
previous
discussion
to
collapse
into
one
of
its
two
states,
so
maybe
spending
a
little
bit
more
time
on
this,
not
right
now,
but
revisiting
the
multipath
server
restriction.
Removal
after
there's
an
answer
here,
because
I
think
that
might
actually
make
the
the
path,
validation,
restriction,
removal
on
server,
initiated
paths
easier.
C
M
M
M
So
that
is
a
discussion
that
came
up
when
we
tried
to
clarify
how
the
idle
timeout
worked
and
currently,
what
we're
saying
is
that,
like
you,
have
this
max
idle
timeout
parameter
and
you
just
use
it
for
all
paths,
but
given
the
paths
might
be
very
different
or
you
might
use
them
in
a
very
different
way.
It
could
be
useful
to
actually
have
different
timeouts
on
different
paths
and
you
might
want
to.
M
Actually
you
can
also
just
like
do
that
locally
and
it
will
not
totally
break,
but
it
might
be
useful
to
actually
signal
this
information
to
the
other
end.
So
discussion
so
far
was
basically
we
had
some
people
saying
yes,
this
is
helpful
because
it's
more
explicit
and
it
makes
it
easier
to
close
your
paths
at
a
at
a
valid
point
of
time,
and
then
there
was
also
an
argument
about.
No,
this
doesn't
give
you
much
and
it's
just
too
complicated.
M
P
M
Okay
and
that's
an
issue
which
is
actually
related
to
the
packet
number
space
question,
because
that's
only
an
issue
that
exists
if
we
use
single
packet
number
spaces
on
our
paths.
So
if
we
decide
to
use
multiple
packet
numbers
spaces,
this
issue
just
goes
away,
and
we
also
we
have
a
pr
for
this.
So
maybe
this
is
straightforward,
but
some
feedback
would
be
useful.
M
So
the
problem
is,
if
you
have
a
single
packet
number
space
and
you
send
an
acknowledgement
and
you
provide-
and
you
acknowledge,
packets
in
the
same
frame
that
have
been
sent
or
received
over
different
passes
and
one
of
those
packets
carried
an
ecn
marking.
You
cannot
as
the
receiver
of
the
act.
You
cannot
distinguish
anymore,
which
pass
the
ecn
marking
was
on,
because
the
ecn
feedback
in
quick
is
just
like
a
counter,
so
the
counter
increased,
but
you
don't
know
which
path.
M
So
we
need
to
address
this
issue.
It's
an
open
problem
and
there
and
like
what
the
current
text
proposes
is
like
three
things:
it's
a
recommendation
for
the
sender
of
the
acknowledgement
to
actually
separate
if
you,
if
the
ecn
marking
increases,
if
ecn
is
used-
and
you
see
a
marking,
then
try
to
separate
your
acks.
you
can.
You
can
always
just
acknowledge
packets
of
one
path
in
one
ack.
That's
allowed
in
quic.
M
M
That's
what
the
pr
says
right
now.
The
other
option
is
always
to
just
disable
ecn
support,
and
so,
if
you
received
such
an
acknowledgement,
you
can
decide
at
this
point
to
disable
or
of
course
you
can
just
like
whenever
you
use
single
packet
number
spaces,
you
just
don't
use
ecn
at
all.
That's
like
the
easiest
solution,
but
would
make
me
sad
actually,
so
that's
the
pr
and
I
don't
know
there
was
a
bit
of
discussion
with
christian.
I
think,
but
I
believe
this
is
mostly
ready
to
merge
if
people
agree
to
these
recommendations.
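The ambiguity being described can be shown with a toy model. This is an illustration of the problem, not code from any implementation: it abstracts QUIC's real ACK frames, which carry cumulative ECN counts (ECT0, ECT1, ECN-CE) per packet number space, down to a single aggregate CE counter.

```python
# Sketch: why ECN feedback is ambiguous with a single packet number space.
# The ACK carries one aggregate counter for the whole space, so the peer
# cannot attribute a CE mark to the path it actually arrived on.

def ack_with_ecn(received):
    """Build a toy ACK: the acked packet numbers plus one aggregate CE count."""
    acked = [pn for (pn, path, ce) in received]
    ce_count = sum(1 for (pn, path, ce) in received if ce)
    return acked, ce_count

# Packets from two paths share one number space; one arrived CE-marked.
received = [(1, "wifi", False), (2, "cellular", True), (3, "wifi", False)]
acked, ce = ack_with_ecn(received)

# The ACK sender knows the marked packet came in on "cellular", but the
# aggregate counter it sends (ce == 1) carries no path information, so the
# peer cannot tell which path's congestion controller should react.
assert acked == [1, 2, 3]
assert ce == 1
```

This is why the PR's recommendation is to separate the ACKs per path when a marking is seen: acknowledging only one path's packets in a given ACK restores the attribution without changing the wire format.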
W
W
So
I
agree:
okay,
happily
read
it
and
happily
give
comment.
F
M
Okay,
yeah,
then,
let's
move
to
the
big
issue,
actually
one
more
yeah,
so
the
the
big
thing
that
we
need
to
resolve
in
this
group
is
really
do.
We
want
to
use
single
packet
number
spaces
on
all
paths
or
use
separate
packet
number
spaces
on
each
path
and,
like
we
had
this
discussion
already
before,
we
adopted
the
draft
and
we
kind
of
in
a
stage
where,
at
the
current
draft,
actually
describes
both
options
in
the
hope
that
people
would
implement
both
or
one
of
the
options.
We
get
some
more
experience
with
it.
M
So
this
is
a
little
bit
the
analysis
about.
What's
the
pro
and
cons
and
I
quickly
will
run
through
it,
but
then
also
talk
about
the
pr
that
we
have
there
right
now
and
like
christian
or
anybody
of
the
other
authors
feel
free
to
jump
in
the
queue
anytime
and
just
add
stuff.
M
So,
from
an
efficiency
point
of
view,
the
multiple
packet
number
space
solution
is
more
efficient
because
you
can
just
reuse
the
existing
loss,
recovery,
logic
and
everything
is
like
clear
and
easy,
but
effectively
what
the
implementation
experience
showed
so
far
is
that
the
single
packet
number
space,
if
you
like
do
a
little
bit
of
an
additional
effort,
is
like
from
the
performance
point
of
view
nearly
similar
efficient.
So
that's
like
not
the
big
point
to
distinguish
things
here.
M
Code
complexity
is
something
we
discussed
a
lot
because
initially
we
thought
like
having
a
single
packet
number
space
means
actually
less
code
changes
while
having
multiple
packet
number
spaces
really
adds
like
a
complete
new
code
path,
and
if
you
don't
use
that
feature,
it
might
not
be
used
very
often.
So
that
was
a
concern
but
to
actually
have
a
good
performance
with
a
single
packet
number
space
solution.
M
You
have
to
be
really
smart
about
how
to
to
send
your
packets
in
order
to
keep
the
ack
size
small
in
order
to
have
efficient
recovery
mechanism
and
so
on
and
having
the
smartness
in.
There
is
additional
code
and
it's
additional
logic,
and
it's
probably
also
additional
logic
that
we
don't
just
want
to
leave
to
the
implementation
but
specify
in
the
draft
to
some
extent,
because
otherwise
people
will
implement
this
and
just
get
bad
performance
and
will
not
like
it.
M
So
we
have
to
give
a
advice
about
it
and
its
additional
code,
and
so
you
know
the
big
difference
here
is
like
the
third
point:
it's
the
ack
handling
where,
like
in
the
multiple
packet
number
space
solution,
we
add
a
new
acknowledgement
frame,
and
you
can
really
distinguish
which
packet
was
sent
and
in
which
path
and
everything
and
ecn
information
is
clear.
We
don't
have
the
problem
that
I
was
just
like
talking
about
before
and
in
the
single
packet
number
space
solution.
M
You
really
have
to
add
some
logic
to
make
sure
that,
like
your
acks
don't
grow
too
large,
because
if
you
have
two
passes
with
very
different
delay,
you
see
holes
in
your
ack
space,
and
that
can
increase
the
ack
size
a
lot.
you
have
to
be
smart
about
how
you
send
your
packets,
how
to
distribute
the
packet
numbers
and
how
you
create
your
acks,
and
yeah.
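The ACK-range inflation just described can be sketched numerically. This is a toy model, not implementation code: it just counts how many ranges an ACK frame would need when packet numbers from a fast and a slow path interleave in one shared space.

```python
# Sketch: with one shared packet number space, interleaving sends across a
# fast and a slow path leaves holes at the receiver, so ACK frames need many
# ranges; per-path number spaces keep each path's numbers contiguous.

def ack_ranges(pns):
    """Collapse a set of received packet numbers into (start, end) ACK ranges."""
    ranges = []
    for pn in sorted(pns):
        if ranges and pn == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], pn)
        else:
            ranges.append((pn, pn))
    return ranges

# Shared space: even numbers went on the fast path, odd on the slow path,
# and only the fast path's packets have arrived so far.
arrived_shared = [0, 2, 4, 6, 8]
assert len(ack_ranges(arrived_shared)) == 5   # one range per packet

# Separate spaces: the fast path numbers its own packets 0..4, so the same
# arrivals acknowledge as a single contiguous range.
arrived_per_path = [0, 1, 2, 3, 4]
assert len(ack_ranges(arrived_per_path)) == 1
```

Each extra range costs bytes in every ACK, which is why the single-space design needs the sender-side smartness about how packet numbers are distributed across paths.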
M
So
you
know
like,
as
I
said,
I
think
the
code
complexity
is
like
not
the
big
point
here
anymore,
because
they
are
kind
of
trade-offs.
But
what's
still
a
big
difference
is
that
use
of
no
connection
id
on
both
ends
is
not
supported
with
multiple
packet
number
spaces
effectively.
The
multiple
packet
number
spaces
need
some
kind
of
identifier
in
the
packet
to
figure
out
where
your
packet
belongs
in
order
to
decode
it.
So
you
need
a
connection
id
and
that's
really
the
big
difference
next
slide
yeah.
M
So
we
discussed
a
lot
about
like
how
you
can
enable
these
use
cases
also
for
multiple
packet
number
spaces,
and
there
are
some
ways
to
actually
handle
this.
If
you
only
have
like
one
connection
id
in
one
direction,
it
might
not
like
it
might
be
a
little
bit
fragile
or
like
not
easy
and
some
fiddling
that
there
are
possibilities,
but,
like
this
table,
mainly
just
tells
you
that
you
at
least
need
a
connection
id;
that's
our
conclusion.
R
So
thanks
a
lot
for
both
of
these
tables.
That
makes
this
issue
a
lot
easier
to
understand.
I
would
say,
on
the
previous
on
the
previous
slide,
you'd
said
that
the
the
code
complexity
is
kind
of
a
trade-off.
You
have
to
choose
one
or
the
other.
R
I
would
point
out
that
in
the
single
pn
space
in
order
to
be
performant,
you
need
an
entire
set
of
special
cases,
whereas
in
the
multiple
pn
space
you
just
need
a
new
abstraction
where
you
might
not
have
had
one
right
like
so
you
already
have
a
loss,
recovery,
algorithm
instance.
Maybe
you
didn't
split
it
out
in
the
right
way
and
we're
forcing
you
to
do
that.
R
That
does
seem
to
be
like
a
very
green
yellow
to
me,
so
I
do
think
that
the
the
the
code
complexity
on
the
single
pn
space,
if
you're
trying
to
be
performant,
seems
to
me
to
be
a
negative
on
that
side
than
the
other
one.
It's
like.
Basically
do.
We
want
to
force
people
to
do
more
more
code
complexity
with
respect
to
like
a
completely
separate
code
path,
as
opposed
to
an
instance
of
an
existing
code
path
versus
the
zeroing
cid
problem.
R
M
R
F
Yes,
okay,
so
I
think
perhaps
there's
a
code
complexity,
piece
that
you
haven't
gotten
on
the
multiple
p
n
spaces
side
here,
which
is
the
key
schedule.
If
you're
going
to
be
using
multiple
packet
number
spaces,
you
can
do
separate
key
derivation
for
the
different
spaces.
Otherwise
you
will
have
nonce
reuse.
F
So
I
guess
that's
probably
just
another
version
of
the
saying
that
same
thing,
which
is
you
need
multiple
instantiations,
but
it
does
change
the
way
that
you
even
start
using
a
single
path
in
that
case,
which
is
a
little
awkward.
L
I
I
have
been.
We
have
been
discussing
that
for
quite
some
time
and
the
point
that
mirja
is
making
there
on
the
additional
code
is
really
based
on
the
implementation
experience
in
my
implementation
and
and
yes,
you
do
need
the
additional
code.
If
you
want
to
to
do
that,
that
additional
code
is
actually
on
a
common
path.
L
You
can
use
it
in
the
single-path
case,
but
it
has
the
advantage
of
dealing
very
well
with
things
like
the
size
of
acks
in
general
and
with
the
the
issue
of
out
of
order
delivery
in
general,
because
out
of
order
delivery,
also
messes
the
size
of
acks,
and
if
you
you
were
to
see
that
you
would
see
that,
for
example,
if
the
network
was
doing
equal-cost
multipath,
that
will
be
it'd
be
useful
as
well.
So
in
my
implementation,
that
logic,
which
was
added,
is
in
the
in
the
main
path.
L
The
one
one
thing
I
would
point
out
in
the
proposed
solution
that
we
have
is
that.
M
Actually
christian,
as
you're
here,
should
we
just
move
on
to
slides
up,
and
you
want
to
talk
about
this.
L
Yeah
yeah
I
can,
I
can
take
it,
I
mean
basically
because
I
I
I
look
at
the
problem
and
I
say:
okay,
we
mostly
have
an
issue
with
no
with
zero
length
connection
id.
If
we,
if
we
were
not
using
zero-length
connection
id.
we'll
just
use
multiple
number
space
and
yes,
it's
a
bit
awkward,
I
mean
we
have
to
do
some
changes
on
the
interface
to
the
to
the
encryption,
but
that's
not
too
bad.
I
mean
it's,
it's
something
you
do
once
and
it's
done
it's
easy
to
test
now.
L
L
On
the
receiver
side,
I
mean
if,
as
a
receiver,
I
decide
to
say
hey,
I
am
going
to
receive
another
connection
id
and
I
am
going
to
also
support
the
server
when
using
multipass
when
using
multiple
paths,
then,
by
doing
that,
the
the
you
are
forcing
yourself
to
implement
all
the
arc
logic
for
multiples,
I
mean
basically
make
sure
to
not
sending
too
many
acks,
making
them
short,
etc.
L
Here
I
mean
on
the
on
the
side
of
the
the
node
that
is
not
using
zero-length
connection
id
but
speaks
to
a
node
that
is,
in
that
case
the
complexity
is
on
the
sending
side.
I
mean,
how
do
you
manage
multiple
links?
How
do
you
manage
to
do
that
and
and
in
the
loss
recovery
logic,
mostly,
that
loss
recovery
logic
only
engages
when
the
sender
actually
sends
on
multiple
paths
at
the
same
time,
so
it's
again
optional,
we
can
have
a
sender
that
basically
says
yeah
I
mean
I'm
going
to
do
multipath.
L
I
assume
that
my
peer
will
have
a
full
length
connection
id.
If
they
don't,
I
will
do
a
fall
back
and
I'll
say
I
don't
want
to
buy
that
complexity,
we'll
just
send
mostly
on
one
path;
that
works.
So
basically,
I
think
that
if
we
take
that
approach,
we
get
to
a
solution
where
the
complexity
on
either
side
is
optional.
C
Thanks
christian
we're
we're
at
time
for
this
session
all
together-
and
I
appreciate
we've
got
some
people
in
the
queue
but
we're
seeing
good
interest
in
this
topic.
So
I
think
we'll
need
to
take
it
offline
and
follow
up.
V
To
support
the
effort
for
the
unified
proposal
on
last
page
for
three
reasons:
first,
the
solution
takes
advantage
from
both
single
pn
space
and
multiple
pn
spaces.
As
for
most
implementations,
which
use
long
connection
ids,
it
could
take
the
best
efficiency
of
ack
ranges
with
multiple
pn
spaces
and
for
implementations
which
use
a
zero-length
connection
id.
They
could
support
multipath
with
single
pn
space.
V
We
don't
have
to
worry
about
choosing
from
a
or
b
we
just
take
the
best
of
both,
and
the
second
point
is
that
for
implementations,
we
just
have
one
single
solution
for
each
situation.
It
surely
would
reduce
complexity,
and
the
last
point
is
that
we
don't
have
the
risk
of
failure
for
interop
tests,
and
this
will
probably
happen
in
the
previous
version.
M
Thank
you
yes,
so
the
the
pr
here
probably
needs
some
editorial
work.
Still
so
don't
get
confused
by
that,
but
other
than
that,
I
think
we
have
a
way
proposed
way
forward
and
like
if
more
people
want
to
implement
and
provide
feedback.
That
would
be
very
useful.
C
X
X
All
right
morning,
everyone
welcome
to
the
superman
update
for
q
log
so
called
because
for
all
three
of
our
existing
documents,
this
last
update,
we
basically
went
from
clark
kent
to
kal-el,
meaning
that
internally
they're
still
pretty
much
the
same
person.
They
have
the
same
superpowers,
they
just
look
quite
differently
on
the
outside,
and
that
is
my
way
of
saying
that
we
did
mostly
editorial
changes
in
this
past
update,
but
of
a
very
specific
kind.
X
As
you
might
know,
one
of
the
main
things
that
qlogs
does
is
define
different
types
of
events
that
you
can
log
for
the
different
protocols
so
and
the
event
fields
that
you
need
to
log
in
the
types
of
the
data
that
is
in
those
fields
and
to
define
what
that
should
all
be
up.
Until
now,
we
had
been
using
a
typescript-like
dialect,
mainly
because
at
the
beginning,
when
I
started
q
log,
I
didn't
know
about
any
better
options.
X
X
We
switched
from
typescript
to
cddl
and,
as
you
can
see
from
the
examples
here
on
top,
it's
actually
relatively
trivial
for
most
things,
it's
mostly
a
little
bit
of
syntax
that
changes
string
is
now
called
text
and
number
is
called
uint
and
the
question
mark
is
for
an
optional
field,
is
before
the
field
instead
of
after
things
like
that
right,
so
it
should
be
relatively
simple.
Even
for
people
not
very
experienced
with
cddl
to
understand
most
of
the
new
syntax.
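As a rough before/after illustration of that syntax shift (the member names here are invented for illustration; the real definitions live in the qlog drafts):

```cddl
; Old, TypeScript-like notation (roughly):
;   interface Event { time: number; name: string; group_id?: string; }
; The same shape in CDDL: "uint" instead of "number", "text" instead of
; "string", and "?" before an optional member instead of after its name.
event = {
    time: uint,
    name: text,
    ? group_id: text,
}
```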
X
J
X
X
For
example,
one
of
the
things
that
are
in
the
quic
draft
is
that
you
can
have
tls
level
alert
errors
and
they
map
onto
quick
level
errors,
but
we
didn't
want
to
make
a
new
text
string,
enum
entry
for
each
and
each
of
them
individually.
So
originally
in
typescript,
we
had
like
this
very
non-official
syntax
for
doing
that,
while
with
cddl
we
can
actually
use
the
regex
operator
and
defined
as
a
little
bit
more
clearly
what
type
of
string
we're
expecting
there
for
that
kind
of
error.
X
Another
thing
that
we
can
use
is
the
unwrap
operator,
so
we
have
quite
a
few
events
that
actually
share
quite
a
few
fields.
Think
about,
you
know,
the
difference
between
a
packet
sent
and
a
packet
received:
a
packet
remains
a
packet.
Previously,
we
mostly
manually
copied
the
fields,
and
sometimes
you
know,
if
you
forget
to
change
one,
the
copies
drift
apart.
With
cddl,
editorially,
X
It's
very
simple:
you
have
this
unwrap
operator
the
squiggly
line,
where
you
just
have
the
common
fields
in
one
part,
and
then
you
can
just
unwrap
them
at
the
location
that
you
need
them.
Basically,
meaning
you
just
copy
paste
the
fields
to
where
you
want
to
reuse
them.
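A minimal sketch of the unwrap operator being described (field names invented for illustration):

```cddl
; Fields shared by several events, defined once:
packet-common = {
    header: text,
    raw_length: uint,
}
; The unwrap operator "~" splices those members in where they are needed,
; instead of copy-pasting them into every event by hand:
packet-sent = { ~packet-common, trigger: text }
packet-received = { ~packet-common }
```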
Basically,
so
that's
mostly
editorial,
but
the
main
thing
that
I
was
really
happy
with
with
cddl
is
the
ability
to
be
a
bit
more
specific
about
extension
points.
X
Cddl
has
a
feature
called
sockets
or
plugs,
which
are
indicated
here
with
the
dollar
sign,
as
you
can
see
where
the
idea
is
that
it's
kind
of
like
a
a
partial
type,
so
you
define
the
type.
So
in
this
case
the
protocol
event
body,
that's
defined
partially
at
one
place,
and
then
you
can
extend
the
same
type
with
new
possible
definitions
in
a
different
place
and
in
practice
for
qlog.
This
is
very
useful
because
we
have
what
we
call
the
main
schema,
so
that
is
like
the
main
document
describing
all
the
high
level
stuff.
X
For
example,
what
is
a
generic
event?
Look
like,
as
you
can
see,
on
top,
but
then
we
have
sub
documents
that
describe
the
different
events
for,
in
this
case,
http/3
and
quick,
and
you
kind
of
want
to
properly
link
up
those
two
definitions
across
the
different
documents
that
we
have,
and
that
was
very
difficult.
I
found
in
typescript
there
we
just
again
had
the
data
for
an
event
was
anything
it
could
even
be
just
a
number
or
a
string,
not
very
well
types.
Well
here.
X
The
approach
we're
taking
now
is
that
you
have
a
very,
very
clear
listing
of
all
the
different
events
in,
for
example,
the
http/3
document,
and
then
you
can
say
these
belong
to
the
protocol
event
body
partial
type
that
is
defined
in
the
main
schema,
and
so
you
can
do
a
proper
checking
in
in
tooling,
for
example,
as
I'll
get
to
soon.
checking
whether
you're
actually
using
these
things
correctly,
X
As
they
are
defined
in
the
specs
right,
those
are
like
the
main
things.
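A sketch of the socket/plug mechanism just described, split the way the qlog documents are (the concrete type names are invented for illustration):

```cddl
; Main schema: "data" is typed with a socket (the "$" prefix), a type
; deliberately left open for other documents to extend:
event = {
    time: uint,
    name: text,
    data: $ProtocolEventBody,
}
; Sub-document (e.g. the QUIC events draft): concrete event types are
; plugged into the socket with the "/=" extension operator:
$ProtocolEventBody /= quic-packet-sent
$ProtocolEventBody /= quic-packet-received
quic-packet-sent = { packet_type: text }
quic-packet-received = { packet_type: text }
```

Because the socket is a named extension point, CDDL tooling can check that every event a document defines is actually plugged in, which is exactly the cross-document linking that was hard in the TypeScript-like notation.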
There
are
a
lot
of
smaller
things.
For
example,
we
have
the
size
operator,
meaning
we
can
be
a
bit
more
precise
about
how
large
certain
fields
need
to
be.
That's
not
super
useful
for
json,
but
let's
imagine
someone
wants
to
make
a
binary
serialization
of
q
log
down
the
line.
It's
good
to
have
these
things
defined
up
front.
X
X
Next,
to
those
things,
we
did
a
big
consistency,
update
of
all
of
this
across
the
different
documents
we
properly
named.
All
the
code
blocks
we
properly
split
up
examples
and
that
kind
of
stuff
to
make
things
a
bit
more
tidy,
but
so
that
was
like
the
main
thing
that
we
did
in
this
in
this
update,
and
I
wanted
to
get
back
a
little
bit
to
why
we
did
this.
X
First
of
all,
we
now
have
cddl,
which
is
a
ietf
standard,
is
an
existing
rfc,
which
is
better
than
having
not
an
rfc,
but,
in
my
mind
much
more
importantly,
is
this:
this
will
help
us
with
having
some
automated
tooling
some
powerful,
automated
tooling
down
the
line.
What
we
did
for
these
drafts
is
already
have
some
basic
tools
that
basically
extract
the
cddl
from
the
markdown
documents
and
combine
them
into
a
single
cddl
file,
which
we
can
then
validate
using
existing
cddl
X
tooling,
to
make
sure
that
we
haven't
forgotten
anything
or
that
our
type
references
are
consistent
and
that
kind
of
stuff-
and
then
we've
also
been
using
this
to
generate
dummy
json
files,
so
that
we
can
actually
check
you
know
is,
is
our
is
our
q
log?
Is
our
cddl
definition
actually
correct?
Is
it
representing
what
we
want
it
to
represent?
So
that's
what
we
have
and
then
we
hope
to
go
for
even
more
complex
things
which
are
here
in
red,
which
is,
for
example,
generating
other
representations
of
the
same
schema.
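As a rough sketch of that extraction-and-combine step (the `cddl` fence label and the overall shape are assumptions for illustration, not the actual qlog build tooling), pulling the CDDL blocks out of a set of markdown drafts might look like:

```python
import re
from pathlib import Path

# Matches fenced code blocks labelled "cddl" in a markdown document.
CDDL_FENCE = re.compile(r"```cddl\n(.*?)```", re.DOTALL)

def extract_cddl(markdown: str) -> list[str]:
    """Return the bodies of all ```cddl fenced blocks in a document."""
    return CDDL_FENCE.findall(markdown)

def combine_drafts(paths: list[Path]) -> str:
    """Concatenate every CDDL block from every draft into one schema
    that can then be fed to an off-the-shelf CDDL validator."""
    chunks = []
    for path in paths:
        chunks.extend(extract_cddl(path.read_text()))
    return "\n".join(chunks)
```

The combined output is what an existing CDDL validator would consume to catch missing or inconsistent type definitions across documents.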
X
So, for example, let's say you want to implement a qlog library, and for some reason you want to have each of the events as a separate class or struct. Right now, you would have to do that manually, which again is annoying to keep consistent; with this, you might be able to generate it automatically down the line. Another very useful use case is automatic validation of actual qlog JSON files. So I imagine this: that you just upload a qlog to, for example, qvis, and then it can tell you.
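A minimal sketch of what that kind of structural check could look like (the required field names here are assumptions chosen for illustration, not the authoritative qlog schema, which is much richer):

```python
import json

# Hypothetical top-level fields to check for; real validation would
# be driven by the CDDL schema itself.
REQUIRED_TOP_LEVEL = ("qlog_version", "traces")

def validate_qlog(text: str) -> list[str]:
    """Return a list of human-readable problems found in a qlog file;
    an empty list means the basic structure looks plausible."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    for field in REQUIRED_TOP_LEVEL:
        if field not in doc:
            problems.append(f"missing top-level field: {field!r}")
    if not isinstance(doc.get("traces", []), list):
        problems.append("'traces' should be a list")
    return problems
```

A tool like qvis could run this kind of check on upload and report the problems directly to the user.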
X
All of these things, I think, will be useful not so much for existing implementations, but especially for newer ones, or the ones that still need to update to the newer versions, and especially as we look to extend qlog beyond what we have now. Again, we're still not sure if we're going to look at TCP or anything, but at least I have concrete plans to start working on some qlog things for WebTransport, for MASQUE, for multipath down the line, and I think having this kind of proper tooling and validation in place will make that process a lot easier for everyone trying to implement this down the line, right?
X
So that's what we did, and why we did it. Now, what we want to do by the next IETF is mostly more editorial stuff. A lot of the prose was written during drafts 23 and 24, and a lot of things have been updated since. A good example of that is the QPACK draft, right, where we have renamed HTTP headers to HTTP fields, which are, of course, carried in the HTTP/3 HEADERS frame, and some other prose updates.
X
So we really want to get on that for the next step. Most of these, I think, we can do ourselves, but especially for QPACK I don't think we have the necessary expertise among the editors, and so we are looking, as you can see on the right, for our own Lois Lane, or maybe in this case a Louis Lane, to come help us out with that. I created an issue for that, number 199, if you're interested.
X
So those are things we can mostly do ourselves and think we have a grasp on, but there are a couple of open issues that I wanted to bring here now, that will need to be resolved before we move to RFC. One of the ideas is to split up the main schema even further, to move out especially the operational concerns. For example, we have an environment variable that you, as an implementation, should honor for where the qlog should end up.
X
How do we handle that within qlog? The more I try to find best practices for this, the more it appears, I think (or at least I'm a very bad searcher), that this is an unsolved problem, or at least an unspecified, unstandardized problem: most companies just do what makes sense for them internally, and they do it in an ad hoc fashion, and there are no existing documents that you can refer to
X
as in, you should follow these specific guidelines. And if that is indeed the case, then I wonder if this is something we should be doing in qlog, because it seems like a very big undertaking, a very important undertaking as well, but something that would very much delay qlog itself, right? So either I'm looking for people to tell us these are the best-practice documents you can refer to, or people to tell us what is, like...
C
Thank you very much, Robin. You know, just as an independent perspective, the CDDL work was good, and those PRs, if anyone didn't see, were quite big, so the efforts are appreciated, and that does really unblock us on progressing these other issues, which are both big and small. But I think if people would like to contribute, it would be really appreciated; it might be just something simple to bite off and knock out. We've got a few people in the queue. Brian.
R
So yeah, tl;dr, Robin: let's schedule a little bit of time offline to talk about the security and privacy stuff. I'm willing to help on that. I think you've properly identified that this is a gigantic can of worms.
R
We semi-tackled some of these things in the IPFIX and PSAMP working groups about 10 to 15 years ago, so there might be some prior art there that we can draw from. But let's follow up on that offline. Thanks, Brian. Eric.
S
Okay, it's kicking in, all right. Yeah, I would say for the security and privacy side of things, I would advocate for per-field indicators, in addition to whatever other guidance we're going to give; like, big plus one.
S
Let's chat with Brian and, you know, figure out how we tackle the kind of high-level concepts. But something that we've found internally, for a variety of things that get spit out by different implementations of various things, at least within Apple platforms, is that it is extremely helpful to have kind of a very local indication of: hey, you're in the middle of writing some random code somewhere, and you may not be thinking about the fact that somebody could use this to derive or understand X piece of information that could be considered private.
S
So, yes, please, let's have some high-level guidance somewhere. But I think it's worth having something that is in this document, not somewhere else, for potentially each field: hey, you know, we've thought of some random way that you could combine this field with this other one and understand something about a person that you might not want.
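As an illustration only of the per-field indicator idea (the field names and sensitivity levels here are invented, not from any qlog draft), one could imagine the schema annotations being consulted by a logger before it emits a value:

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"        # safe to log as-is
    SENSITIVE = "sensitive"  # may reveal something private in combination

# Hypothetical per-field annotations; in the discussed design these
# would come from the schema documents themselves.
FIELD_SENSITIVITY = {
    "packet_number": Sensitivity.PUBLIC,
    "client_ip": Sensitivity.SENSITIVE,
}

def redact_event(event: dict) -> dict:
    """Mask fields marked sensitive before the event is logged;
    unknown fields are treated as sensitive by default."""
    out = {}
    for key, value in event.items():
        level = FIELD_SENSITIVITY.get(key, Sensitivity.SENSITIVE)
        out[key] = value if level is Sensitivity.PUBLIC else "[redacted]"
    return out
```

Treating unannotated fields as sensitive by default matches the "local indication" point above: the person writing logging code gets a safe answer without having to reason about cross-field inference themselves.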
T
I think, yeah, I'll agree with what Eric said. I came up here to say that per-field indicators are very useful. However, I'm also going to caution against going too deep down that rabbit hole. There's a lot of local semantics attached to what data means.
T
There's value in indicating levels of sensitivity of various bits of protocol information, because, as Eric was pointing out, oftentimes the consumers of this information don't necessarily understand how pieces of information can be put together with other pieces of information. But at the same time, you don't have a global view of how exactly these traces are being used: are they being used on the client side, on their own, or in tandem with other logs that also exist?
T
You just don't know the scope of the total storage that we're talking about, the total view that we're talking about, so it becomes tricky. So I'll say that these should be considerations, not rules, and that's appropriate. I think it's definitely very useful to have per-field indicators, but just be careful about losing yourself in there.
C
So with that, Robin, thank you very much and goodbye, and we'll go on to the last presentation of the day, from Nico.
W
Hi, I'm going to be at the mic. Hi, I'm Gorry, I'm not Nicolas! So if Nicolas comes online, he will carry on, but I'm one of the people who worked on this, along with Christian and Tom, and Stefan from Orange. So let's talk about what this is about. The draft, please, next slide! Oh, Nicolas is in control; yes, go for it. And we still have no audio. Okay, right, okay. So the premise is that there are two pieces to this puzzle.
W
One of them is about remembering some of the transport parameters from a previous connection and using this to somehow initialize a new connection. We've been doing this with TCP in one way or another for a while, and QUIC is different to TCP; QUIC is probably better placed than TCP to do this. So can we do this explicitly?
W
Is it possible to implement a way of caching and reusing the parameters, and perhaps specify a bit of logic around it? So Christian very helpfully implemented some of this in picoquic. You can go to the URL and download this and use it, and we have been using it and testing it. So the method does work.
W
We chose one use case, because Nicolas has a lot of experience with satcom. We got some real satcom links and some simulated satcom links, and we evaluated how this might help when you have a path that has a very large delay and perhaps a lot of bandwidth, but you don't initially know whether to use it. So you do the usual slow-start thing and it goes very slowly, or you do this new thing and it can go very quickly, which is what QUIC should do.
W
He can say more about this data; it's not my data. And the thing is, there's a link at the bottom, so you can look at what the data looks like, look at the experiments that were done, and read the BDP extension draft, which is one of two drafts we now factored out. So, after talking about this a few times at the IETF, we're coming here with a bit more solidity, and what we're saying is: we think this is actually two problem spaces.
W
Than you were last time. And of course, what does "the same" mean? If you measured it a year ago and you measure it now, obviously that's stupid. If you measured it a week ago, a day ago, an hour ago: what's the lifetime of this information? We don't have answers to that one, by the way, but we think we should have.
W
Well, it turns out that the overestimation you might get, compared with using standard congestion control, and the starvation of things, probably could be avoided in most cases by not jumping to the full capacity: you jump to a little bit less, be more conservative, and importantly, you really get out of that congestion when you see it. Maybe this is actually a reasonable way to operate. Maybe it's not far from things we've heard in tcpm recently, with HyStart++.
W
Nicolas, next. Oh, options. There's tremendous promise; there are different ways to do this, otherwise we wouldn't be talking about it, and we'd like to see this widely implemented by other people, because that's what QUIC is about: different implementations working together. And we'd like to suggest one of these be the recommended way to do it. Let's go to the next slide. Next slide, Nicolas.
C
And that is the important question here. So we've had this topic in the "as time permits" bucket of QUIC, and we've always run out of time, for the last few sessions at least. Anyway, it would be really good to get an indication of whether people think this is something of interest to the working group.
C
Maybe, you know, the specific shape of it needs to be slightly different, but you know, these folks have been working on things, and I think you're looking for an indication: should you continue working on it, or switch tracks to something else? So I think, you know, we're at time; we probably can't have that chat right now, but it'd be really helpful.
C
If the group, here and remote and on the list, could give us an indication, yes or no, rather than just crickets, because we'll have to read that and interpret it in some way. So thank you; please get in touch on that. We're at time; we're going to get booted off in a minute. I neglected to mention at the start of the session: we have QUIC and HTTP/3 stickers at the front desk. Please come and get some, because I have hundreds of them in my bag.