From YouTube: IETF111-QUIC-20210727-1900
Description: QUIC working group meeting session at IETF 111, 2021/07/27 19:00
https://datatracker.ietf.org/meeting/111/proceedings/
A: Okay, we've hit the top of the hour. I'm sure some more people will be joining over the next few minutes, but I think we can make a start on some of the administration of the meeting while we get going. Basically, this is the QUIC working group meeting at IETF 111.
A: Thank you for joining us. Present are Matt Joras and I, and Lars has just appeared, hello Lars, as your lovely co-chairs of the QUIC working group for this two-hour meeting session. As usual, we don't need to do blue sheets today, because Meetecho will cover that for us. We've had one volunteer to help take minutes, that's Robin Marx, but Robin will be presenting later on during this meeting. So if we could get someone to help out with the minute taking, that would be fabulous.
A: We will need to do that, otherwise we won't be able to proceed. There are some important questions in Robin's slides as well, so having a record of those things is going to be very useful. What else is there to say? We have a very busy agenda, as you can see; we've got it up here. We can bash that in a moment, but just to say that most of these presentations will be, you know, half and half: five minutes of presentation plus five minutes of time for discussion, and we'll try to maintain that as much as possible. I had a note from Martin Thomson before the meeting; he's not going to be able to make it, so we'll touch briefly on the QUIC greasing bit but probably won't dive into much discussion there, and that might give us time for some of the additional, as-time-permits presentations that we have. It's always important to note the Note Well.
A: So, if you're not familiar with our policies around IETF participation and contribution, please take a moment to familiarize yourself with these things; they list our expectations for how you participate here. So yeah, it's a brilliant thing, go and read up if you're not familiar, but these are the terms under which we engage in this meeting and other IETF venues. Oops, that's the wrong thing, wrong window. So: we are trying out Meetecho's new feature here today, which you may have seen in some of the sessions yesterday.
B: Two quick things. One is that I hope to not be chair of QUIC anymore before this meeting ends, as soon as Zahed presses the button in the datatracker. We have two chairs, we don't need three, I have other things to do now, and the group's in great hands.
B: So it's been fun. The second thing is: we typically have had an interop report, which we took out because I don't think there was much happening last week, but I want to mention one interop-related thing, and that is that Martin Seemann runs this great QUIC Interop Runner tool, which seems to run out of disk space both on his server and on GitHub. So if anybody can get him GitHub credits and/or more storage space for his virtual instance, or he will gladly move it, I think, to whatever cloud provider wants to host it; and I think we're open to putting a logo on that page too. So consider it.
G: So this is a very quick update on the applicability and manageability statements. Basically, they are ready to go. You can go to the next slide, which I can't see yet, but the story on the next slide is that we had two working group last calls, one in April and one later on, and we did a bunch of updates after each of them. Because we had so many updates after the first one, we did a second one; after the second one...
G: ...we still had a few updates, but nothing as fundamental as after the first one. I'll just run you quickly through them so everybody's on the same page, and then I have one more thing I want to discuss. Now I really would need to see the next slide.
I: You can also just click the "ask to share slides" button and then it'll share them through Meetecho, and then you can control it. Yeah.
G: That's where we are right now, hopefully. Okay, so now we go to the next slide. Yeah, but I already told you this: we had two working group last calls, and there's one new issue I want to discuss with you. Okay, so the applicability statement, very quickly: I think since the last working group last call we had three PRs which actually changed the content a little bit, rather than only editorial fixes or clarifications.
G: So we had a bunch of discussion about the H3 ALPN token, and we have some additional text now that says: this is what we do right now, but it's just how we do it, and how we do it is all open for future versions. Then we had some more guidance on the use of DSCP: slightly more text, not really any content change, but hopefully being clearer about things.
G
Hopefully
now
and
then
we
had
like
a
very
short
section
about
a
creek
we
can
see,
and
there
was
a
lot
of
discussion
if
this
belongs
in
the
document
or
not.
So
now
we
actually
try
to
give
guidance
that
might
be
useful
for
a
lot.
People
that
want
to
use
quick
and
now
everybody
was
okay
with
the
text
that
we
haven't
documented.
So
this
is
also
something
okay
and
then
on
the
manageability
statement
we
had
even
less
less
things.
G
There
were
some
clarifications
about
the
guidance
about
pmtud
and
there
was
quite
some
edits
about
and
it's
eye
pausing,
because
we
now
don't.
We
move
the
whole
appendix
which
talks
about
old,
quick
versions,
that's
not
relevant
anymore,
and
we
clarified
a
couple
of
points
which
were
really
not
correct.
So
you
know,
I
think
those
things
are
all
resolved.
Thank
you.
Everybody
who
contributed!
There's
david
ian
martin
gary,
and
it's
on
really.
We
have
a
very
good
discussion.
Thank
you.
G: So now, since the last working group last call, we actually got one PR. This was an issue raised by Mark on the mailing list about the usage of source ports, because some of the system ports can be blocked. So there was the idea to add some guidance here to try to avoid using these ports, and there is a PR for it.
G: We can probably still merge it: the document was supposed to be ready to go for AD review, but it's still in the working group, so we can still make changes. But I really have the question whether this is really QUIC-specific; does it belong in this document? Mark also raised this issue on the TSVWG mailing list, where a lot of discussion is still going on, so maybe a more general document makes sense.
G: The first question is whether it should be added to this document, and the second question is: should we add something to the manageability document about NAT handling, saying that NATs should also avoid these ports? But again, I think it's very general guidance and not specific to QUIC only. So I would need some more feedback here. I copied the text from the proposed PR here, but it's probably hard to read, so better you click on the PR yourself.
K: ...commit this. What is likely to happen is that these ports will be selected and they won't work, not because of any high principled position, just because some people simply have to block these ports because of the nature of those services. So documenting it at least means that clients won't pick them deliberately, and maybe it'll mean that those devices that do network address translation won't pick them either. We can at least hope for that part; I don't think we'll ever be able to guarantee it.
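A rough sketch of what that client-side guidance could look like in practice. The avoid-list below is purely illustrative (the normative list, if any, lives in the PR under discussion), as is the choice of port range; this is not text from the draft.

```python
import random

# Illustrative only: ports often blocked because they are abused for UDP
# reflection/amplification or belong to well-known services.
AVOID_SOURCE_PORTS = {53, 123, 1900, 5353, 11211}

def is_safe_source_port(port: int) -> bool:
    """Avoid system ports and ports known to be blocked by middleboxes."""
    return port >= 1024 and port not in AVOID_SOURCE_PORTS

def pick_client_source_port(rng=random) -> int:
    """Pick a UDP source port for a QUIC client, retrying until the
    candidate is not on the avoid-list."""
    while True:
        port = rng.randint(1024, 65535)
        if is_safe_source_port(port):
            return port
```

The same check could equally apply to the port a NAT chooses when rewriting, which is the manageability-side question raised above.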
B: Yeah, so as an individual and not as a chair: I think I agree with Mark and Martin. Yes, there's been a lot of UDP traffic in the past, but there's going to be a whole lot more UDP QUIC traffic in the future, and these problems are going to be hit, and documenting this seems fine. If somebody feels like they want to work on a broader document, that's fine too; we could put text in there that says, you know, the IETF...
L: Whether or not something happens elsewhere is still not clear, because I've raised a discussion of this in the transport area working group, and there seemed to be a few folks who are amenable to the idea and supportive of the need to do something here, and perhaps one person who is not, which is, as I understand it, a not uncommon pattern in the IETF.
G: So do you plan to write a document for this working group? Because even if you have a draft...
G: Okay, that's pretty clear. I will leave the PR open for a few more days so people can have a look and comment if they want to. Okay, and then the final question I have: there was also another discussion on the mailing list recently where people talked about privacy concerns, where the server might be able to link different connections through resumption or connection migration, knowing that after migration a different IP address is the same client, right. And I think the question there was:
G: If you care about privacy, maybe you should actually reconnect, to avoid this linkability. So that's something that could have been discussed in the transport document; it's not in there, and we still have a chance to add something to the applicability draft.
K: The discussion of this is in the TLS document; it's pretty clear, and I don't think we need to repeat it.
D: It might be worthwhile to have at least a notice of that, because the discussion in the transport document is very brief. I mean, I think these points come from Stéphane Bortzmeyer pointing out the issue, and it's worth having the issue at least documented somewhere in a more explicit way than it is in the transport document.
G: Looking at the chairs as well; maybe they cannot say. Okay, then I'm actually done here. That's it, we are done. Thank you, everybody.
A: That's good, we're kind of staying on track with timing, so I'd just encourage people to try and keep comments as short as possible and presenters to keep doing good work; follow Mirja's lead. Who's presenting? Mirja, would you like to stop presenting as well? Brilliant. Up next we've got datagram. I think, Tommy, you wanted me to drive this one remotely, is that correct?
P: All right, can you see that? Great, yep. Okay, so I'll give a quick update on datagrams; there's hopefully not too much to say here, we're getting pretty far along. Currently we're on the -03 version of the working group draft. We now have many different implementations and early deployments: beyond some experimental stuff, we're starting to see some of the work in MASQUE get deployed in some production environments. So we have a lot more usage of datagrams going on now, and this is getting fairly heavily exercised.
P: Talking to the other editors, we think it should be ready for a working group last call with the next revision, once we're able to cut that after this meeting. We'd of course love to hear the chairs' input on that, but that's why I want to focus on how we get this out the door. In the most recent revisions...
P: ...I just want to highlight a couple of the updates that are relevant. There was discussion around the transport parameter; we ended up putting into the definition that a max_datagram_frame_size of zero indicates that you do not support datagrams. So you can default it to zero, which means you won't support it, and that's not going to mess up any kind of default implementations.
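A minimal sketch of that rule, where an absent transport parameter defaults to zero and zero means no datagram support. The helper names are hypothetical; only the transport parameter name and its zero-default semantics come from the discussion above.

```python
def max_datagram_frame_size(transport_params: dict) -> int:
    """Return the peer's max_datagram_frame_size; absent defaults to 0,
    and 0 means DATAGRAM frames are not supported."""
    return transport_params.get("max_datagram_frame_size", 0)

def peer_supports_datagrams(transport_params: dict) -> bool:
    """A non-zero value is the only signal of datagram support."""
    return max_datagram_frame_size(transport_params) > 0
```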
P: We have removed that, since it is undergoing more churn and discussion and we don't really strictly need it; we don't want to tie ourselves to the work going on in MASQUE, and we can let that evolve on its own. And we split out a section talking about how you can do multiplexing, just to have more generic guidance on the different ways application protocols could approach using the DATAGRAM frame.
P: We also had some clarifications around ACK handling, specifically giving extra guidance and assurance that just because you receive an ACK for a DATAGRAM frame, that does not mean that the application on the other end processed it; it is not a replacement for end-to-end application signals.
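That distinction can be illustrated with a toy sender that tracks application-level acknowledgments separately from transport ACKs; all names and the framing here are hypothetical, not from the draft.

```python
class DatagramSender:
    """Tracks application-level delivery separately from transport ACKs."""

    def __init__(self):
        self.unconfirmed = {}  # app-level message ID -> payload
        self.next_id = 0

    def send(self, payload: bytes):
        """Return (msg_id, wire_bytes); a toy 4-byte ID prefix is the framing."""
        msg_id = self.next_id
        self.next_id += 1
        self.unconfirmed[msg_id] = payload
        return msg_id, msg_id.to_bytes(4, "big") + payload

    def on_transport_ack(self, msg_id: int):
        # A QUIC ACK only means the DATAGRAM frame reached the peer's
        # transport; the peer application may still drop it. So: no-op.
        pass

    def on_app_ack(self, msg_id: int):
        # Only the peer application's own acknowledgment confirms processing.
        self.unconfirmed.pop(msg_id, None)
```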
P: So there are essentially very few issues open. One is just kind of a wording thing; there's another one that I believe Mike Bishop had opened. That's this one. I believe it's something we don't need to do anything in this draft for, but we'd like to confirm that with the group.
P: Essentially, it's talking about the fact that datagram does not define any application format, and there is a suggestion to say what an implementation should do if it starts receiving datagrams and doesn't know what they mean. From our perspective, I think it's very much equivalent to the fact that if you receive stream data and you haven't defined what your application is, you may not know what to do with it; that really is just a problem for the application protocol running on top. That's why H3 is defining its own definition of datagrams.
P: So I'd like to hear whether people think we should say anything about this in this document. Maybe, per the suggestion on the issue, we could say something in another document, like the applicability statement, about how you use the different features, like streams and datagrams, within QUIC.
A: Lucas here, hello, just as an individual: I think a way you might be able to weasel out of this is to say something in the applicability document like: any mechanism in QUIC that is designed to carry application data might have this kind of problem. I'm trying to get away from mentioning specific extensions: yes, we have streams, they're in the core spec; yes, we have datagrams, they're an extension. We don't want to tie up applicability with every new extension that might be coming along, kind of thing, and I know that's very unlikely right now, but having something generic and abstract might just help us have a common rule for everything, which makes sense. So that's all I can say, thank you. Sounds good.
G: I did add a section about datagrams in the applicability statement, and I think it does talk about how this works differently than streams. I'm not sure what else you would want to add there, but if it's only three words, right, we can probably still add it.
N: Thank you. I'm not sure. If I were to compare datagrams with stream frames, for example: they have slightly different semantics in terms of what they offer, but ultimately they offer a particular service to the application. We don't go out of our way to describe exactly what application format or what application uses of streams are, so in much the same way it seems to me that datagrams are a mechanism. The application is the one that's ultimately setting up the transport connection, and if the application doesn't know what to do with datagrams, it shouldn't be sending datagrams. It's a matter for the application to handle rather than for the transport to specify.
K: ...request that you specifically don't do anything. If you're defining a protocol, you should definitely define the protocol, and if you're speaking a protocol, you should speak the protocol. This is a bit weird; just stop and ship it. Yep.
E: All right; assuming that we want to write something somewhere, I'm not sure if writing it in the applicability document is sufficient, because it seems to me that we should have some RFC 2119, I mean uppercase, words saying what the implementation should do. And therefore, if we use RFC 2119 words to delegate responsibility to the applications using DATAGRAM frames, then I tend to believe that our requirements should be in this document.
O: You know, the interesting thing about datagrams is that datagrams are almost entirely semantic-free from the application side, and that means there are really only two applicability problems: one is routing the datagram to the right place, and the other one is doing the business logic. I think talking about the former makes sense; talking about the latter is obviously futile. So saying that this is what you must do probably is sufficient.
P: So I guess we'll cut a new version when we have all of our issues and PRs closed, and then, chairs, do you think we'll be good to ask for working group last call?
A: Yeah, I think so. You know, we've done a lot of work here and there are some implementations, interop, etc. So thanks to the editors for the edits. We'll confirm the consensus on the list for what to do with this issue, but you know, we have dependencies on this document in other working groups; the sooner we can get this done, the better.
A: I'll ask to share the slides so that I can control them myself. That's all right.
I: And are the slides coming in fine? Cool. All right: good morning, afternoon, evening, or middle of the night, everyone. I'm David Schinazi, here to talk about the version negotiation draft, and my fellow co-author Eric Kinnear is also in the audience here somewhere. A quick recap, and I'm not going to read through the beginning of this since it was covered in the past few presentations: we had VN earlier on in QUIC, then we realized that it was harder than we thought, so we removed it, and then we said: well, we still need this, so let's publish it as an extension to QUIC, as opposed to something that's in RFC 9000. So folks agreed, we adopted it as a working group item, and then there was a lot of discussion that the proposed solution was too complicated.
I: So we had an interim where we talked about how things were complicated, and Martin Thomson stepped up and actually provided a proposal that was less complicated and still had the same feature set, which was great. So we wrote that up in PRs, talked back and forth, and the latest draft, -04, has the simplified design in it. So today I'm going to go through what changed.
I: As a quick high-level view, if you haven't been paying attention: RFC 9000 just says that if you receive a VN packet, you abort the connection, so by default, with just that, you can't really use it for version negotiation; you need something more. That's not a problem for our main deployment, which is H3, because the versions are in Alt-Svc and you're never going to use VN in practice. But QUIC being general-purpose...
I: ...we want this to work, so this is why we're actually writing this document. And one requirement that we thought would be really useful: in some cases it would be great to be able to do version negotiation without burning a round trip, because QUIC, and the applications using QUIC, really care about latency, and burning time on a round trip is sad.
I: So the draft defines two different kinds of version negotiation. One is called incompatible version negotiation; it works between any two QUIC versions in existence: the client starts, the server replies with a VN packet, and then the client restarts with a different version. The second kind is compatible version negotiation, which requires the two versions to be compatible.
I: So that doesn't work with absolutely every version, but it allows the server to immediately reply with the compatible version, without having to burn a round trip before the first flight. But what does compatible mean? For which versions can you do this? The idea, conceptually, is that the server can receive a first flight with version A, convert it to a first flight with version B, process that, and then reply with version B.
I: One important note is that this is not bijective, meaning it's possible that you can convert one version to another but not the other way around. And even if a version is compatible, maybe some first flights aren't convertible: if, for example, you're using a feature that isn't available in the other version, you couldn't convert.
I: So how do we get all this to work? This is the main part that has changed since the previous IETF. During the handshake you send something called version information, which is kind of an invariant between all QUIC versions, and the property it needs is that it's exchanged during the handshake in a way that is integrity-protected.
I: In QUIC we use transport parameters, because those are part of the TLS transcript: they're not necessarily encrypted, but if they're modified by an attacker, the handshake will fail. And one of the goals here is to prevent downgrade attacks, meaning that an on-path attacker that sends an evilly crafted VN packet can't cause the client and server to use a version that the attacker chooses instead of the one the client and server would choose.
I: So what is its format? It's pretty darn simple: it has one version, which is the chosen version, and then a list of other versions, which for the client are its compatible versions. So in the case of the client it says: hey, I'm trying to handshake with version A, and here are the compatible ones that we can switch to if you'd like. And then on the server side...
I: ...it says: let's use either the version you told me or a compatible one that we switched to, and by the way, here are the ones I support. Just very quickly, to add a slide: this is what it looked like before, which is probably the simplest way to show that things have gotten a lot simpler; thanks, Martin. And so there is one slight tweak that we made. We went back and forth on what it means for the server to support versions.
And
so,
if
conceptually,
the
versions
you
support
are
pretty
simple,
it's
the
ones
that
you
could
speak
quick
with,
and
the
one
that
you
sent
in
your
vn
packet
and
the
ones
you
send
you
being
the
server
and
the
ones
you
sent
in
handshake
version
information.
But
it
turns
out
that,
as
you
know,
if
you're
adding
a
version,
if
you
flip
these
three
sets
at
once,
you
could
end
up
with
some
connections.
I: ...failing. Say, for example, you remove a version: between the time you've sent a VN packet with that version and the time the client reconnects, that could cause a connection failure. So we put a kind of three-step algorithm in the spec, which is pretty simple. It separates the set of supported versions into three newly named sets. One is acceptable versions: if I receive an Initial for this version, I can use that version; I like it right now.
I: Offered versions are the ones I will send if I send you a VN packet, and fully deployed versions are the ones the server will send in the version information, for downgrade prevention. And conceptually, all you need to do when you're adding a version is: you add it to the first set, you wait a minute, you add it to the second set, you wait a minute, and then you add it to the third.
I: That way you don't have inconsistent things in flight as you make the change, and that solves these problems; and it's the same in the other direction if you're removing a version. This is a SHOULD. Like, at Google we're not planning on doing this; we just do everything at once, and every time we add or remove a new QUIC version...
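The three-set rollout described above can be sketched like this; the set names follow the talk, and the wait() callback stands in for whatever deployment delay lets in-flight handshakes drain (an assumption for illustration, not something the draft specifies this way):

```python
class ServerVersionConfig:
    """Three sets from the talk: acceptable (will accept an Initial for it),
    offered (listed in VN packets), fully_deployed (advertised in the
    version information, used for downgrade prevention)."""

    def __init__(self):
        self.acceptable = set()
        self.offered = set()
        self.fully_deployed = set()

    def add_version(self, v: int, wait):
        # Roll forward one set at a time so nothing in flight breaks.
        self.acceptable.add(v)
        wait()
        self.offered.add(v)
        wait()
        self.fully_deployed.add(v)

    def remove_version(self, v: int, wait):
        # Removal runs in the reverse order.
        self.fully_deployed.discard(v)
        wait()
        self.offered.discard(v)
        wait()
        self.acceptable.discard(v)
```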
I: Just use TCP; it's fine, we don't care, it's not worth the work. So we have an algorithm for phasing that in, but it's a SHOULD in the spec. And how does downgrade protection work? One part is already in RFC 9000, if memory serves: if you receive a VN packet that contains the version you tried, then that's clearly wrong.
I: So drop that. And the other one is: if the client has reacted to a version negotiation packet, it needs to double-check the supported versions from the version information. The way it does that: the client already has an algorithm somewhere to go from a set of server-provided versions to a version that it wants to use.
I: That's the algorithm it ran when it received the VN packet, and all we're saying is: when you receive the version information, which is integrity-protected, rerun the algorithm and make sure you land on the same one. We initially had some other ways of defining this, but we found that this one was the simplest to implement and reason about, and it allows more flexibility in however the clients want to...
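The "rerun the algorithm" check can be sketched as follows; the selection policy here (first mutually supported version, in client preference order) is just one deterministic example, not something the draft mandates:

```python
def select_version(client_prefs: list, server_versions: list) -> int:
    """Deterministic client policy: first version in the client's preference
    order that the server also lists."""
    for v in client_prefs:
        if v in server_versions:
            return v
    raise ValueError("no mutually supported version")

def check_no_downgrade(client_prefs: list, authenticated_versions: list,
                       version_in_use: int) -> None:
    """After the handshake, rerun the same selection over the
    integrity-protected list from the version information; it must land on
    the version actually in use, otherwise a VN packet was forged."""
    if select_version(client_prefs, authenticated_versions) != version_in_use:
        raise ConnectionError("version downgrade detected")
```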
I: ...have this algorithm behave. Unless they're choosing randomly, in which case they'll have a problem, but perhaps you shouldn't be doing that. And yes, that's about it for the changes. There was one issue I wanted to get some folks' thoughts on: when the client offers its list of compatible versions, the draft currently says that they're sorted by preference, so the client can tell the server which one it would prefer the server use.
I: That's not strictly necessary, because the server is not required to honor it. So Martin was saying: maybe we don't need it to be sorted? What do people think? Do we care?
I: You cut off at the beginning; can you repeat the start of that?
O: I said: is this required? You can put it in there; it'll likely have zero effect, it doesn't matter. The server is going to do what the server's going to do. If we come up with something better than that, the clients and the servers will work together; it'll happen organically. I don't think we need to do anything about it right now.
N: Hey, I agree with Roberto that the server is going to do what the server is going to do, but this is information from the client to the server so that the server can do what it does slightly better, perhaps. I don't have a strong opinion here, but we found in the past that this kind of preference information could be useful, so if it's not at much cost, I think it's worth putting it in.
K: There is a cost: you then have to repeat the selected version, and that gets a little awkward. I won't say more than that. The only problem with putting them in order is that then you're forced to put the selected version into the order, because the selected version is not necessarily the most preferred version in all cases. So I would prefer not to sort at all and let the server do what the server's going to do.
I: Cool, that also makes sense; that's a good point. Chairs, how would you feel about doing a hum so we get a feel? Because it sounds like people have preferences but no one wants to die on any hills, so it could be nice to just get a feel for the room and then pick that direction, if that sounds good.
I: Let me add to that. Let's say, for example, that QUIC v2 comes out, and QUIC v1 and QUIC v2 are compatible because we made some very small changes to them; I can see that being very likely. Right when QUIC v2 comes out, say the client supports one and two, and it prefers two because it's newer, shinier, better, all the things. But two just came out, and there are very few servers out there that support two, but a lot more that support one.
I: So it's reasonable for the client to use one first, even though it prefers two, because that way, if the server only supports one: boom, it's in, with no added round trip; it speaks one. And if the server speaks two, it'll do a compatible upgrade and it immediately speaks two. Whereas if the client started with two and the server has never heard of two, that'll fail. And so therefore you need to be able to encode, in the client's supported versions list...
I: ...sorry, in the client's compatible versions list, the one that it tried. So in this case one is encoded, because you can't assume that it's at the top. So in this case you would say: my chosen version is one, and my compatible versions are two and one. So you would have to send two versions, whereas in MT's proposal, where you don't have the ordering, you can remove that and just say: I chose one, and by the way, I'm compatible with two. I don't need to repeat one, because there's no ordering. Does that help?
H: So people can... We don't hum; we raise hands, and we don't vote.
I: Where is it? Oh, top right, the show-of-hands tool; it's in yellow. Thanks.
I: Yeah, and so 100 people who don't care. So my intuition there is to use this result to say: okay, let's keep it in. Can everyone live with that? And by "keep it in" I mean keep the preference ordering the way it is in the draft today.
I: Sounds good. Chairs, can you add a comment on the issue to that regard? Yeah, thank you. So that was the only issue I wanted to highlight today. Does anyone have other questions or things they want to chat about on version negotiation?
A: We're really at time here, so I want to say thanks, David. I think if people do have any further comments or questions, please just take them to the list or to the issue tracker, and we have the process to run there. So thanks again.
I: Sounds good. We have some number of issues, a lot of which are editorial, so the editors will do a pass to clean that up, and ideally with our next revision after that we'll bring it back; and if there aren't any more issues, we will try to wrap it up and ask for working group last call as well. All right, thanks.
Q: All right, this is potentially very quick. Next slide, please. Not much has happened, frankly, since 110; honestly, this is kind of stuck in certain ways. There are a few issues that I can run down with the people who care about them, but ultimately the issue is implementations. I went ahead and implemented the load balancer side and the retry service side of the spec, because I figured there are lots of servers; and in practice no servers have implemented...
Q: ...this part of it. This is because address migration is relatively low on people's feature priority lists, and without address migration the load balancer bit doesn't matter. So I've been sort of waiting for those servers to show up, and if you have one, by all means, I'm ready to interop at any time. The other thing going on is that there are a few kinds of new use cases and stuff dribbling in that sort of fit in this framework, and I'll discuss...
Q: ...the only ones that have come up in the last quarter. But there are three paths forward with this: I can just take executive action here, climb through the issues and make a lot of decisions without a lot of input, because people don't really care, and just get this thing done and ship it; or it can wait around for implementations, which is the position we're in now.
Q: Or we might speed things up by splitting the retry service stuff from the load balancer stuff, because they are not really all that related; I don't have a strong opinion about that. Next slide.
Q: And to give an example of the kind of thing I could uncharitably call mission creep: we got a proposal from Ant Financial, which has been pretty involved in the retry service stuff, to have stateless retry offload... pardon me, stateless reset offload; man, I always confuse those. I mixed it up on the slide, so ignore what the slide says: stateless reset offload.
Q: Many of you remember that if a server goes down, it loses state, and then when it receives some random packet it can send a stateless reset packet. The idea is that you have a box in front of your servers; it knows the server is down for maintenance, and it'll send these stateless resets. I personally do not find this use case compelling. There's an issue on it; I would welcome your input on it. Right, okay, can you return to the previous slide?
Q: So that's it. I don't know if anyone has any opinions, strong or otherwise, about how we should move forward with this document, or if they personally have a server where they're getting ready to do the server bit of this. But that's all I've got, thanks.
Q
There's
a
there's,
a
jabber
question:
does
the
stateless
reset
oracle
problem?
Is
it
solvable
and
it's
been
a
couple
months?
I
didn't
remember
the
details,
but
the
answer
is.
Q
Yes,
okay,
so
I
am
hearing
mass
indifference
to
how
we
move
forward
here
and
not
a
lot
of
urgency
on
on
implementing
this.
So
I
think
this
will
remain
pretty
much
frozen
and
stuck
for
at
least
a
little
while
if
people
are
just
never
going
to
do
address
migration
as
a
if
for
the
foreseeable
future,
that
would
be
good
information
as
well.
We
could
do
it
on
the
list,
but
I
think
I'm
done
here
thanks.
Q
D
Q
Yeah, I guess I should respond. I don't have any strong feeling about that one way or the other. It's another example of one of the lateral ways in which we can expand this, but I'm concerned that this keeps expanding while we're not actually getting any closer to completion.
D
O
So I think what you're really talking about here is solving the session problem, one way or another, for some definition of a session, right? We have the IDs that we use that define the QUIC connection session, and then we've been adding extensions so that, as interpreted through load balancers and so on, that session can continue past certain state changes. There are other state changes that are interesting for this problem space of addressing a session.
O
Sadly,
I
don't
see
a
whole
lot
of
interest
overall
in
folks
in
solving
that
problem
right
now,
so
I
I
don't
think
it's
all
that
surprising
that
you're
not
getting
much
interaction
on
it,
because
the
problem
is
much
larger
than
just
that,
and
that
problem
is
also
not
being
discussed.
That's
it.
Q
All
right,
if
there's
there's,
there's
no
honestly
q,
there's
a
lot
of
other
stuff
to
talk
about.
So
if
people
have
thoughts
about
this,
they
don't
want
to
share.
Please
email
me:
I
may
confer
with
the
chairs
offline
about
how
to
proceed,
because
I'm
censoring
a
lot
of
indifference
about
all
the
details
of
this.
A
Thanks,
thank
you.
Mine
next
up
is
robin
to
talk
about
q
log
robin.
Would
you
like
me,
two
percent,
or
would
you
like
to
do
it.
A
Oh,
I
missed
that
I'll
approve
it
there.
We
go
another
reminder
for
watson
if
you're
listening
that
now
now
is
your
main
chance
to
star
and
take
over
from
robin
to
just
fill
out
those
notes.
Thank
you,
watson.
Take
it
away
robin.
S
S
Let's take a look at a simple example here of a very simple qlog, on the left, where we have some metadata on top and then a list of events. Each event is quite simple: it has a timestamp, a name — or an event type, if you will — and then some structured data that depends on the event. Most of the two protocol documents that we have look a bit like the bottom right there.
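The shape Robin describes — metadata on top, then a list of events, each with a time, a name, and event-specific data — can be illustrated with a small sketch. The field values below are invented for illustration; the actual field names and event taxonomy are defined by the qlog drafts:

```python
import json

# A toy qlog-shaped document: metadata first, then a flat event list.
qlog = {
    "qlog_version": "draft",              # metadata on top
    "traces": [{
        "vantage_point": {"type": "server"},
        "events": [                        # each event: time, name, data
            {"time": 0.0, "name": "transport:packet_sent",
             "data": {"header": {"packet_type": "initial", "packet_number": 0}}},
            {"time": 1.5, "name": "recovery:metrics_updated",
             "data": {"congestion_window": 12000, "smoothed_rtt": 25}},
        ],
    }],
}

text = json.dumps(qlog, indent=2)          # the JSON serialization on the left
parsed = json.loads(text)
assert parsed["traces"][0]["events"][0]["name"] == "transport:packet_sent"
```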
S
They
define
what
type
of
events
you
should
log
and
what
they
should
look
like,
which
fields
they
include
how
those
fields
are
named,
what
types
they
are
and
so
on.
This
is
what
I'm
calling
the
schema
right
and
if
you
start
from
just
the
schema
on
the
bottom
right
there
you
can
imagine
there
are
many
different
ways
that
you
can
actually
serialize
this
into
a
concrete
presentation.
S
What
we
have
on
the
left,
of
course,
is
a
simple
example
of
that,
using
the
json
format,
which
is
also
the
default
that
we
use
or
have
been
using
for
q
log.
There
is
also
a
secondary
format
that
we
currently
define,
which
is
called
newline
delimited
json,
and
this
is
mainly
to
work
around
one
of
the
problems
with
json
is
that
it
is
very
strict
about
closing
all
of
the
elements
you
open
or
else
you'll
get
parser
errors,
and
this
is
kind
of
annoying.
S
If
you
are
streaming
events
in
a
live
implementation,
where
you
don't
always
have
the
option
to
properly
close
everything
down,
so
this
is
mainly
added
as
a
way
to
do
a
streaming,
cue
log
option
as
well,
so
this
is
kind
of
what
we
have
this
is
now
adopted,
and
now
we
need
to
move
to
what
do
we
actually
want
to
standardize
about
this?
What
do
we
keep?
What
do
we
add?
What
do
we
change
right?
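The streaming problem and the newline-delimited workaround can be sketched like this: instead of one JSON document that must be closed before it is valid, each event is serialized as its own line, so a log truncated mid-connection still parses up to the last complete line. The event contents below are invented for illustration:

```python
import io
import json

def write_event(stream, event: dict) -> None:
    # One complete JSON object per line; no trailing ']' needed for validity.
    stream.write(json.dumps(event) + "\n")

buf = io.StringIO()
write_event(buf, {"time": 0, "name": "transport:packet_received"})
write_event(buf, {"time": 3, "name": "transport:packet_sent"})
buf.write('{"time": 5, "name": "trunc')   # simulate a crash mid-write

events = []
for line in buf.getvalue().splitlines():
    try:
        events.append(json.loads(line))
    except json.JSONDecodeError:
        break  # the incomplete final line is simply dropped

assert len(events) == 2
```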
S
What
is
the
scope
of
what
we're
going
on,
and
so
I
want
to
talk
about
these
two
things
mainly
today.
The
first
one
is
the
civilization
format,
so
the
main
reason
I
chose
json
at
the
beginning
is
because
I
think
it's
very,
very
flexible.
It's
also
very
well
supported
in
most
programming
environments,
and
so
it's
easy
for
even
novices
to
make
tools
to
interpret
keylog
logs.
It's
also
textual,
so
we
don't
even
need
new
tools.
S
We
can
use
existing
text-based
tools
that
people
are
using
for
existing
logs
network
logs
for
the
queue
logs
as
well,
but
there
are
also
obvious
downsides.
One
of
the
main
ones
is
of
course,
performance.
Json
is
not
the
most
efficient
of
formats
and
for
this
discussion,
specifically
nd
json
or
really,
as
far
as
I
can
tell
not
any
type
streaming,
json
has
actually
been
practically
standardized,
at
least
not
within
the
ietf.
S
Recently,
we've
talked
a
lot
about
using
seabor
there's,
even
someone
who
brought
up
maybe
doing
something
with
pcap
and
g,
and
I
think
we
can
all
agree
that
we
could
bike
shed
on
this
and
discuss
the
merits
of
these
different
formats
for
a
couple
more
years
to
come.
But
I
don't
really
think
we
want
to
do
that.
So
I
wanted
to
phrase
this
question
in
a
slightly
different
way
and
kind
of
say
you
know
what
what
is
what
is
the
problem
we're
trying
to
solve?
S
What
is
the
issue
actually
or
why
do
we
need
a
standard?
Do
we
need
something,
an
interoperable
logging
format
that
makes
it
easy
to
create
reusable
tools
on
top
of
that
is
a
lack
of
tools.
Some
is
that
the
main
problem,
or
is
it
the
problem
that
none
of
you
seem
to
figure
out
how
to
efficiently
lock
stuff
from
your
implementations
and
the
way
I
phrase
that
makes
clear
that,
of
course,
I
think
it's
the
first.
I
think
it's
the
lack
of
tools
and
not
the
second.
S
I
think
most
of
you
have
a
very
good
idea
of
how
to
efficiently
log
you're
already
doing
this
for
many
other
things
as
well.
So
do
we
really
want
to
define
or
put
an
emphasis
on
that,
and
this
is
kind
of
an
unfair
statement.
It's
it's!
It's
it's
a
false
dichotomy
right,
because
choosing
one
does
not
necessarily
preclude
the
other.
S
You
can
do
something
tools
based
on
an
optimized
format,
but
in
my
experience
over
these
past
few
years,
working
on
this,
it's
it's
that
the
more
optimized
formats
make
it
more
difficult
to
create,
tooling,
reusable
tooling,
or
at
least
have
have
novices
or
newer
people
create
these
tools
right
and
there.
S
S
And let's be honest: I think people will keep on doing that even if we switched to CBOR or pcapng or what have you. Finally, I think that qlog in JSON compresses very, very well, which might alleviate some of the issues with JSON. So the concrete proposal here — what I'm building up to — is that we just say: okay, we're going to stick with JSON.
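The claim that qlog JSON compresses very well is easy to check in miniature: repeated field names and enum-like values make the text highly redundant, so a generic compressor recovers much of JSON's size overhead. The events below are synthetic, and the exact ratio will vary with real logs:

```python
import json
import zlib

# Synthetic qlog-like events: same keys repeated, only numbers vary.
events = [{"time": i, "name": "transport:packet_sent",
           "data": {"header": {"packet_number": i, "packet_type": "1RTT"}}}
          for i in range(1000)]

raw = json.dumps(events).encode()
compressed = zlib.compress(raw, 6)

# Highly repetitive keys/values compress dramatically.
assert len(compressed) < len(raw) / 5
```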
S
Gonna
have
something
custom
for
a
streaming
option,
maybe
based
on
something
existing
something
new.
I
don't
know
just
to
focus
on
that,
and
that
doesn't
mean
that
we
can't
switch
to
a
more
optimized
format
down
the
line,
but
that
wouldn't
be
the
main
focus
that
we
have
at
the
beginning
right.
S
S
The first category of events is very close to the wire image; they mainly describe what's going in and out of the implementations. Then we have some additional events, of course, for things that are not on the wire — mostly internal state changes, say related to congestion control. And then, crucially, there's a third category that I've called custom events. These are not in the spec at all, which means implementations are free to define their own events as they see fit.
S
Of
course,
tools
can't
interpret
what
they
don't
know,
but
they
can,
for
example,
in
cubious.
We
do
that
they
can,
in
some
cases
display
even
events
that
they
don't
know
to
make
them
even
usable
in
practice,
even
in
this
case,
for
example,
if
someone
had
a
typo
in
the
event
type,
it
still
shows
the
event
it's
still
usable
for
for
knowing
what's
going
on
right
now.
S
Well,
the
packet's
active
end
is
semantically
different,
in
that
it
only
logs
the
newly
act,
packets
at
a
specific
time,
which
is,
of
course,
core
input
for,
for
example,
the
loss
detection
algorithms
there.
So
these
are
similar,
but
not
the
same,
and
then
we
have
a
third
one.
That
is
also
related,
which
is
called
the
frames
processed
event
where
we
basically
just
log
the
frames,
the
wire
image
of
frames
that
comes
in,
but
without
the
packet
headers.
S
This
is
mainly
to
accommodate
implementations
that,
for
example,
separate
processing
of
the
headers
and
the
payloads,
or
that
might,
for
example,
want
to
aggregate
a
lot
of
frames
across
packets
into
a
single
event
for
easier
processing
again
something
useful,
but
this
kind
of
to
me
ties
into
something
that
mark
nottingham
talked
about
last
time
that
I
presented
on
q
log,
where
this
is
starting
to
look
more
like
surfacing
implementation
details
in
the
the
people
doing
the
q
log,
rather
than
actually
focusing
on
the
semantics
and
the
events
of
the
protocols
themselves.
S
Can't
I
have
like
a
template
frame
somewhere
defined
once
and
then
I
only
log
the
fields
that
have
been
changed
over
time
to
make
it
easier
for
me
now,
I'm
personally
not
a
biggest
fan
of
this
kind
of
proposal,
but
I
do
understand
where
this
question
is
coming
from
right
and
that's
kind
of
the
problem.
S
I
think
all
of
these
things
are
useful,
but
I
don't
think
we
should
or
can
support
all
of
these.
It
would
just
get
way
too
confusing
both
for
people
trying
to
implement
q
log
and
which
events
should
I
be
outputting
now,
but
especially
also
for
tools
which
events
should
I
be
looking
for.
S
How
do
I
merge
separate
events
that
might
tell
me
the
same
thing,
but
slightly
different,
and
also,
how
do
I
communicate
to
the
user
of
the
tool
which
events
are
actually
being
used
to
visualize
this
particular
data
right,
and
so
we
need
some
way.
I
think
a
design
philosophy
or
a
guideline
to
to
limit
or
to
guide
how
how
we
do
these
event,
definitions
and
I
again
have
a
concrete
proposal
for
that.
S
The
idea
is
basically
to
stay
as
close
to
the
wire
image
as
possible,
where
possible,
to
just
lock,
what's
going
in
and
out
and
mainly
deviate
from
that
for
internal
state
events,
like
I
said
for
things
like
congestion
control
that
you
don't
see
on
the
wire
right,
that's
basically
what
we
had
in
queue:
log
draft,
zero,
zero,
the
very
first
things
and
also
in
serial
one
and
then
our
proposed.
We
can
deviate
from
this
in
a
pragmatic
way
when
it
makes
sense,
for
example,
the
packets
act
for
me
does
make
sense.
S
Another
thing
that
I
would
find
is
very
interesting
is
for
q
pack,
where
currently
we're
only
logging,
the
pure
wire
image-
and
I
don't
know
about
you,
but
for
me-
that's
still
very
difficult
to
parse
to
understand.
What's
actually
going
on,
so
I
would
like
to
have
some
higher
level
events
on
there
as
well.
S
Yeah
things
like
the
frames
processed
event
that
we
would
scrap
that,
even
though
we
have
it
now
to
make
things
easier,
this
means
that
implementations
that
do
want
to
split
these
things
out
or
can't
log
these
in
the
same
point,
they
would
have
to
move
to
like
a
custom,
q,
log
or
a
partial
q,
lock
or
whatever
you
want
and
then
have
to
write
a
converter
application
to
make
it
into
a
proper
queue
compatible
with
the
tools
again.
S
This
is
something
many
people
have
already
been
doing,
and
I
don't
see
this
as
a
very
big
problem
myself.
So,
in
summary,
what
we
want
to
do
is
is
kind
of
get
an
idea
of
the
scope
of
qlog.
Where
are
we
moving
towards
and
I
propose
keeping
it
relatively
contained
around
what
we
already
have
and
even
a
little
bit
less.
I
would
like
to
stick
with
json
and
something
custom
for
streaming,
at
least
for
now,
and
then
also
reduce
the
amount
of
events
that
we
have
now.
S
These
are
only
two
of
the
main
issues
we
have,
maybe
50
more
open,
of
which
all
the
feedback
is
mostly
mostly
welcome,
definitely
welcome,
but
what
I
think
why
I'm
bringing
this
to
you
today
is
that,
if,
if
we
get
some
clear
input
on
these
two
things,
that
will
make
a
lot
of
the
existing
issues
much
much
easier
to
solve
and
to
make
progress
on.
If,
if
we
can
get
your
thoughts
on
this,
so
please
come
to
the
mic.
Thanks.
A
Thank you, Robin. Just before we hit the mic queue, I want to interject as a chair: qlog is a newly adopted document.
A
It's
been
rumbling
along
for
a
while
people
have
been
implementing
this
thing,
but
yeah
like
robin
said,
there's
quite
a
few
issues
on
that
date
on
on
the
github
repo
that
we
would
like
to
start
burning
through
really,
and
these
two
questions
aren't
the
only
ones
that
will
need
to
be
answered,
but
the
sooner
we
can
come
to
some
agreement
on
this,
some
rough
consensus,
the
the
more
we
can
move
on.
Just
in
case
you
missed
it
robin.
During
your
presentation.
A
There
was
a
suggestion
that
rfc
7464
could
be
used
as
a
streaming
json
format.
So
that
might
be
something
worth
looking
up
and
to
consider
in
the
discussion
we're
going
to
have
now
in
the
next
six
minutes
or
so
and
with
that
I'll
I'll,
but
back
out
of
the
queue.
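For reference, RFC 7464 "JSON Text Sequences" frames each JSON text with a leading RS byte (0x1E) and a trailing newline, which lets a reader resynchronize even after a damaged record. A small sketch of that framing — not tied to any existing qlog tooling:

```python
import json

RS = b"\x1e"  # RFC 7464 record separator

def encode_seq(records) -> bytes:
    # RFC 7464: each record is RS + JSON text + LF.
    return b"".join(RS + json.dumps(r).encode() + b"\n" for r in records)

def decode_seq(data: bytes):
    out = []
    for chunk in data.split(RS):
        if not chunk.strip():
            continue
        try:
            out.append(json.loads(chunk))
        except json.JSONDecodeError:
            continue  # skip a damaged record and keep going
    return out

data = encode_seq([{"time": 0, "name": "a"}, {"time": 1, "name": "b"}])
assert decode_seq(data) == [{"time": 0, "name": "a"}, {"time": 1, "name": "b"}]
```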
A
N
All right, I'm in the queue — shall I go ahead right now, whoever is running the queue? So, very quickly: the way I've always thought about qlog, and the reason we wanted to standardize this in the QUIC working group, is that it is, as some others have said on the chat as well, an interop format — and the interop, in this world, happens between producers on the one side and consumers, which are tools, on the other side.
N
So
to
me,
q
log
is
the
p
cap
off
quick
me
or
pick
up
in
g
of
quick
for
that
matter.
So
if
you
have
a
format,
basically
that
you
know
whether
you
produce
two
directly
or
you
produce
through
tools,
it
doesn't
matter
to
me
because
if
you
want
to
use
the
tools
that
are
on
the
other
side
of
the
format,
you
go
through
this
format.
N
That
is
the
value
of
q
log.
For
me-
and
I
I
like
the
the
the
the
principles
that
you
outlined
in
the
previous
slide-
and
I
think
sticking
to
that
makes
a
lot
of
sense,
sticking
very
close
to
the
wire.
The
wire
image
is
fundamental.
It's
primary!
We
need
that
everything
else
is
is
additional
useful
information,
and
that
is
always
a
question
of
how
many
people
care
about
it.
How
many
people
want
that
information,
and
so
on
and
so
forth?
N
So
in
that
sense,
I
the
examples
that
you
outlined
and
everything
you
said
made
complete
sense
to
me,
but
I
want
to
say
that
to
me,
qlog
is
always
an
intermediary
format,
so
I
wouldn't
get
moved
by
people
who
say
I
can't
produce
this
format
directly
from
my
server
or
my
client.
That's
okay.
They
can
produce
an
intermediate
format,
use
the
converter,
go
from
there
to
queue.
T
Hi,
so
this
is
paul,
hoffman,
a
definitely
a
lurker
here
and
quick.
In
fact,
I
only
care
about
q
log
at
the
moment,
so
I
agree
with
jonah.
The
design
philosophy
stuff
is
super
important.
T
One
thing
in
the
current
draft
that
gets
very
mixed
up,
though,
is
that
it
seems
like
q
log
is
either
a
logging
format
or
a
debugging
format,
and
many
of
the
choices
that
you
made
were
because
it's
mostly
used
for
debugging.
Then
you
can
expand
some
space
here
or
you
can
lose
this
or
lose
that,
and
I
think
the
working
group
needs
needs
to
determine
that
first,
because
a
logging
format
needs
to
be
much
more
efficient
needs
to
this,
and
this
and
this,
if
you're,
going
to
be
keeping
huge
logs
even
sample
blogs.
T
I
mean
we're,
certainly
hoping
that
quick
servers
are
spewing
out
gazillions
of
messages
a
moment.
So
once
that
gets
a
little
bit
more
settled,
I
think
a
lot
of
these
things
will
will
you
know,
fall
out,
and
that
will
also
answer.
Some
of
the
things
in
the
schema
are
very
loose.
In
fact,
like
names,
are
this
way,
oh,
except
when
they're
this
way?
T
That's
okay
for
a
debugging
format,
that's
absolutely
not!
Okay
for
a
logging
format,
and
then
the
last
thing
I'll
say
is.
I
think,
that's
keeping
it
as
a
schema
that
is
json
like
or
json
based
is
great,
and
I
certainly
would
expect
somebody
to
create
a
a
binary
format
that
is
inherently
smaller,
easier
to
deal
with
and
such
c
board
would
work
just
fine.
I
say
that
as
somebody's
used
cbor
for
a
while,
it
doesn't
have
to
be
the
main
one.
T
It
can
be
a
side
one,
but
that
way
as
that
tooling,
that
you're
talking
about
comes
up
there's
an
obvious
platform
for
it.
There's
libraries
for
seaboard
everywhere.
There
are
for
others
as
well,
but
then
that
way
you
can
get
it
because
you
can't
compress
jason
on
the
fly.
You'll
eat
your
entire
cpu,
so
both
of
those
desires
could
be
held.
So
I
certainly
am
willing
to
participate
in
this,
but
I
would
really
like
to
know
if
it's
meant
to
be
logging
or
debugging
if
it's
debugging
I'll
be
much
looser
thanks.
S
Yeah
I
like
that
it
obviously
started
mostly
as
a
debugging
format
right
and
it
evolves
into
logging.
I
I
hope
maybe
someone
from
facebook
can
give
a
little
bit
more
information
on
that,
because
they,
as
I
understand
it,
do
do
constant
logging
at
at
their
scale,
even
though
it's
sampled,
which
which
does
seem
to
hold
up
with
the
current
q
log
thing.
So
I'm
not
sure
if,
if
that
comment
of
yours
was
mostly
about
the
json
aspect
of
it
or
mostly
about
the
types
of
events
or
the
verbosity
of
defense,
we
log.
U
Watson
to
the
step
up
my
main
interest
in
q
log
is
that
I
might
have
to
look
at
them
occasionally
to
figure
out
what
the
heck's
going
on,
and
I
think
that
that
means
keeping
close
to
the
wire
format.
So
it's
easy
to
compare,
what's
happening
at
each
end
and
having
additional
annotations.
If
that
would
be
helpful
from
implementations,
it
was
an
extensible
thing,
which
would
also
be
very
important
and
that's
probably
more
important
than
size
and
speed,
because
I'm
just
going
to
do
this
on
connections
that
have
trouble.
H
Yeah
so,
first
off
I
was
just
saying
a
general
thing,
as
speaking
as
individual,
I
think
it
would
be
useful
structurally
to
have
the
schema
and
the
serialization
specified
separately,
mostly
because
they
are
separate
things.
So
so
it
gives
us
the
flexibility
to
say
down
the
road
you
know
have
another
for
an
arbitrary
serialization,
whereas
the
schema
once
we
decide
how
we
want
to
represent
that
best
can
just
be
more.
H
I
think
that
we
should
be
very
careful
about
prematurely
trying
to
do
any
sort
of
compression
and
the
main
reason
I
say
that
is
because
compression
algorithms
are
very
good
at
that,
and
people
when
we're
designing
file
formats
are
not
very
good
at
this.
I
know
we
do
a
lot
of.
We
do
a
lot
of
like
compression
in
quick
in
related
areas,
but
file
compression
is
a
different
game
and
like
or
things
that
are
streaming
compression
are
a
different
thing.
H
So
probably
the
people
that
you
know
spend
their
lives
researching
that
and
doing
that
are
going
to
do
a
better
job
than
we
are
and
so
to
comment
about
the
facebook
thing.
Yes,
we
log,
we
don't
even
do
the
nd
json,
we
just
log
json
straight
from
our
quick
connections
and
that
works.
H
Could
it
be
better
and
smaller?
Yes,
it
could
be
a
lot
better
and
smaller,
but
just
compressing.
That
makes
a
huge
difference
because
their
you
know
compressors
are
fast
and
q
and
q.
Log
json
is
extremely
compressible,
but
I
think
that
like
having
a
standard,
interchange
format
for
for
like
that's
just
json
is
totally
where
we
should
go,
because
that
was
the
most
agile
way
to
represent
it,
so
that
we
can
have
as
many
tools
as
possible.
N
Lucas
did
I
get
in
my
client
before
you
close
it
off.
Just
just
just
be
quick,
please,
okay,
yep,
very
quick.
I
agree
with
what
matt
said.
I
realized
that
I
didn't
actually
answer
the
questions
of
the
proposals
that
you
have
in
the
slide.
I
agree
that
having
the
json
serialization
is
actually
very
useful.
I
routinely
go
through
through
q
logs
manually
and
reading.
The
file
is
to
be
able
to
read
the
file
is
super
helpful
and
yeah.
I
think
where
it
is
is
good.
A
Yeah
we're
at
time
for
this
presentation.
I
hope
robin
you
got
some
of
the
input
that
you're
looking
for.
I
know
this
is
an
important
question
that
you
would
like
answered
in
order
to
make
progress.
I
the
sense
I'm
getting
here
is
that
jason
seems
fine
and
there
might
be
some
some
work
to
do
on
a
streaming
format
that
we
can
go
away
and
do
if
that's
not
the
case
like
I
encourage
people
to
speak
up
on
the
list.
A
We'll
take
this
question
to
the
list
to
try
and
confirm
the
consensus
that
we're
feeling
yeah
but
yeah.
I
I
think
the
important
thing
is
that
a
json
format
for
serialization
doesn't
prevent
other
things.
A
S
A
A
All right, so next up we have Ian to talk about the ACK frequency draft, also a newly adopted document. The agenda did say Jana, so I got myself confused, but this will be Ian presenting. We've got 20 minutes for this, so take it away.
R
Thank you. Can you advance to the next slide?
R
Can
everyone
hear
me,
oh
cool,
so
we
have
a
few
pr's
that
are
fairly
straightforward.
You
know
just
like
renames
and
such
there
were
a
number
of
confusions
around
exactly
what
the
fields
meant,
particularly
because
it
was
one
peer,
sending
a
frame
that
modified
the
behavior
of
the
other
peer,
and
so
we
we
attempted
to
pick
naming
new
names
that
are
a
little
bit
less
confusing
than
the
old
names.
R
R
Oops — is that the next one? Oh, let's just take it home. So Martin Thomson rightly suggested that having a special value that's completely invalid is sort of silly, so we have taken the suggestion and adjusted the meaning of the field by one. This changes zero from being an invalid value to meaning, basically, send an acknowledgement immediately, as soon as you receive a packet. There's an outstanding PR for this.
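The off-by-one adjustment can be illustrated as follows. Under the old encoding zero was simply invalid; under the proposed one, the value is shifted by one so that every value is meaningful, with zero meaning "acknowledge every ack-eliciting packet immediately." The function and naming below are mine, for illustration, not taken from the draft:

```python
def packets_before_ack(encoded: int) -> int:
    """Hypothetical decode after the off-by-one change: the on-the-wire
    value is shifted by one, so 0 (previously invalid) now means
    'ACK after 1 packet', i.e. acknowledge immediately."""
    if encoded < 0:
        raise ValueError("field is an unsigned varint")
    return encoded + 1

assert packets_before_ack(0) == 1   # immediate ACK; no invalid value remains
assert packets_before_ack(1) == 2   # the classic ack-every-other-packet
```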
R
This
does
change
in
europe,
and
so
the
pr
also
changes
the
transport
parameter
code.
One
so
heads
up
take
a
look
at
the
pr.
If
you
care
and
also
a
heads
up
that
yeah
the
this
will
break
interrupt
slowly
next
slide.
R
R
It's very unlikely that people are going to want to wait billions of packets before sending an immediate acknowledgement, and so he wanted to potentially move it to a smaller storage value — and the question was how that should work. In this case, we said: if you can't support a larger value, for whatever implementation reason,
R
You
must
cap
the
receipt
value
to
the
largest
supported
value,
and
the
goal
here
is
just
to
make
the
behavior
predictable
and
to
make
sure
that
you
know
people
don't
do
something
like
oh
well,
that
value
is
larger
than
like
the
largest
value.
I
support
I'm
just
going
to
ignore
it
right,
which
would
obviously
be
suboptimal,
given,
I
think
probably
the
the
smallest
large
value
you
can
imagine
supporting
is
something
like
255
or
you
know,
256.
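The "cap, don't ignore" rule is essentially a one-line clamp — a receiver whose internal counter is smaller than the received value clamps it rather than discarding the frame, so behavior stays predictable. The 255 limit below is just the example floated in the session:

```python
def apply_packet_tolerance(received: int, max_supported: int = 255) -> int:
    """Cap a received value to the largest value this implementation can
    store, rather than ignoring the frame outright (illustrative names)."""
    return min(received, max_supported)

assert apply_packet_tolerance(10) == 10        # normal case: used as-is
assert apply_packet_tolerance(2 ** 40) == 255  # huge value: capped, not dropped
```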
R
once
we
make
the
other
pr's
adjustment.
I
don't
really,
you
know,
foresee
any
issues
with
this,
but
I
figured
I'd.
Give
people
a
heads
up
next
slide.
R
Okay, so now we have the first of the more complicated issues: where can the ACK_FREQUENCY frame appear, and, by inference, should the min_ack_delay transport parameter be remembered for 0-RTT? The real question is whether the frame can appear in 0-RTT, because that's the one place where you could plausibly try to use it. But, yes, you do need to remember the min_ack_delay transport parameter in order to make that work.
R
Otherwise
you
could
accidentally
send
it
invalid
act.
Frequency
frame
next
slide.
So
let's
look
at
the
kind
of
pros
and
cons
here.
R
So
usually
we
allow
frames,
because
why
forbid
them
and
that's
a
special
case
so
that
that
certainly
is
valid
in
this
case,
maybe
there
is
a
tiny
reduction
of
acknowledgements
for
large
client
to
server
zero
rtt
upload
flights.
I
think
this
use
case
is
pretty
tenuous
at
best,
given
iw10
I'm
assuming
that
and
the
pros
of
bidding
it
are
that
max
actually
is
not
remembered.
R
So
it
is
consistent
with
the
existing
transport
brand
from
rfc
9000
and
our
min
actually
may
actually
vary
by
server
platform.
Os
and
other
attributes
which
could
increase
crt
rejections,
for
example,
if
you're
moving
from
say
linux
to
windows
server,
but
like
bookman
support
click,
then
that
might
you
know
cause
you
to
reconfigure
your
servers
in
different
ways.
R
The editors are recommending that we make it 1-RTT only, because the utility of ACK_FREQUENCY in 0-RTT is very tenuous, as I said, and we think the issue of different server platforms and operating systems potentially wanting a different min_ack_delay is a real issue that deployments will actually hit. But — thoughts? Discussion?
R
B
B
K
I don't think I can implement this recommendation without massive disruption to the code base. We don't know, when we're sending or receiving frames, whether it's 0-RTT or 1-RTT at that level, so this would be a massive inconvenience. We do have a default value, I think, for max_ack_delay in the case that it's absent, and that would prevent this from being a major problem.
K
R
Yeah, I think the persistence is actually not a big concern. We decided that, as you said, persisting it or not persisting it is basically a wash. I think the concern was that the misconfiguration risk of it varying across different locations was the larger concern.
R
K
K
E
N
Yes, if I could jump in briefly — Martin, I'd like to hear a little bit more, if you can, about why making this 1-RTT only would be a problem for you.
N
K
So it's very clear in the spec — well, it's actually kind of not clear in the spec, but there you go. There are a lot of frames that can only be sent in 1-RTT simply by their very nature — for example, you can't send an ACK frame in 0-RTT because there's nothing to acknowledge — but there are none that expressly forbid sending in 0-RTT, and that's what you're looking to do. And so our implementation — and I know that others have done the same —
K
It
has
simply
divided
frame,
sending
and
frame
receiving
by
packet
number
space
rather
or
sorry.
Yeah
packet
number
space
not
increase.
K
N
Yeah, I mean, this is really one that we couldn't find a clear answer for, and I think that, honestly, whichever one the working group wants to go with would probably be fine.
N
The
one
cons
that's
that's
mentioned
here
is,
but
it's
not
clear
very
clear
is
that
you
could
hit
a
different
machine
in
the
server
farm
when
you
come
up
with
zero
rtd
and
the
minute
delay
might
actually
be
different
on
a
different
host
in
your
in
your
in
your
server
fleet
and
yeah.
Again,
it's
a.
K
N
R
Yeah,
I
think
this
is
one
where
I'd
like
to
also
just
give
people
more
time
to
think
about
it,
because
and
look
at
the
pr,
because
there
may
be
some
gotchas
that
we
are
not
documenting
in
this
slide
that
cause
us
to
go
one
way
or
the
other
on
this.
So
I
I
think
we
would
appreciate
another
piece
of
sets
of
eyes
on
on
thinking
about
this
and
making
sure
that
we
haven't
missed
some
terrible
education.
R
R
V
R
I think we're asking whether it's legal to send ACK_FREQUENCY in a packet that is 0-RTT encrypted, as well as whether taking action on it is a problem. I think it's not a problem — actually, that's not true. There is also—
R
There
is
also
a
race
condition
that
I
think
potentially
gets
introduced
about,
whether
you
apply
transport
parameters
or
the
app
frequency
frame.
First
and
john-
and
I
talked
to
that-
and
I
forgot
to
put
that
in
the
slide-
and
I
can't
remember,
did
we
resolve
to
say
that
that
decide
that
was
resolvable
or
is
the
only
way
to
resolve
it
basically
to
buffer
all
at
frequency
frames
until
zero
rtt,
I'm
sorry
until
the
transport
params
are
applied?
Do
you
remember.
R
Or
transport
programs
are
applied
later
because
say
you
queue
a
zero
to
t
packet
locally
in
a
local
buffer
and
then
yep.
You
apply
transport
params
at
some
later
point
in
time,
so
you
do
like
handshake
offload
or
something
like
that,
and
so
you
both
have
a
frequent
c
frame
that
you
potentially
can
process
before
you
apply
the
transport
grams.
N
Right
that
was
the
thing
so
right.
I
I
think
that
we
should
probably
at
this
point
we're
getting
diminishing
returns.
We
should
take
this
back
to
the
to
the
thread
and
and
discuss
and
hashes
out
there
yeah
and
then,
as
as
ian
said,
anybody
with
more
concrete
issues.
Please
raise
them
there,
because
that
will
help
us
make
a
decision
on
which
way
to
go.
R
Okay,
thank
you.
I'm
gonna
move
on
to
the
next
slide.
This
is
at
least
one
more
important
question.
I
think
we
want
to
resolve.
R
R
R
If you set ignore_order to true — ignore_order means, of course, ignore the order in which packets arrive — then there's no real way to get an immediate acknowledgement, short of just sending a lot of packets. There are a number of ways to solve this problem. First, we decided that this draft created this problem, so it felt right to solve it in this draft.
R
Also,
the
type
of
people
who
are
looking
at
acknowledgement,
behavior
of
the
sort
that
frequency
provides
probably
also
are
interested
in
this
sort
of
act,
pull
mechanism
you
know
as
well,
so
the
solutions
we
we
have
at
the
moment
are
use
an
unused
bit
and
a
header
which
is
nice
because
it
I'll
actually
go
over
the
pros
and
cons
use
a
one
byte
frame
or
the
other
hybrid
solution
per
se
that
I
came
up
with
is
to
use
a
one
byte
frame
and
also
offer
a
stream
frame
code.
R
Point
that
elicits
an
immediate
acknowledgement
and
it'll
be
kind
of
obvious
why
that
additional
frame
could
potentially
be
useful
and
hopefully
in
the
next
slide.
So
next
slide.
R
R
With the header bit you can repack the payload just the way it was — you don't have to split a packet's payload or do anything like that — and there's no byte overhead. But there are not very many header bits left, so it is fairly unappealing from that perspective. Next slide.
R
Using
one
bite
frame,
call
it
a
pong
or
I
don't.
It
doesn't
really
matter
what
we
call
it
pretty
simple
to
implement
very
simple
to
understand.
R
We
have
a
lot
of
frame
types
that
are
available
even
if,
in
the
one
bite
land
the
the
major
con
is
this
case,
where
you're
trying
to
retransmit
previously
sent
payloads
and
fit
into
a
single
packet.
You
may
end
up
basically
with
one
trailing
bite.
You
don't
know
what
to
do
with
it's,
not
the
end
of
the
world,
but
it
it
may
make
things
a
little
bit
awkward
in
some
circumstances
and
then
option
three
is
the
frame.
Oh,
the
next
slide.
Sorry.
R
R
So our proposal is to add the one-byte frame now and evaluate later whether the stream frame code point is really worth adding. I think Jana and I both felt that using one of the few unused bits in the header would probably run afoul of a whole number of people, including all the people trying to deploy loss bits and any other such mechanisms. But we'd definitely like input from the working group on this one.
E
R
Yeah, I agree: tail packets almost always have more than one byte available. But there is also the other case of, say, a probe timeout, where you might have a full-size packet's worth of data that you might want to retransmit. That's the case I was trying to allude to earlier.
M
M
You have the packet loss case and the reordering case, and both can hide in that case, while in the other direction it's — you know, it's reordering when you have something arrive with a lower number than the highest. But do we actually want to differentiate these, when you're asking whether this direction of going back is okay to skip, and not the gaps?
R
I
my
intuition
is
is
know
that,
like
the
rfc
9000,
we
want
to
treat
these
all
identically.
I
saw
your
issue
open.
Thank
you
very
much
magnus
for
pointing
out
that
we
were
slightly
non-specific
in
the
current
draft.
I
think
my
intent
was
to
align
with
rfc
9008
unless
there's
some
reason
to
to
change
what
kind
of
in-order
receipt
means,
but
I
like
it
yeah.
I
think
that
was
just
a
very
slight
miss
specification.
R
I
think
the
one
thing
I'd
call
it
about
that
is,
I
I'm
not
sure
if
I
understand
all
of
these
cases
for
the
ignore
order
flag.
So
if
other
people
who
have
those
use
cases
think
that
it
would
be
worth
breaking
out
those
different
circumstances,
then
you
know
I
I
think
I'd
like
to
understand
those
use
cases
better.
R
Thanks,
christian.
D
R
I was intending that this be defined in such a way that this bit, this byte, elicits an immediate ACK, in the sense of kind of as soon as reasonably possible, but not so as to preclude optimizations such as draining the receive buffer before sending acknowledgements, or processing all locally buffered packets that are currently undecryptable before sending acknowledgements.
R
So if people think that's the wrong behavior, then they should definitely speak up, but I was intending to not break those optimizations.
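As a sketch of the semantics being described: an "immediate ACK" request honored as soon as reasonably possible, while still allowing the receiver to finish draining its buffers first. The frame code point, class names, and structure here are invented for illustration; the draft being discussed does not define them.

```python
# Illustrative receiver logic for the proposed one-byte "immediate ACK"
# frame. The 0x1f code point and all names here are made up.
IMMEDIATE_ACK_FRAME = 0x1f

class Receiver:
    def __init__(self):
        self.buffered = []     # packets buffered locally, e.g. awaiting keys
        self.sent_acks = []

    def process_buffered(self):
        # Draining already-buffered packets before acknowledging is
        # explicitly permitted: "immediate" means no extra delay, not
        # skipping work that is already queued.
        while self.buffered:
            self.buffered.pop()

    def on_frame(self, frame_type: int) -> bool:
        if frame_type == IMMEDIATE_ACK_FRAME:
            self.process_buffered()        # allowed optimization
            self.sent_acks.append("ACK")   # then ACK without further delay
            return True
        return False                       # other frames: normal ACK rules
```

The point of the sketch is the ordering: buffered work may still complete, but no new delay (such as waiting out an ACK timer) is introduced once the frame is seen.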
D
N
Now, I'm not hearing a lot of that on the chat, but it would be really helpful for us if we knew which of these directions we wanted to go.
D
H
I mean, from my perspective, it looks like most people are in favor of doing something, with either a one-byte frame or a PING-like thing being the same, also vaguely option two. But I don't know if we want to do option one, I think.
H
We should be really, really careful about ever doing that. I think that's probably because then we're going to have to consider all the other people that are doing things with those bits right now, which we are not; they're not adopted work, and so it'd be somewhat unfair to have adopted work just preempt all of that and say, sorry, you can't do anything.
N
A
Yeah, right, Tommy, and I know you've got a couple more slides, but just in the interest of kind of giving a fair shake to everyone, you know, take it to the list if you need to drive some input into those things. But thank you very much for the presentation and the discussion, everyone. Yeah, thank you.
R
And thank you for all the helpful issues, everyone. It definitely, like, improved the quality of the document, or will once everything's merged.
A
Cool. So next up on the agenda we had Martin Thomson to talk about greasing the QUIC Bit. Martin, you don't have any slides, I think. You are here now; you told me earlier you might not be. I don't know if you've got anything you really want to speak to here. I can see you're participating. Do you want to go ahead?
K
Yeah, I was just going to say there's nothing to say: people have implemented this. I haven't needed to touch it since I put the code points in the document. I think this is ready to go.
A
Thanks, yeah, succinct and to the point. If anyone hasn't read this thing, the draft is also short and succinct; I'd encourage that. And, you know, otherwise, as chairs, I think, yeah, we're probably ready to take this forward in the working group. That's cool, so we'll sit on that a little bit and have a chat, but beyond that, thank you, which gives us some time to get to the as-time-permits things.
A
So we have a number of slides from Martin Duke. Let's get on and I'll start presenting. These relate to the version negotiation stuff, hence why the presentation is called "version negotiation stuff".
Q
So there are a total of three drafts in this space that I'm connected to. One is called QUIC v2. It has a few purposes, which you can see there, but mainly, well, first of all, we've had talk about version greasing and ossification, and a lot of people said we should drop a v2 really quickly. Well, this is a very quick v2.
Q
All it is is just doing the things you're supposed to do in every version, which is rolling new HKDF labels and a new salt and saying this is a new version. So, you know, we could use it just as a VN working item, or, if you really want to have a v2, we can adopt it. There's a certain debate about the version number, whether it's two or not; let's get to that in a minute, right.
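Since the draft's whole content is "rolling new HKDF labels and a new salt", here is a minimal sketch of how a QUIC-style Initial secret derivation changes between versions. The v1 salt is the one from RFC 9001; the "v2" salt below is a deliberately made-up placeholder, not the value from any draft, and a full implementation would also roll the per-version key/iv/hp labels.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) instantiated with SHA-256
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand_label(secret: bytes, label: str, length: int) -> bytes:
    # TLS 1.3 HKDF-Expand-Label (RFC 8446) with an empty context
    full = b"tls13 " + label.encode()
    info = length.to_bytes(2, "big") + bytes([len(full)]) + full + b"\x00"
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# QUIC v1 initial salt (RFC 9001). The "v2" salt is a placeholder
# invented for this sketch only.
V1_SALT = bytes.fromhex("38762cf7f55934b34d179ae6a4c80cadccbb7f0a")
V2_SALT_PLACEHOLDER = hashlib.sha256(b"example-v2-salt").digest()[:20]

def client_initial_secret(dcid: bytes, salt: bytes) -> bytes:
    # A new version only has to swap the salt (and labels) to obtain
    # completely different Initial keys for the same connection ID.
    return hkdf_expand_label(hkdf_extract(salt, dcid), "client in", 32)
```

The design point being made in the meeting: changing these inputs is cheap, which is why a "very quick v2" that does nothing else is still a meaningful exercise against ossification.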
Q
So that bit is at least pretty straightforward. If it turns out there's any breaking stuff in v1 and we need a vehicle for that, this would be that vehicle. But enough about the draft itself; next slide. So this has raised a couple of questions that are somewhat aesthetic and somewhat practical. One is just whether version numbers are incremental or, you know, pseudorandom. I mean, there's an implication that v2 is better than v1 and v3 is better than v2.
Q
So do we just take something random from the version space, or do we increment them? In other words, should this draft be v2, or should it be v-foo? In the interest of time, we can table that to the list, I guess. I think the much more interesting thing is ALPN.
Q
I have there an excerpt from a previous version of the applicability draft, which encased a perspective that I think didn't have consensus, but obviously a lot of people agreed with it, which was that the h3 ALPN token was tied specifically to version one. In its defense, that does simplify version negotiation, in that you can typically use Alt-Svc for it. But it does have some downsides.
Q
One is kind of aesthetic and practical: if you have many versions and many ALPNs, then you're going to just end up with a matrix of all these things, and you will explode the ALPN registry, which is probably not good. A somewhat more practical concern for implementers is, at least in my mental model of the world:
Q
QUIC implementations know the versions they support, and applications provide the ALPN. So either you're binding your applications pretty closely to QUIC versions and making it hard to retire old versions and bring in new ones, or you're doing something sort of janky where the application provides an ALPN root and then QUIC applies the version codes associated with it, and it sounds sort of janky and error-prone. But anyway, those are the downsides as I see it.
Q
A second option came up in an issue on this document (it's on my personal GitHub, which is listed in the draft). I did a PR to basically not do that, and so there was a comment that maybe we should have to update HTTP/3, the HTTP/3 draft, to say that, oh, it also applies to this version. Which, of course, could potentially make the front page of the HTTP/3 draft a gazillion "updated by"s, which, you know, again is sort of an aesthetic thing, a little bit difficult. Next slide.
Q
So the third option is just to use an IANA registry for this. There are like four flavors of this, and it's kind of a bikeshed which one. But you could update the ALPN registry with the QUIC versions that each ALPN supports, if it supports QUIC versions at all; you could edit the QUIC version registry to say which ALPNs are supported; or, again, just as a minor variant to that, you could list the unsupported versions or unsupported ALPNs, kind of the inverse of what you see here.
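To make the registry options concrete, here is a small sketch of what option 3b (the QUIC version registry listing the ALPNs each version supports) might look like from a client's point of view. Every code point, name, and ALPN entry here is illustrative only, not anything IANA has registered.

```python
from typing import Optional

# Hypothetical shape of a version registry that records which ALPNs a
# QUIC version may be used with (option 3b). All entries are made up.
QUIC_VERSION_REGISTRY = {
    0x00000001: {"name": "QUICv1", "alpns": {"h3", "doq"}},
    0x6B3343CF: {"name": "QUICv2-placeholder", "alpns": {"h3"}},
}

def versions_for_alpn(alpn: str) -> set:
    """All registered versions this ALPN is allowed to run over."""
    return {version for version, entry in QUIC_VERSION_REGISTRY.items()
            if alpn in entry["alpns"]}

def pick_version(alpn: str, peer_versions: set) -> Optional[int]:
    """Choose a mutually usable version for an ALPN, if any exists."""
    usable = versions_for_alpn(alpn) & peer_versions
    # Picking the numerically largest is arbitrary here; a real policy
    # would rank versions by preference.
    return max(usable) if usable else None
```

The inverse variant mentioned in the talk would simply store unsupported ALPNs per version and negate the membership test.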
Q
I
really
don't
care
what
should
easily
do
hold
a
gun
in
my
head,
I'd,
probably
say
3b
or
it's
inverse,
but
regardless
next
slide.
Q
And
then
the
fourth
one
is
not
really
trying
to
do
any
of
the
sort
of
formal
tracing
and
just
like
when
people
write
drafts
and
do
versions
and
new
applications,
they
just
do
the
best
they
can
with
the
known
versions
and
applications
which
doesn't
make
the
forensics
a
little
harder.
If
you're
trying
to
figure
out
what
alpn's
do
you
know
when
you're
trying
to
implement
stuff,
so
you
have
to
kind
of
go
through.
Q
You
have
to
kind
of
search
through
a
bunch
of
documents
figure
out
what's
going
on,
but
it
does
kind
of
reduce
the
bureaucratic
burden
of
rolling
out
versions
and
or
alpns.
So
I'm
gonna
pause.
There's.
I
have
another
slide
about
the
other
two
drafts,
but
I
do
want
to
pause
here
and
get
some
comment
on
p
if
people
care
about
this
at
all
and
if
so,
what
they
prefer
next
slide.
Sorry
mike
riley.
W
So I'll note that previously we did tie the ALPN specifically to QUIC versions. I remain not wild about that choice, but we did have consensus on it. If we think we want to change that, I really don't like trying to enumerate all the things that are allowed to use a QUIC version in the QUIC registry.
W
3A
is
maybe
a
little
bit
better,
but
what
we
agreed
at
the
time
was
that
we
could
just
write
a
very
quick
draft
that
says:
you're
allowed
to
use
this
alpn
with
this
quick
version
too.
I
don't
know
that
that
necessarily
needs
to
be
any
new,
quick
spec.
But
a
really
quick
thing
out
of
the
http
working
group
can
expand
the
scope
of
the
h3
lpn
or
just
try
it.
I
mean
ultimately,
what
it
comes
down
to
is.
Does
the
quick
version
offer
the
features
the
application
needs?
Q
So
I
I've
been
back
to
the
minutes
to
find
this
consensus
you're
referring
to
mike.
I
will
note
that,
after
looking
at
the
documents,
I'm
I'm
not
sure,
there's
clear
text
in
anything
we've
adopted
or
or
published
that
that
that
really
binds
that
carefully,
that
combines
the
alpn
so
clearly
to
a
quick
version.
In
fact,
that
applicable,
applicability,
draft
code
text
that
I
listed
as
mira
mentioned,
we
actually
changed
that
to
be
a
little
more
loose
about
how
we
resolve.
W
Q
Right. So if QUIC v2, if the QUIC v2 draft exists and has its current language, which is that, yes, you can use h3 with this as well, I would argue that meets the text of that.
Q
K
QUIC v2 can just say this works for h3; I think that's sufficient, and I think that's really all that you would need to do in your v2 draft: just explicitly say that. I don't think you necessarily need an Updates header for achieving that, nor do I think the Updates header is going to end up being crazy, nor do I think that that's a real consideration. So I think this is all going to work out reasonably well just by writing some text.
D
I
So my concern, if we don't do anything, is that, as a client, there are multiple mechanisms that tell me which ALPNs the server supports: that can be Alt-Svc, that can be the HTTPS DNS record.
I
Given those, you know, if the user wants to browse to that origin, I need to send an Initial, and I'll know: okay, you support h3; I'll know that you support maybe QUIC v1, maybe QUIC v2, maybe one and not the other, maybe both. And if they're compatible, compatible VN means that we'll be able to go to what is actually compatible.
Q
Or
am
I
missing
something?
No
I
mean
that
certainly
is.
The
drawback
is
that
you
are
more
likely
to
end
up
in
version
negotiation.
That
is,
I
think,
why
I
guess,
a
plurality
or
the
majority
or
whoever
you
know
kind
of
thought.
We
should
tie
these
sticks
together,
and
you
know
none
of
this
matters
if
there's
only
a
couple
versions
in
the
long
run
but
or
applications.
But
if
there's
many
of
both,
then
then
I
think
we
run
into
some
practical
problems.
I
Either that, or, you know, we had some other solutions where in Alt-Svc you could say which QUIC versions were supported, for example, and then you'd need to do the same in HTTPS DNS records. So I think the simplest would be to define a new ALPN and then move on.
I
Q
Thank you. No one else? So the other question I have on this is: does anyone have any feelings about the disposition of this draft? Whether we should move forward and drive for adoption, or if it's just nice to go through the exercise and have a target for VN?
Q
So
the
other
draft
kind
of
in
this
space
or
there's
two
now
so
back
at
109
and
presented
vision
version
aliasing.
I
would
say
that
the
feedback
was
generally
pretty
positive
with
a
few
with
a
few
concerns,
and
but
there
was
not
much
feedback
after
the
presentation
you
know
on
the
list
and
so
on,
so
we
did
cut.
So
I
did
do
a
little
announce
the
fallback
mechanism
and
fix
some
problems
in
that
the
details
are
not
important
right
now.
Q
Another
concern
was
that
for
those
of
you
who
weren't
there,
essentially
you
get
parameters
in
the
first
connection
and
allows
you
to
encrypt
the
initial
protect
the
initial
and
subsequent
connections
and
people.
They
really
didn't
like
that.
First
connection,
bit
so
david's
kanaz
and
I
collaborated
on
a
draft
called
protected
initials
which
takes
some
bits
from
encrypted
client
hello
in
the
tls
group
and
applies
them
to
the
entire
quick
initial
packet.
Q
So
there's
this
chart,
which
I
think
is
in
the
protected
initials
introductions
kind
of
gives
you
the
pros
and
cons
of
each
approach
and
what
they
do
and
what
they
don't
do.
I
would
view
either
ech
production
initials.
Q
You
know
for
first
connections,
then
version
analyzing
for
subsequent
connections
is
kind
of
being
a
sweet
spot,
but
I
welcome
you
welcome
you
to
take
a
look
at
these
drafts
comment
on
a
list
on
what
you
think
is
useful
here
versus
not
useful,
and
we
can
proceed
from
there
thanks.
That's
all
I
have
thanks
cheers
and
everyone
else.
A
Thanks, Martin. I think, just with regards to the v2 question, I wouldn't take the lack of response right here as a hard no. I think we just need to do a little bit of thinking about that, in relation to how busy other people are with stuff. So yeah, let's chat about that a bit further offline. Okay.
Q
A
And with that, we're nearing the top of the hour, so we did have two more talks in the as-time-permits section, but sadly we're out of time for those, I'm afraid. So thank you for your efforts in making those slide decks; people can check them out.
A
It's
all
on
the
data
tracker,
if
we'll
see,
if
maybe
we
can
get
something
in
in
the
future
as
well
for
you,
but
we'll
have
to
revisit
I'd
like
to
thank
everyone
for
their
time
and
participation
on
this,
especially
our
minute
takers,
watson
and
robin
sounds
cool.
Doesn't
it
but
I've
got
some
sad
news.
That's
occurred
during
the
course
of
this
working
group
meeting
and
that
is
that
our
longest
serving
chair
lars
is
is
no
longer.
How
long
is
it
he
looks
happy,
not
sad
zahed.
F
Yes,
so
as
lars,
I
hope
you
can
hear
me
the
first
time,
I'm
talking
in
this
session.
Yeah
lars
wanted
to
have
some
more
time
more
room
to
do
his
itf
chair
job
so
well.
We
just
allowed
him
and
actually
nothing
much
to
say.
F
I
think
everybody
in
this
working
group
appreciate
the
guidance,
the
leadership
and
the
contribution
you
did
lash,
and
we
really
thank
you
for
your
time
and
you
will
be
here
as
a
participant,
I'm
pretty
sure
about
this,
and
if
we
need
we'll
bring
you
back
so
thank
you.
B
No, you won't. Thanks, all. This is... this has been great fun, from the BoF to here, five years. It was seriously probably the most fun standardization I've done in the IETF. So thank you all for participating and making this a great group, and with that we're closing the session, and I'm gonna get out of all the Slack channels that I shouldn't be in anymore. Bye, guys.
A
Thank you, bye. Yeah, I'd just like to reiterate a big thank you to Lars as well, personally: from, you know, being a part of the BoF, into coming into the fold of participating in the docs, to then being a chair and having the transition and all of that stuff. So yeah, you'll be missed, and we'll try and do our best to keep the working group going.