From YouTube: IETF106-QUIC-20191120-1000
Description
QUIC meeting session at IETF106
2019/11/20 1000
https://datatracker.ietf.org/meeting/106/proceedings/
A: Is the Note Well up? Good morning, by the way. This is the Note Well; hopefully you're all familiar with it. If you're not, you can find it using your favorite search engine by searching for "IETF Note Well". It's the terms under which we participate, regarding things like intellectual property, but also regarding things like our behavior in the room and on the mailing lists and in the hallways and other IETF-ish places. So please take a look; please learn more about it.
A: Are you volunteering, Ted? Your service to the community is above and beyond. Thank you. Thank you, Ted. Are you gonna use the Etherpad, Ted, or the Google Doc? Etherpad? If folks could help Ted out on the Etherpad — and if you're not actually helping Ted out, maybe refrain from getting onto it, because it doesn't like lots of people at the same time — that would be appreciated.
A: Thank you, dkg. Once again, we're not entirely sure the bridge from Jabber to Slack is working, so please bias towards Jabber. Agenda bashing: on our agenda today, we're gonna continue discussion of the issues. I think we're in a place where we can discuss just the HTTP and recovery issues. Today we have Becker interrupts.
A: Any other agenda bash — sorry, I didn't even finish the agenda. So: we had finished talking about issues, including that; talk about extension documents; apparently talk about datagrams first, then version negotiation, then load balancers; we end up with planning. Any other agenda bashing? Praveen — sorry, yeah, the acoustics in this room are horrific, so everyone needs to speak very close to the mic and very distinctly. Praveen.
A: As we've discussed, we currently have a call for consensus on getting the recovery and HTTP documents into the late-stage process. We are, as you can see, winnowing down the issues lists on all of the drafts, so there are very few left. And the anticipation, as we've discussed and tried to telegraph a number of times, is that the drafts are going to settle down from a technical standpoint — of the bits on the wire — but we will enter a period where we think we understand... Yes, mister —
A: Right, so good morning. I will point out I've only had one coffee this morning, and for someone from where I'm from, that's not any at all. So: we're hoping that the drafts will settle down from a technical standpoint. We also anticipate the editors taking that time to do some fairly substantial reworking of the documents from an editorial standpoint, to make sure they're communicating clearly — not changing the technical content, but changing how it's presented, so that it is readable and usable by people who haven't participated in this process.
A: At the same time, we feel that implementers are going to need some time to digest everything that's happened, to work on their implementations, to do interop work, and also to get some deployment experience. That's something that people want to see very much before we ship the RFCs — to make sure that this thing actually works in the wild. And so the tentative plan going forward is —
A: — that once we get the documents into good shape and we close these issues, we'll go to a working group last call, hopefully before the end of the year, as a way to notify the greater community that we think we're technically done. However, we're not going to request that they become RFCs for some time after that — very roughly speaking, something like the middle of next year — because we want to get that experience: give the editors a chance to work on the documents a bit, and get deployment experience.
A: So it may not be our last working group last call on these, and I know that that dissonance is harsh for some, but it is what we think will be our working group last call. Because of that, we're gonna have some capacity in the working group for discussing other things, and so we're talking about how we can start the discussion of extensions to QUIC.
A: So, for example, there's a BoF, I think later today, about WebTransport, and that's potentially a working-group-forming BoF — I believe no, not quite yet, but one may appear afterwards at some point — but we wouldn't want to do that work in this working group, is the important thing to convey. So I think that's where we're at. We have an interim scheduled in Zurich.
I: On this — Christian. Yeah, I mean, I like the idea of stabilizing the protocol before publishing the RFC and having some time to test and verify. However, if you delay pushing the document for publication, at the same time you're also pushing out the IETF review — pushing out the IETF review, yes. I mean — so, basically, if you're not doing the IETF last call, you are not getting the feedback from people who are not actually in the working group, and so that creates an uncertainty about the duration of the overall IETF process.
E: I think the working group coming out and saying something formal would be useful, instead of having a vague date-based deadline — like, target a particular draft where we say: hey, we're gonna freeze this for a little while so that people can deploy. I think that would be very, very useful. And regarding IETF process, I think that can go on in parallel: if you find any major issues from deployment, we can come back and, like, fix those. But I think having a very stable draft, formally declared, would be very, very useful.
J: Martin Thomson. I think, based on the issue list that we have in front of us and the discussion that we had here earlier this week, the next version of the draft might be the one that Praveen's looking for. I'm not 100% confident about that, but I think we should certainly try to make that the case. We have a long time between now and the interim meeting, and I think the editors are pretty firmly committed to nailing all of those issues.
J: The IETF review is largely based on directorate reviews in the modern day, and so simply going to each of the directorates now and asking for review — which is a practice that most of the directorates already support for various things — is sufficient and will get us some wider review. And I would say that the steps I would like to see happen: we finish the next draft —
J: What's going to happen there is that we will have 25, and then 26 will contain a whole bunch of editorial changes based on review and working group last call feedback, and 27 will be directorate feedback, and then we'll sit on that. But 25 and 27 should be functionally identical; if we're changing something there, then that's a pretty high-energy event. That's maybe optimistic, but that's the sort of thing that I would like to see happen.
K: So, I mean, you just click the button — I think that gets us exactly what we want. If there is further, like, discomfort — like, if there are whole areas of the community that aren't in this room, so maybe there are people who won't write a review — I know that we do not generally use ietf@ietf.org for technical discussion, but there is a mailing list where we can say: hi, by the way, you know —
K: Maybe you've heard we're, like, replacing TCP with TCP — sorry — yeah, yeah, it's like: last call coming up! Great. Like, a last call preview. Can we actually send that to last-call@ietf.org? I mean, we can also send it to — yeah. I mean, yes, it's almost last call; that's, like, you know, a preview of last call: please start reading the document now, it's big and important. Yeah, so, I mean, we have a lot of options here to pipeline this, and I —
L: Hi, this is Sean Turner, just driving the point home: we've kind of had some running code. We did with TLS 1.3 — I don't know that we did it perfectly, but you can tell the ADs; the ADs talk. You can request the early reviews. You can send the messages to everybody. You can tweet it. You can be like: look, this is coming; if you want your chance, here's the GitHub repo, knock yourself out. I mean, there are ways that we can do this. I do not think we should try to get it —
C: Plus-one to what Sean said: do other things, and people will still show up at the last minute; you know how it is. I think that running code as well — I think this generally seems like a solid plan. I'm probably the person who thinks that calling it "last call" when we know it's, like, not the last call is pretty silly, but, you know, what it's —
C: — shed. Thanks, David. And the reason that I bikeshed is that often there's a presumption, when we do a last call, that if issues are not raised in the last call, then they're out of order when they're raised later, and so on. I think Praveen's entirely right that it's good to have a last call.
C: Let's try to flush out as many of these as we can, and, like, people will spend some time, you know, doing testing and doing some experimentation, and that will, you know, turn up some things about the change, and those issues will obviously be in order. But what also needs to be in order is: upon reflection, having looked at this longer — not from data, but from looking at this longer — there are things which are not good, and we can start changing them.
C: My point is, those are still in order at that point in the process, where they would not be in order ordinarily. If you were sitting in a working group for two years with the last call and, like, you raised it at IETF last call, people would think you were a jerk. And so — but that's the reason, that's the reason you're hearing resistance to the last call.
N: David Schinazi, Google. Just want to quickly say this is a good plan; I like it, especially from our implementation side of things. The past couple of years have really felt like running after a moving target, because we're like: we have QUIC — oh, they changed the header format, all right, run, change that; oh, they changed this — and we're almost there now. And so this quiet period is gonna allow us to actually catch the target and actually put it in production and get numbers that we'll share with the group.
J: So — Martin Thomson — someone was going to talk about those two specifically. This quiet period will allow them to catch up as well, because, for those people implementing, chasing the moving target has been difficult for those documents, and I think it's probably a good idea to allow that to settle. It might be, then, that we can publish all seven documents at the same time, which would be a really nice thing to be able to do.
A: One of the important takeaways that people should have here is that if you do think you have an extension — one that's generic, not specific to an application — that you'd like to propose for QUIC: write a draft, bring it to our attention, and we'll reserve time for it. If it sees some discussion on the list, we'll think about whether we want to adopt it, and then we're going to start that mechanism going. But this working group will, we think, be the locus of that. And one thing I should mention — I'll put an email on the list shortly.
A: If we do do that — and I think, from talking to the area directors, this is the direction we want to go in — we're going to need to make a charter change. Our charter currently says: thou shalt not work on a datagram extension. Whoops. So we need a slight chart— okay, that's not them, that's me. We'll need a slight charter change to enable us to work on those extensions. We'll put a proposal out for comment, the area directors will change the charter — do whatever they need to do — and we'll move forward.
N: Okay, so let me summarize this. First, thanks, Kazuho, for finding this issue. In short, most packets in QUIC are encrypted, and in particular they're integrity protected. That's kind of a slight side effect of the crypto, but what it also means is that if you get a random bit flip on the network, it's the same as an attack: we detect it and we drop the packet.
N: One exception, though, is Retry packets: those, unlike Initial, don't have any kind of protection, and if the retry token gets a bit flip on it, we don't notice — especially if the UDP checksum is disabled. That's particularly bad, because if the retry token is invalid, the server will reply with a second Retry, and according to the spec the client must not, like, do that dance the second time. So then the connection is dead in the water.
N: Everyone already has an AES-GCM implementation for Initial packets, so — yep, right there, perfect — we just reuse AES-GCM-128. The key was initially zero, but Kazuho rightfully pointed out that it might be nicer to have one that changes, and then Martin Thomson had the good idea to just reuse the initial salt — okay, well, then, get in line to say that — and so we use that key.
C: Right, so I guess two points, as listed there. First, on the cryptography — which, I mean, I know this is a fine point: we shouldn't use the initial salt as the key; we should derive — we should HKDF something off the initial salt. I understand, like, this is not — I'm waving hands vigorously about how it's really secure anyway — but it's nevertheless good practice. On the more substantive point — I see nodding, so that's fine, that's settled.
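The "HKDF something off the initial salt" suggestion can be sketched with a plain HKDF-Extract/Expand built from the Python standard library. The salt and label below are illustrative stand-ins, not values taken from any draft:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand (RFC 5869): iterate HMAC until enough output bytes
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# An Initial-style salt from a contemporary draft, used illustratively here.
initial_salt = bytes.fromhex("c3eef712c72ebb5a11a7d2432bb46365bef9f502")
prk = hkdf_extract(initial_salt, b"")
retry_key = hkdf_expand(prk, b"quic retry key", 16)  # a 128-bit AES key
```

Because the inputs are fixed, implementations could compute this once and bake the resulting key in as a constant.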
C
It's
a
constant
any
case
on
the
on
the
issue
that
I
think
that
Dave
and
I
were
forth
on
is
whether
any
of
this
material
should
be
encrypted,
in
particular
the
retried
token
be
encrypted,
and
so,
as
I
I
think
I
think
my
claim
is
that
encrypting
it
has
the
nice
property
that,
like
two
centuries
there
is
any
structure
each
I
took
in
France,
ossification
Martin's,
counter-argument
I
believe
is
that
recall
was
that
on
wait.
C
What
so
I
think
they're
two
arguments
while
you
might
want
there's
one
or
why
you
might
not
want
to
do
it,
which
is
it
costs
the
cost
in
AES
or
operation
or
two
for
the
GCM
computation
further
for
the
AAS
computation
on
the
the
martin
counters.
I
believe,
with
this
will
be
a
fixed
fixed
mask,
so
you
can
just
X
or
the
fixed
mask
which-
and
so
this
is
there's
no
performance.
There's
no
performance
consequence
really.
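The fixed-mask observation can be sketched as follows: with a fixed key and a fixed nonce, the keystream is a constant, so "encrypting" the token degenerates into XOR with a precomputed mask. The mask bytes here are a stand-in, not real AES output:

```python
from itertools import cycle

# Stand-in for the constant keystream AES would produce under a fixed
# key and fixed nonce (illustrative bytes, not actual AES output).
FIXED_MASK = bytes.fromhex("8f3b21c4d95e07a61b42")

def xor_mask(data: bytes) -> bytes:
    # XOR with a repeating fixed mask; applying it twice restores the input.
    return bytes(b ^ m for b, m in zip(data, cycle(FIXED_MASK)))

token = b"opaque-retry-token"
obscured = xor_mask(token)
assert xor_mask(obscured) == token  # XOR is its own inverse
```

This is why, once the mask is precomputed, the per-packet cost of this option is negligible.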
C
The
I
have
two
ways:
there
are
two
the
most
there's
some
benefit,
because
it
means
it
means
it's
quite
a
bit
more
work
for,
like
for
someone,
doesn't
in
the
draft
to
see
the
structure
of
data
on
the
on,
because
all
you're
getting
because
you're
getting
X
or
something
unknown
on
now,
I,
where
I
was
going
to
say,
is
there's
actually
two
ways
you
go
here.
One
way
it
all
three
ways:
one
way
is
the
current
thing
we
recycle
it
at
all.
C
The
second
way
is
the
fixed
AES
key
with
a
fixed
mask
which
turns
into
fixed
masks,
so
you
can
on
which
has
additional
advantage
by
the
way
that
it
means
you
don't
like
redo.
It
means
you
can
use
exactly
the
same
computations
you
ordinarily
would
do
for
backup
reduction
on
the
on
one
more
thing,
that's
worth
adding
is
that
if
you
were
willing
to
accept
the
AES
operation,
then
you.
N: David Schinazi, Google. ekr, I get that you have really good instincts about cryptography, and that key reuse is bad and a big red flag that should just make your knee twitch; but when the key is public in the spec, the implications are a lot less interesting, and therefore this really becomes a beauty contest. In my personal opinion, the thing you're trying to encrypt is the retry token, which the server encrypts with an actual key that is not public.
N
So
maybe
we
just
need
to
say
that
the
shark
of
structure
should
not
be
visible.
We
already
say
that
the
retry
token
must
be
distinguishable
from
a
new
token.
For
example,
the
there
are.
We
could
have
built
this
similar
to
initial
meaning
that
you
do
in
HK
DF
based
on
the
connection
IDs
and
that's
is
better
for
ossification
Nick.
Banks
was
scared
of
that
because
he
thinks
that
we'll
have
really
bad
performance
Beck's
on
his
box
that
sends
the
retry
as
Google
I.
Don't
think
we
care
much,
but
that's
a
reasonable
thing.
N
At
the
end
of
the
day,
ossification
of
Paulo
bites
that
are
already
encrypted.
Is
not
something
I
worry
about,
so
from
my
perspective,
this
is
a
bike
shed.
We
really
need
the
token
sorry.
We
really
need
the
integrators
check
at
the
end,
but
what
we
use
for
the
key-
and
if
we
put
the
token,
is
a
complete
bike
shed
I'm
totally
in
favor
of
flipping
a
coin
and
being
done
with
this
and.
E: I'm gonna cut the queue. Praveen: I just had a clarification question. The UDP checksum is only optional for IPv4, because it's protected by the IPv4 checksum; on IPv6 it's mandatory. So my question was: are we trying to get protection for the packet beyond what is provided by the IP or UDP checksum here? I'm just trying to understand why we need to do this.
N: The crazy thing is: as a client application, you do not know what IP version your packet walked over. Even if the socket you're using is IPv4 or IPv6, you could have a NAT64 in the network. There are a bunch of NATs in force today where, if they see an IPv6 packet with a checksum, they will rewrite it to v4 and clear the checksum. So there are cases where, like, you will not get it, and you have zero control over that from the application. So I think there's really a use here.
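The UDP checksum under discussion is the 16-bit one's-complement Internet checksum of RFC 1071. A minimal sketch of the core folding step (pseudo-header handling omitted):

```python
def internet_checksum(data: bytes) -> int:
    # RFC 1071: one's-complement sum of 16-bit words, then complement.
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # end-around carry
    return ~total & 0xFFFF

payload = b"examples"  # even length keeps the demo simple
ck = internet_checksum(payload)
# A datagram carrying its correct checksum verifies to zero.
assert internet_checksum(payload + bytes([ck >> 8, ck & 0xFF])) == 0
```

In IPv4 a transmitted value of zero means "no checksum computed", which is exactly the disabled case being discussed.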
J: Martin Thomson. I wanted to ask the question of whether anyone thought this was critical in some way. I don't see any indication that people think that this is a hill worth dying on; it's just that we have a preference from some people not to have it encrypted, and a preference from others that it be encrypted.
J: One of the key observations here is that we don't want to run HKDF, and we don't want to install a different AES key, because both of those operations are expensive enough to show up in tests — and Nick had some pretty good numbers on what it cost to send Retries with the full HKDF plus AES key installation and all those sorts of things. So I think we have agreement on that. But ekr's point about the three points in the spectrum that we're discussing is — I —
J: For instance, use the destination connection ID — the first eight bytes thereof, for instance — as a nonce, so that you get a different output, a different mask, for the token. And so I suggest that we just do a beauty contest on those and see what people think, because I don't think anyone's particularly concerned about one versus the other here. Oh yeah, that's — that's a good question: does everyone think they understand the three options?
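The diversifying-nonce variant can be sketched like this: keep one fixed, public key, but mix the first eight bytes of the Destination Connection ID into the nonce so the mask differs per token. HMAC stands in for the AES keystream here, and the key is an invented constant:

```python
import hashlib
import hmac

FIXED_KEY = b"illustrative-fixed-key"  # public and fixed in the spec under this scheme

def token_mask(dcid: bytes, length: int) -> bytes:
    # Per-connection mask: the first eight bytes of the Destination
    # Connection ID act as the diversifying nonce.
    # (Sketch only: handles tokens up to 32 bytes.)
    nonce = dcid[:8]
    return hmac.new(FIXED_KEY, nonce, hashlib.sha256).digest()[:length]

def protect(token: bytes, dcid: bytes) -> bytes:
    # XOR the token with its per-connection mask; self-inverse.
    mask = token_mask(dcid, len(token))
    return bytes(t ^ m for t, m in zip(token, mask))

token = b"retry-token"
masked_a = protect(token, dcid=b"\x01" * 8)
masked_b = protect(token, dcid=b"\x02" * 8)
assert masked_a != masked_b              # different DCID, different mask
assert protect(masked_a, b"\x01" * 8) == token
```

The point of the sketch is only that, unlike the fixed-mask option, the same plaintext no longer encrypts the same way across connections.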
P: Kazuho. Regarding the three options that Martin Thomson mentioned: between the first two options, my preference would be to go for just doing GMAC, compared to doing the XOR. The reason is that endpoints can retain a pre-generated XOR mask for one initial salt, but if we are going to have multiple salts based on the versions, then it becomes increasingly difficult to pre-generate all the XOR patterns for each draft. Therefore, regarding the first two options, I think the first one makes more sense.
J: So the thesis here is that the costs involved in Nick's experiments — yes, the cost involved in those experiments was largely due to the fact that you have to run HKDF a number of times in order to get the keys, and you have to install the AES keys: when you get a new AES key, you have to run a number of operations in order to get it into a state where you can use it. And so those costs tend to dominate in these things.
J: But the thesis is that running the AES operation to XOR those bits is very, very cheap, and when you're talking about using AES-GCM, it basically costs nothing, because you have to touch those bits anyway to get the cheap GMAC. And so that's the thesis that we're operating on here. You know, there is a cost, but I would suggest that it's very hard to measure, because of the way that this all works.
C: Okay, this is actually making me worry about this a little bit. All of these fix the problem — they're different, like, colors of the fix. As you all have found, the first option is the one in the PR, which is to say: just an integrity check; nothing is encrypted.
C: The second is an integrity check, plus things are encrypted, but with a completely fixed key, so that basically the encryption is the same: the same plaintext will be encrypted the same way every time. The third option is an integrity check plus a form of encryption with a fixed key but with a diversifying nonce, so that the same plaintext will be encrypted differently each time. Those are the three options.
C: That's what I'd do. So the technical differences — it's just a matter of how you judge those, but basically there's a question of: do you weigh the performance cost of the AES against the somewhat improved obfuscation from the encryption? That's what you're — but — George? Okay, the queue is still cut.
E: Like, deployment-wise, yeah, crypto will add cost, right? So the question here is: if you're only looking for an integrity check, then it seems overkill to me to, like, go crypto. So I have a strong preference for the — sorry, I have a strong preference for the no-crypto option. Oh, that's very hard to tell, because it would need to be deployed and measured, right? So we only have rough measurements. Okay, so you do —
J: Yeah — so, Martin Thomson. That assertion is based on an assumption that this is expensive, and we don't have those numbers. Nick produced numbers for the other thing, which was definitely much more expensive, but I don't think we can say, based on just that, that AES crypto is more expensive when we were already running GHASH; running AES alongside that may not cost anything noticeable. I don't —
N: As owner of the issue and writer of the PR — really, my opinion on this is irrelevant. What I'm getting from the room is that some people assume that one is better because they think something is cheap; some people assume that the other is better because they think something is expensive. No one's willing to, like, lie down on the road kicking and screaming on this.
F: If you look at the list on the PR, there's a bunch of approvals. At the same time, there's been the discussion that just happened here, and this is exactly coin-flip territory; I don't think that you're gonna find resolution there. Yes, it was proposed already, based on what we had earlier, and now there's more discussion that's causing people to maybe think that we could do something slightly different — but it was proposed already. We can always open it again later, as David says, if we have data.
T: Mike Bishop. If I can offer a suggestion: as a previous comment noted, the distinction between these two is that one clearly offers a little bit of benefit, and there's a question about what the cost is. Would anyone volunteer to get the data on the cost and see if there actually is a difference?
A: Thank you, yeah. Our scribe is reminding us to say our names when we're speaking. So, let's move on to the other documents. Let's start with recovery, I think. So, recovery editors: we have six open issues. Are there any that you need to get some input from the working group on and want to discuss?
K: I have a highlight at the end that I think is the question — at least, I want guidance on it; it actually mostly comes down to pseudocode. For the working group: the proposed PR attempts to include a mechanism to limit the congestion window increase during slow start when not pacing. By itself, this does not actually fully guarantee the MUST that we have in another normative section. My opinion is, I would rather not add pseudocode for cases that we don't recommend.
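The trade-off being discussed — pace if you can, otherwise cap the burst — can be sketched minimally like this (the `MAX_BURST_PACKETS` constant and function names are illustrative, not taken from the draft):

```python
MAX_BURST_PACKETS = 10  # illustrative burst cap for unpaced senders

def sendable_now(cwnd_packets: int, in_flight: int, pacing: bool) -> int:
    # How many packets may go out in this burst.
    available = max(0, cwnd_packets - in_flight)
    if pacing:
        # A pacer spreads cwnd over the RTT, so no extra cap is needed here.
        return available
    # Without pacing, limit the burst so a full cwnd isn't dumped at once.
    return min(available, MAX_BURST_PACKETS)

assert sendable_now(40, 0, pacing=True) == 40
assert sendable_now(40, 0, pacing=False) == 10
assert sendable_now(40, 35, pacing=False) == 5
```

The question in the room is whether the unpaced branch deserves pseudocode at all, given that pacing is the recommended path.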
K: We say you SHOULD pace packets if possible — and you MAY not — and if you do not, then you need to limit bursts. But if people would like pseudocode for MAYs as well as SHOULDs, then we need to kind of revisit that and expand the pseudocode quite substantially. So this is not actually a normative issue, but I'd like some guidelines on what the expectation is: does the pseudocode need to cover all the things you MAY do?
K
For
example,
there's
no
texture,
no
pseudocode
right
now
around
detecting
packet
reordering
and
increasing
the
pakery
ordering
threshold.
As
a
result
of
every
rank
and
that's
another
one
that
I
can
see
adding,
but
it's
fairly
complex,
and
so
it's
a
fair
bit
of
work.
Yeah,
sorry
ian's.
What
Google
said
that?
But
I'd
like
some
guidelines
here,
is
really
what
I'm
asking
for
not
only
about
this
PR
but
about
going
forward
and
I.
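The reordering-threshold adaptation mentioned here can be sketched as follows: when a packet declared lost later turns out to have merely been reordered, raise the packet threshold. The starting value of 3 matches the draft's default; the adaptation rule itself is an illustrative policy:

```python
class ReorderingDetector:
    def __init__(self) -> None:
        self.packet_threshold = 3  # draft default, kPacketThreshold

    def on_spurious_loss(self, reorder_distance: int) -> None:
        # A "lost" packet was acked after all: it was reordered, not lost.
        # Grow the threshold so this reordering distance no longer
        # triggers a spurious loss declaration (illustrative policy).
        self.packet_threshold = max(self.packet_threshold, reorder_distance + 1)

det = ReorderingDetector()
det.on_spurious_loss(reorder_distance=5)
assert det.packet_threshold == 6
det.on_spurious_loss(reorder_distance=2)
assert det.packet_threshold == 6  # never shrinks in this sketch
```

As Ian notes, a production version is considerably more involved, which is the crux of the "how much should the pseudocode cover" question.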
F: Go ahead — Jana. Jana Iyengar. So I'll make just a couple of notes. The pseudocode lives in the appendix; that's by design. The reason it's in the appendix is because it doesn't have any normative status: the text is normative; the pseudocode is merely guidance to implementers about how to implement the text that's already in the draft. And to the extent that we want to add stuff or not add stuff there, that's really up to us — it's all guidance in there anyway.
V: Kinnear. So, I think it does say that we SHOULD be pacing, and so it feels as though we should have a copy of the pseudocode which is kind of the main "this is what you are expected to do". And, to Jana's point, if there is an implementation, or a group of implementations, where we know that there's kind of another branch that can be taken in terms of what the implementation is doing, it seems fine to have a section in the pseudocode for "here is kind of the alternative".
U: Gorry — Gorry Fairhurst. I would agree with what was just said, and in this case we have 4960, which has some pseudocode for another protocol to kind of build on anyway. So let's keep the main path really clean on the SHOULDs, and if you want to do the MUST otherwise, that's fine — but there is already a reference for how to do it on this one.
M: All right — Andrew McGregor. I kind of agree with what people are saying about the pseudocode, but we should keep this documentation, because it's entirely possible that an implementation may have pacing support that isn't always functional, and then it needs to know what to do — we need to know what to do when it's not working.
K: It's clear to me that the working group is split, and in not so different a way as Jana and I. So, yeah, I think this is useful; we can keep going. Praveen did want to note that there were a few issues that didn't get a design tag yet, because we are in this limbo where I'm not really sure we're in the late-stage process and I haven't tagged things as design. So you might want to go through and see if there are things that aren't tagged with either editorial or design.
F: This is worth noting, because we are changing something — or there's a proposal to change something here — that we had agreed upon earlier not to do. We had agreed earlier in this working group that we wouldn't have normative references to TCP RFCs for the mechanisms that we use here. However, Gorry has noted in his review that we don't have any strong basis, outside of these references, for some constants that we use.
F
For
example,
we
use
the
69
28
value
of
initial
initial
window
of
10
the
basis
for
that
is
our
fc-69
28,
which
is
a
tcp
RFC.
Similarly,
we
use
reordering
threshold
of
3.
That
is
again
I,
can't
remember
which
one
that
is
so
that's
one
issue,
that's
the
first
one
30
to
45.
So
there's
a
there's,
a
suggestion
and
a
proposal
in
a
PR
there
now
to
make
that
RFC
normative.
It
seems
completely
reasonable.
F
The
argument
is
that
6
928,
even
though
it
was
written
written
for
TCP,
the
only
discussion
there
is
about
the
size
of
the
initial
window
and
to
the
extent
that
the
size
of
the
initial
window
matters
it
doesn't
matter
what
the
protocol
any
uses
it's
about
a
network
load.
So
we
should
be
able
to
use
that
RFC
directly
and
use
that
as
the
basis
for
making
our
decision.
I
won't
talk
about
80-85
as
well,
but
I,
let
it
finish.
6
9
20
at
first.
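For reference, the recovery draft sizes the initial congestion window along the lines of RFC 6928's IW10, capped in bytes. A sketch of that computation — the 14720-byte cap is the value I recall from the draft, so treat it as an assumption:

```python
def initial_window(max_datagram_size: int) -> int:
    # IW10 adapted from RFC 6928: ten datagrams, but no more than
    # 14720 bytes, and never less than two datagrams.
    return min(10 * max_datagram_size,
               max(2 * max_datagram_size, 14720))

assert initial_window(1200) == 12000   # ten datagrams win
assert initial_window(1500) == 14720   # the byte cap wins
```

This is exactly the kind of constant whose justification currently lives only in the TCP RFC being discussed.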
F: So, that raises an interesting issue which we may have to deal with later on, which is that this is a standards-track document, and 6928 is — well, it's not a standards-track document, it's experimental — so we'll be putting a dependency on that. Maybe that's all right; I'm happy to figure it out later on, if —
C: That seems like a great solution — right, I mean, I thought that sounded like a great solution. You know, the purpose of the text is to explain what the heck we're doing, and, like, to point at a document that explains it. If it's not a normative reference, it's just like citing a paper: you say "we set it to this value; go read this paper to find out why."
K: If people have time right now — if they care about the handshake deadlock prevention stuff — I would request that they take a look at 3161 and the attached PR, and give me feedback as to whether I should just close it with no action or not. We don't need to discuss it now, but I ask for a review just because I think only two or three people have looked at it. And then the other issue is, I think, one Praveen had.
E: Praveen. I just opened a few recently, so I would request the editors to review them — and there are other folks' as well. There's at least what seems to be one safety issue that I found in the app-limited case, and there are two deviations from the TCP calculations for a certain timer. So we should reconcile those.
T
So
this
one
is
notable
and
that
it's
requesting
something
that
is
kind
of
new
to
HTTP
and
kind
of,
not
so,
basically
in
each
one.
You
have
headers
before
the
body
you
have
trailers
after
the
body
and
if
you're
doing
chunks,
you
have
chunk
extensions,
trying
to
sort
out
which
are
their
own
thing
in
h2
we
don't
have
chunky
extensions,
but
you
can
send
headers.
You
can
send
trailers.
This
is
a
request
that
we
remove
the
restriction
that
there
only
be
trailers
after
the
body
and
say
that
you
can
send
trailers
at
any
point.
X: Go ahead — Patrick McManus. On its face, this is perfectly reasonable. This was perfectly reasonable also in H1, and, as I recall, we didn't add a hook for it into h2 because it was never used. And given that no one's clamoring for it this time — can you speak up just a tiny bit? — and given that there's no strong use case presented here, I think we ought to really let that decision ride.
W: Just because of the chartering discussion: the charter must change, because it's taking over a protocol that's very important to the internet, and we can't have continuing seven-year blocks of time in which we cannot adjust the basic protocol because the working group that's working on the next version doesn't want to address changes that might have come up for the previous version. Okay, I just don't —
N: I would much rather see this as an extension. Look, like we've been doing with QUIC and HTTP/3, popular extensions get ported into the next version of the protocol. This is not a popular extension of h2 that warrants putting into h3 — and, on top of that, this would require some work which I'd rather not have to do. So I strongly encourage this to become an extension, and if it gets widely used, then we can add it to the next revision. — Victor, the queue is cut.
A: So I think we're hearing, you know, that the impetus for this has to come from the HTTP working group. And, I think, from a charter standpoint — you know, we're chartered, I think, to allow "the semantics of HTTP" — that's been a problematic phrase in the charter for a while, because really the semantics of HTTP/2 are driven by the generic semantics of HTTP — and so this is probably an HTTP core issue.
A
If
those
semantics
are
excuse
me
clarified
in
HTTP
core,
we
can
choose
to
bubble
those
into
HTTP
three,
and
that
would
be
the
path
I
think
that
would
be
most
sensible
for
this
I.
Think
personally,
I
agree
with
Martin.
This
is
this
is
pretty
cool
I.
Think
a
lot
of
people
can
imagine
use
cases
for
this
if
we're
commonly
available.
What
I
like
about
it
is
that
it's
backwards
compatible
with
existing
api's.
A
You
know
it's
there's
one
thing
about
the
wire
serialization
and
decoding
the
frames
and
everything,
but
there's
another
thing
about
how
do
I
bubble
this
up
to
applications
and
if
my
API
doesn't
support
having
trailers
in
the
middle
I
can
just
collect
them
and
then
make
them
available
at
the
end.
If
the
trailer
supports
normal
trailers,
the
API
supports
normal
trailers,
so
I
think
it's
deployable.
But
let's
have
the
discussion
in
the
HT
working
group
Roy.
T
I think most of the others have pending resolutions, or at least assignments. So if you have an issue assigned to you, please make progress on that PR — I would appreciate it — and otherwise I think we might be better served by giving the other presentations. (Sorry, I didn't catch that.) We might be better served by the other presentations. Okay.
V
All right, we're gonna try to keep this decently short, because it's fairly straightforward and we've got a lot of other good stuff to do. I'm gonna try to get through all the slides, and then we'll talk a little bit at the end. So we've talked about this before, and we've had some side meetings and other things in which many of you have engaged.
V
We also note here that QUIC provides functionality beyond what you just get over DTLS and UDP, a lot of that being the multiplexing of streams. And the last reason we really wanted to do this was because it's a very straightforward extension that lets us grease and use the extension mechanism and make sure that it's working. Next slide, please.
V
That comes with the fact that QUIC generally is going to work pretty well for this. For example, DTLS retransmits handshake packets on a fixed timer, whereas QUIC is going to use the full mechanisms of QUIC to get packets to the other side. There's also a bunch of other QUIC features that are really nice: things like transport parameters and the ability to negotiate things up front, and having acknowledgments of the datagram data.
V
So even if it's not retransmitted, you can tell whether or not it got there, and potentially adjust your sending rate or make other choices based on that information. You can also multiplex additional content over the same transport: you have the ability to have your control stream, have some other content going, and bring up multiple different streams' worth of data, one of which is this unreliable datagram. Next, please. The design has simplified greatly since we first started talking about it, so thank you all for your feedback.
V
There are now two frame identifiers, and it's really just the last bit that determines whether or not a length field is present. If the length field is present, you have a DATAGRAM frame type, followed by the length of that datagram, and then whatever data you want; if it's not present, the datagram extends to the end of the packet.
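The frame layout just described is simple enough to sketch. The following Python sketch parses a DATAGRAM frame under the assumption that the two frame types are 0x30 and 0x31 (illustrative values; the low bit flags an explicit length field); `parse_varint` implements the standard QUIC variable-length integer.

```python
def parse_varint(buf, pos):
    """Decode a QUIC variable-length integer (2-bit prefix gives the size)."""
    first = buf[pos]
    length = 1 << (first >> 6)          # 1, 2, 4, or 8 bytes
    value = first & 0x3F
    for i in range(1, length):
        value = (value << 8) | buf[pos + i]
    return value, pos + length

def parse_datagram_frame(payload, pos):
    """Parse a DATAGRAM frame starting at `pos` in a decrypted packet payload.

    Frame types 0x30/0x31 are assumed here for illustration: the low bit
    says whether an explicit Length field follows the type. Without a
    length, the datagram data extends to the end of the packet payload.
    Returns (datagram_data, position_after_frame).
    """
    frame_type, pos = parse_varint(payload, pos)
    assert frame_type in (0x30, 0x31), "not a DATAGRAM frame"
    if frame_type & 0x01:               # LEN bit set: explicit length follows
        length, pos = parse_varint(payload, pos)
        return payload[pos:pos + length], pos + length
    return payload[pos:], len(payload)  # no length: extends to end of packet
```

The length-present form lets a DATAGRAM frame share a packet with other frames; the length-absent form saves a byte or two when the datagram is the last (or only) frame in the packet.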
V
In terms of the details we've hammered out since we started talking about this: DATAGRAM frames are ack-eliciting, and they are not retransmitted. PING is the same way, but in at least our implementation we discovered we'd made some interesting assumptions about what ack-eliciting meant in terms of recovery and congestion control. So we need to make sure we keep the distinction between frames that are retransmitted and frames that are not, separately from frames that are ack-eliciting and frames that are not. They also do not contribute to flow control limits.
V
However, they are congestion controlled — you've got your congestion control gating your pipe of data to the other side of the connection — but there is no sane way to do flow control here, so we took that out. Flow IDs are also gone; thank you for your lovely feedback. However, they didn't go super far: David has written up a draft that adds flow IDs into HTTP/3 datagrams, which can live on top of this. It's pretty much exactly the same, but it solves a lot of the flow ID questions.
V
We were really happy to see that at the hackathon we actually got some interop on this, and there are several more implementations even since I wrote the slide. I know I've been looking at adding datagram support because it should be pretty quick, but we've got several that are already doing it. Also, thank you folks for the Wireshark dissector screenshot — that's pretty fun: you have a little datagram thing, and then there's a blob that currently says "quack".
V
So far the feedback from implementers has been that it's not super difficult to add support for this frame type. There have been some really good discussions around the API and how you let an application know that something was probably lost or certainly acked. Those are good things to keep in mind, but they don't actually affect the on-the-wire bits at all. Next slide, please.
A
Just to be clear, you're suggesting a call for adoption just for the QUIC datagram document, not the HTTP/3 binding, correct? Right. There's an interesting question there: when we do get to the HTTP binding, whether that's done here or in the HTTP working group, because our intention has been to hand off h3 to them after we're done. Exactly, yeah. Mike?
T
Mike Bishop: This is not a question directly on datagrams, but something that datagrams brings up. I noticed that you reserved two adjacent frame types so that you could embed a flag in the type, just like the STREAM frames do. Do our new reservation policies that require random assignment allow for that?
T
Good question. I think we just changed the registration policies for — well, okay, we discussed this in Cupertino and we have a PR, and part of the guidance to IANA in that is that we randomly allocate. That's not — are you allowed to randomly allocate two adjacent code points under the current text, or —
S
So I think there are a lot of hidden problems here that we've suffered through in the past. Again, I'm sure we're going to adopt this, and I bet people are going to figure this out, which I guess is a good thing, because then we can replace it with something better. But it might be a good idea to at least put some text in about the fact that intermediaries with no flow control is a recipe for fun.
A
So you think we should adopt it? Yeah — and I think we should adopt it; it's good work. I've no opinion on timing; I guess that's for other people who care more about this. I just want to say that there are interesting security implications that come from this particular extension. As a watered-down example of something that might go wrong: imagine an application is using a DATAGRAM frame to send a very specific, sensitive message, like "fire the missile", and you have an attacker who's randomly dropping packets and waiting to see whether or not things are retransmitted.
Z
And I think this is important work to do now, especially because I think it's going to surface things that need to be fixed in other drafts, especially recovery. Presumably a big user of this will be media applications, and the flow control and congestion control sections of recovery — and even this draft — need a lot of work. There are a lot of conflicting and contrary things, and it needs to go broader than just this group; there are other groups in other areas that need to weigh in on this too.
O
Martin Duke, F5: very exciting and, I think, important work. My only concern is bandwidth. We're starting to punt things into extensions that we really, really need soon after v1 ships to make QUIC work properly — like version greasing, like maybe this version association thing, etc. — and I think we can probably walk and chew gum at the same time, but there are a lot of key resources that need to commit to support working through all these things simultaneously.
AA
Hi, Colin Perkins: I think as a minimal datagram extension this is a very reasonable thing, and I think we should adopt it. Before we finalize it, I would be interested to see how people are using datagrams and what people are building on top of it. We may find that a slightly less minimal datagram extension turns out to be a more appropriate design once we have a little more experience — flow IDs being the obvious thing in that space. Thanks.
K
Ian, thank you. I am Brian; we are both from Google, although entirely different parts of the company. I basically came up to say what was said before — what Ian said, and then what Lucas said. So: do it, do it now.
F
Jana: I support adoption of this document, but it is a bit of a foot gun, like Roberto says — though I think it's an obvious foot gun; we all knew this was coming. That said, I will make this one point: I think this is useful anyway — people are already implementing it, so we actually get to exercise the extension mechanism, and I think that's super useful. We don't need to prioritize this, because the functions here aren't critical.
N
My name is David Schinazi, I work at Google, and I'm here to talk about QUIC version negotiation. This draft is joint work by Ekr and myself. Next slide, please. So, first point — wait a minute, what are we talking about? Doesn't QUIC already have version negotiation? Yes, to some extent: the QUIC invariants define a Version Negotiation packet — it uses version zero and it sends a bunch of versions. In QUIC v1 — that is, the QUIC transport draft —
N
— it says that if you receive one of those, you fail the connection and report back up to the application that QUIC is broken: we cannot use QUIC because the server doesn't support this version. That's fine when the only version in the world is version 1, because if the server doesn't support it, then you're dead in the water. But what if another version were to come about? The client could, in theory, just get that packet back and reconnect with that version, if it supports it.

The problem: if you just do that naively, you could expose yourself to a downgrade attack. What if an attacker sent that packet? The client supports versions 1 and 2, and it turns out that 1 is no longer safe because we found a really bad bug in it — we don't want an attacker to be able to force you to use 1, even though both the client and server support 2. That's the main thing.
N
So if, for example, we have QUIC v1 and then QUIC v2 comes around, some of the servers will support v2 and some of them will only support v1, and the browser can't just try v2 and then, in the case where the server doesn't support it, spend a round trip and fall back to v1 — that's too expensive, and that's a complete deal-breaker. So what do we do? Next slide, please.
N
Going back a year — I know we've been doing this for too long; I don't remember when we did what — at some point we had this as part of the spec: the original transport parameters had support for downgrade prevention. The way it worked is that the server would just send all its versions, and the client would say: wait a minute, I originally sent you this version, and now you're telling me that we switched to something else, but you supported the first one in the first place? What the hell — that must be an attack; abort. But someone at the time — I've forgotten who — pointed out that if you have multiple servers — as in, you talk to a first one, it tells you "no, I don't support this version" in Version Negotiation, you try again and you land on a different server because you got load-balanced, and they don't have the exact same set of versions —
N
— then you can end up in a situation where, for the client, it looks like a downgrade attack, when in reality you just hit two different servers that are both equally trusted and both have the private keys for the TLS cert. So that's bad, both in the multi-CDN case and also if you're just in your own network incrementally deploying your software, so there'll be a period of time where the new version is live here but not there.

So at the time we were starting to argue about how to fix this, and as with a lot of the problems that aren't blocking for QUIC v1, we said: okay, punt this out, we don't need it, we want to focus on getting QUIC v1 done — we can always build this as an extension later. This is that extension.
N
It comes with two main goals: solving the downgrade prevention problem, but also this spending-an-RTT problem. It allows us to deploy QUIC v2 on the Internet in a way that lets both the client and server use it without the deal-breaking spend of an RTT. Next slide, please. In order to do that, this draft introduces a concept called compatible versions. The idea is that QUIC v2 might end up looking very much like QUIC v1 — say, for example, QUIC v2 is QUIC v1 plus the DATAGRAM frame; that wouldn't change much for HTTP/3 — so you could say these versions are very similar, and it would be great if we didn't spend the RTT. We define versions as compatible when the server receives the client's first flight — in QUIC v1, that's the Initial containing the ClientHello — and can understand it and map it to a v2 (or version B, as I put it here) first flight. Then they're compatible.
N
The mechanism is: the client sends the first flight of the version it thinks is most likely to be supported. For example, if v2 just came out, that would be v1, and the client says: oh, by the way, I also support v2, which I know is compatible with v1. Then the server says: oh, I support that too — let me convert your Initial to whatever a first flight is in v2 and run with that. So the Initial that the server responds with is now a v2 Initial. Pretty simple; we just need to make sure that we don't allow downgrade attacks or foot guns. Next slide, please. So how do we do this? As with most extensions, it's a transport parameter, and the transport parameter contains different things on the client and the server. On the client — Ted, do you want me to go through the slides first, or questions now?
N
At the end — okay. So the transport parameter on the client contains a few things. The first one is the currently attempted version. That's redundant with the version in the long headers of that Initial, but what it means is that the version lands in the TLS key schedule — and if we've learned anything from the history of TLS, it's that if things are not in the key schedule, you can end up with some really funky attacks where someone swaps a value and you don't realize it. You also add your previously attempted version — meaning, if I tried version A, got a Version Negotiation, and then tried B, then when I'm trying B I'll say: oh, by the way, I tried A originally. And I send a copy of the Version Negotiation packet I got from the server — just the payload, which is simply the list of versions that were in the Version Negotiation packet. The idea is that the server, when it gets that, can check —
N
— whether, in response to this previously attempted version, it could have potentially sent this VN packet. And the client sends one last thing, which is the list of compatible versions that the server could switch to in-band if it supported them. Next slide, please. From the server's perspective: the first thing is the negotiated version — similarly, that's also the version that's going to be in the long headers, but this way it lands in the key schedule — and then the supported versions list. The idea there is that if there are incompatible versions — say version 3 is not compatible — you can tell the client: hey, by the way, for next time, I support this. The client can cache that if it wants, the same way we cache things based on Alt-Svc today, for example. Next slide.
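To make the field inventory concrete, here is a hedged Python sketch of what the two sides of this transport parameter might carry. The field set follows the talk (client: currently attempted version, previously attempted version, received VN payload, compatible versions; server: negotiated version, supported versions), but the wire layout, the field names, and the simple big-endian encoding are illustrative assumptions, not the draft's actual encoding.

```python
import struct

def _vlist(versions):
    """Length-prefixed list of 32-bit version numbers (illustrative encoding)."""
    return struct.pack(">B", len(versions)) + b"".join(
        struct.pack(">I", v) for v in versions)

def encode_client_version_info(current, previous, vn_versions, compatible):
    """Client-side version-negotiation transport parameter (sketch).

    current     -- version in this connection's long headers (authenticated
                   here so it lands in the TLS key schedule)
    previous    -- version attempted before a received Version Negotiation
    vn_versions -- copy of the VN packet's version list, if one was received
    compatible  -- versions the server may switch to in-band
    """
    return (struct.pack(">I", current) + struct.pack(">I", previous) +
            _vlist(vn_versions) + _vlist(compatible))

def server_check_no_downgrade(tp_current_version, long_header_version):
    """The core downgrade check: the version authenticated inside the
    transport parameter must match the version seen on the wire."""
    return tp_current_version == long_header_version
```

The point of the redundancy is exactly what the talk describes: an attacker who rewrites the (unauthenticated) long-header version cannot also rewrite the copy inside the encrypted, authenticated transport parameters, so the mismatch is detected and the handshake fails.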
N
So how does this work for downgrade? If the attacker is tampering with the version in the long headers, the first field in the transport parameter won't match, so the TLS key schedule will fall over. And the way QUIC is defined, you have versions in a bunch of long headers, but the first one you get is what you establish for that connection — if it changes after that, you drop those packets on the floor. So you have that strong binding there. And then, if an attacker forges a Version Negotiation packet —
J
N
— automatically? Sorry, that is what I meant: both sides, upon receiving this, need to make sure that the values match — agreed. And if an attacker forges this — great: because those lists are there, the server can say, wait a minute, I would never have sent you this VN packet, because I know all my servers support this version — that's bad. And if the server deployment has different banks, where it knows that over here I support 1 and 2, and over here I only support 1 —
N
All right, so the mechanism is pretty simple, and the draft tries to describe it. The draft originally only did compatible versions, and a few weeks ago, before the deadline, we rather rushed to add the downgrade prevention, which makes the draft really clunky to read — that's on me. I think we need to work a lot on the framing and refactor it to make it a lot more sensible, but I think the mechanism is correct.
A
I can see a lot of people want to talk about this, and we have about five minutes, so I would suggest that we focus on: is this an area the working group thinks it needs an extension in? Is this a reasonable starting point for that extension, knowing that it can change? And do we need to do it now, or do we need to wait? Questions of clarification only if you need them to answer those questions. Please go ahead, Ted.
AB
My clarification question actually relates to the discussion we had yesterday about ALPN. (We can't hear you, Ted.) Okay — I wanted to ask about the relationship between this and the discussion we had yesterday about potentially using ALPN for the full stack, as opposed to just the particular application layer. If we do ALPN with the full stack, does that obviate the need for this? Okay, thank you.
P
Kazuho Oku: Thank you for presenting the draft. I think this is important work. On the other hand, I wonder if there's an immediate need for this to be adopted, for two reasons. One is that something like datagrams can be negotiated using an extension, and we might decide that we only need a version number change when the handshake packets being exchanged actually change. The second reason is that for something like version aliasing, we need a different kind of downgrade protection.

P
For example, the proposal we had using a new token frame was to use the value of the token to see if a downgrade should be prevented — so this one doesn't address all the cases where downgrade should be prevented. That's one of the reasons I wonder if we can park this until we have a new QUIC version that actually uses a different crypto mechanism.
J
This document needs a lot of editorial work — I provided that feedback, and I think there are different ways to spell it — but the basic framework of what David described, which is to have the incompatible and compatible upgrades, is quite valuable. So I think we should adopt it now and treat it like the other extensions. I don't think we need to block QUIC v1 on this being complete, but I do think we need to be working on this, so that we have a story for when we do have QUIC v2.
C
(If this mic's in the right place for us —) Eric Rescorla: I'm one of the authors. I think we've hashed this out; I frankly have some misgivings about the incompatible negotiation, but we can fight that out in the working group. I think having a good item to work on for negotiation is something we should do, and we can figure out the details later — this is a good starting point.
K
(Google.) Yes, I also think we should adopt this. If it turns out that, due to the editorial work, it's easier to do so sometime between now and the next IETF in a few months, that's fine, but I think sometime in the next three to six months we should be doing this in this working group.
N
Just a ten-second answer: currently QUIC at the transport layer doesn't have a concept of minor versions, but maybe compatible versions is that — maybe there's a mapping layer. Let's take it offline and come back to this, because I think that's an interesting point. Okay.
O
Basically, you want load balancers to work off the connection ID rather than four-tuples, so that you don't break NAT-rebinding resilience and so on. Ideally, for a low-state load balancer, the connection ID should contain the server ID mapping, and so this draft is basically proposing several standardized methods of encoding the server ID mapping in the connection ID, so that the load balancer can understand a packet and route it to the correct server. Next slide.
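As a rough illustration of the idea (not any specific algorithm from the draft), the following Python sketch encodes a server ID into a connection ID in a form a low-state load balancer could consume. The first-byte layout with config-rotation bits, the field widths, and the function names are all assumptions for illustration; the draft's actual algorithms also include obfuscated and encrypted variants.

```python
import os

def make_connection_id(server_id: bytes, config_rotation: int,
                       total_len: int = 8) -> bytes:
    """Build a routable connection ID (illustrative "plaintext" layout).

    The top two bits of the first byte carry a config-rotation codepoint,
    so the load balancer knows which encoding/config generation is in use;
    the server ID follows, padded out with random bytes so different
    connections to the same server still get distinct CIDs.
    """
    assert 0 <= config_rotation < 4
    first = (config_rotation << 6) | (total_len & 0x3F)
    pad = os.urandom(total_len - 1 - len(server_id))
    return bytes([first]) + server_id + pad

def route(cid: bytes, server_id_len: int) -> bytes:
    """Load-balancer side: extract the server ID to pick a backend."""
    return cid[1:1 + server_id_len]
```

Note the privacy trade-off discussed later in the session: with a plaintext layout like this, an on-path observer can also extract the server ID and link connection IDs that route to the same server, which is exactly what the obfuscated and encrypted algorithms try to make harder.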
O
That's also in the draft. There is also interest in eventual hardware crypto offload for QUIC, and after some conversations with Intel, I was certainly convinced that that would very much benefit from somehow encoding the length of the CID inside the CID itself. So, for those trying to build such services, the draft provides a standard way to do so, using a fixed number of bits to solve that problem. Next slide, please. So, basically, the underlying value —
So,
basically,
you
know
the
the
underlying
value.
O
O
This
is
fairly
clear
from
the
draft,
but
I
do
want
to
highlight
this
point
and
a
lot
of
conversations
about
security.
It's
a
very
much
a
binary
thing
where
something
is
secure,
or
it
is
not.
The
point
of
a
lot
of
these
server,
ID
and
coding.
Algorithms
is
to
is
to
hide
the
fact
that
a
connection
that
two
connection
IDs
that
belong
to
the
same
connection
are
going
to
the
same
server.
O
One part is what method you're using to encode the original — that is, the issued — connection ID in a retry token, to allow these retry services. The second is whether or not you're using the CID length encoding, and the third is what algorithm you're using to encode the server ID in the connection ID. Next slide. Then there's more configuration that's specific to each of those algorithms.

O
So again, the first part of this is configuration; the second part is: how does the configuration get around? The draft definitely enables you to go ahead and use whatever existing config-distribution infrastructure you have — that is probably encouraged — but for people who don't have that, there is a very simple in-band protocol.
O
I have no particular love for what I wrote down there, and if the working group as a whole gave me a solid message to go with one of these four options, I would be happy to do so, with two caveats: number one, something needs to exist to be used; and number two, I do want to talk a little bit about putting it in a different draft. There is, of course, the general problem of proxy-to-server communication in TCP.
O
Some other discussion points. One unfortunate property of all of this is that server infrastructures are deciding whether or not clients are linkable, when, fundamentally, that is something that affects the client most of all. It would be great to have a way to communicate to clients how linkable they would be if they attempted to migrate; then clients could make the decision as to whether they wanted to go ahead and migrate, or tear down the connection and start over with a whole new connection —
O
— should they change IP address. You could do something in the transport for that, particularly if something like QUIC-LB were a standard reference point on what was linkable and what was not. I would appreciate feedback on that; maybe there should be a QUIC extension. Second issue: retry services are, of course, version-specific, because Retry packets are version-specific. All the other stuff is based on the connection ID, and the connection ID is part of the invariants — as is the length field — which probably will not have a problem. So do we really want things that are QUIC-version-specific and things that are not version-specific in the same draft? If we rev QUIC, that might just be an editorial pain. Without wading deep into the algorithms, for people who weren't reading along: some of these methods involve encryption and some of them don't — there's one that is sort of an obfuscation algorithm, which attempts some mathematical operations to make the server ID less obvious.
O
A
lot
of
people
will
say:
I've
heard
from
a
lot
of
people
that
that
don't
see
that
it's
clear
that
that's
cheaper
than
just
doing
crypto.
On
the
other
hand,
I've
heard
some
pushback
from
others
that
that
they
would
really
I
see
didn't
see
a
an
option
like
this
and
this
okay.
This
actually
kind
of
moves
on
to
the
second
point,
which
is
I,
think
we've
struggled
a
little
bit
to
get
really
robust
engagement
with
a
lot
of
the
low
state
load,
balancer
vendors,
particularly
in
the
cloud
environment
Nick.
O
My co-author has talked a little bit with the Azure people, and frankly, a lot of it is: what are the obstacles to get through? The initial reaction is "this sounds like a pain — can you change QUIC so we don't have to do anything?", which is probably not a realistic view of things. So really, it's about what people will do. This room is essentially the community of QUIC server implementers, so obviously your feedback is quite important.
O
It would really be nice to — I'm still trying to get feedback from the other end of this communication, to answer some of the questions, like the third one I have on this slide. Next, please. So I would like to move for adoption. There are a few fairly minor issues open on my private GitHub, but ultimately I don't want to move forward on these larger questions without broader working-group input, and these algorithms are not that complicated — I've already played with implementing them.
A
Yeah — and that's it. Okay, thank you, Martin. So again, we only have a few minutes to discuss. What I'd like people to focus on: is this appropriate for this working group? Is this the right time? Is this the right starting point? And especially — I've heard a question of whether it's appropriate to do this in the QUIC working group or elsewhere — if you can speak to that, that'd be interesting. Please go ahead.
J
So I'd like us to look pretty seriously at that sort of thing. One of the things this looks a lot like is the sort of work that happens in the ops area in this organization, and I predict that we'll have some YANG people looking at this one. I don't know how you want to manage that, but expressing these as YANG models is something that I suspect will happen, whether you like it or not.
E
Praveen: I think the draft is great, especially the fact that we are giving options that are incrementally deployable instead of going straight to a really expensive solution — it provides multiple solutions along the way. I think this extension, of the three we talked about, is the most important, because it directly affects QUIC v1 deployment in data centers and in the cloud. So in terms of adoption, I think it's the most important, and we're already talking with third-party load balancers about this, so I think it's very timely and very, very important.
AC
Mirja Kühlewind: So I think Martin mostly said what I wanted to say. I think it's good that you keep the different parts apart — as much as I get that they belong together, I think it's easier to rip them apart. I'm especially uneasy about defining a completely new protocol here for configuration. I think what could be in scope here is the configuration itself, the YANG model, and the algorithms, for sure, because we have an open issue on that one in the manageability draft.

AC
I don't care if it's in that draft or a separate draft, but I think it's very much in scope. I just wanted to add that there are ongoing efforts about proxying in general, so that could fit, and we're moving on that and will probably propose some more work at the next meeting. So that's in process, and we should sync up a little bit more.
V
Eric Kinnear: keeping things quick — what Tommy said, but I'd also like to say yes, we should adopt this, because as people trying to deploy things at scale on the server side, it will be very helpful to have guidance in this area. I think that's going to be instrumental in making a QUIC that many people can deploy, as opposed to one that only a few people can deploy.
Y
So I think this is great work — thank you; it's really fantastic to actually define every way of doing it. We absolutely should adopt this; it's one of those things that's missing from being able to actually deploy at large scale. I agree that the protocol part should probably be split out, because some of us will just want to express this as YANG, some people want to express it as JSON, other people want to do it in-band — different people do different things — but ultimately, how we exchange the bits in this day and age really doesn't matter for configuration. Just define the semantics, and everybody will figure out how to encode it. The actual algorithms are what's more critical — that requires interop, and that's awesome, what you've done. So absolutely we should adopt this. Thank you. One more remote —
F
Well, while you're pulling this up, I'll talk very quickly. Marten Seemann and I have been working on a QUIC Interop Runner that basically automates interop testing among various implementations. We introduced this at the interim, where we had a couple of implementations participating, and now we have a larger number — at last count, over six or seven implementations are in the Interop Runner, and they're all running against various tests.
F
We have, I think, managed to get most of the tests that are currently part of the interop suite into the Runner. The value of using this Runner is obviously that you automate all the tests, and people are exporting their implementations, so you can run against anybody's implementation at any point in time. It also generates logs and various other artifacts that you can look at for debugging purposes. We encourage those who haven't yet put their implementations into the Runner to please go ahead and do so. That's about it.
F
I was going to show the chart that it produces — the matrix — but, yeah. If you have any questions about it, please write to me or to Marten. There's also a Slack channel that you can join. (What's the Slack channel?) The quic-network-sim one, I think it's called — okay, yeah. Great.