From YouTube: IETF102-QUIC-20180718-0930
Description
QUIC meeting session at IETF102
2018/07/18 0930
https://datatracker.ietf.org/meeting/102/proceedings/
A
Ok, let's go ahead and get started. This is QUIC; if you're not here for QUIC, you're in the wrong room. If we could ask the folks coming in to close the doors, that would be helpful. Thank you. So, this is QUIC. We have one session in this meeting. Lars, my co-chair, wasn't able to make it to this IETF, so he's going to be joining us remotely; I'm flying solo. Also, our area director Spencer is remote. As you can see, we have a lot of other remote people as well, on Meetecho.
A
This is the Note Well; hopefully, by this part of the week, you're familiar with it. These are the terms under which you participate in the IETF, or a summary thereof, really: regarding intellectual property, regarding your behavior and how you treat other people, and copyright, and lots of other things. So if you're not familiar with these, please do a search for "IETF Note Well" on your favorite web search engine; you'll be able to find this document and understand what you commit to when you participate in this process.
A
So, Joe has very kindly offered to scribe; many, many thanks for that heroic duty you're performing. The blue sheets are on the sides. If we can start circulating those, and once they get to the back, if you could please send them to the center so those folks get a chance to sign in as well, that'd be great. Our agenda today is pretty tight, so I'm gonna try and keep things snappy. We're gonna have some updates from the base draft editors. We're gonna touch base with the operational draft editors.
A
We're gonna discuss some of the issues on the base drafts, excuse me, and then talk about the foreseeable future: what we plan for the next steps with the working group, what our status is, and how we think we're gonna finish this work. And then, if we have time, we have a last-minute addition to the agenda, which is an update on the spin bit from Marcus. Any bashing of the agenda?
B
Probably the biggest change that has happened recently is that the stream zero special crypto thing has turned even more special and no longer consumes a stream. Ian's gonna go into details about that in a little bit, so we'll move on. Next: it looks like a lot changed in draft 13, and that's because a lot has. We added support for ECN; thanks, Magnus, for all of the extra work that was put into that one. Then there were changes to connection ID handling, and that's not the end of that.
B
We're discovering now that connection IDs are interesting and complex and new ground, and breaking that new ground has turned out to be a little bit painful. Mike's got some more on that later on. We added the ability for servers to advertise a preferred address to which they would like the connection to migrate.
B
We made stateless reset symmetric, and we can now actually extend QUIC in various ways — specifically new frames, in this case. There's a bunch of changes to HTTP and QPACK; I think we're up to -01 on QPACK, which means we're still a little bit early on that one, and so that's had a lot of churn. Next, please.
B
This is a considerable improvement over where we were six months ago, mainly because we've gone through and labeled about half of these as parked for version two. For those people not familiar with the way we're dealing with this: we've identified a bunch of issues that we're not going to solve now, and those are still open in the issue tracker, but they're labeled with a quicv2 label. It basically means: yes, we think this is important, but it's not important for us to solve right now.
B
My assessment of these — and I think the other editors are in agreement on this — is that they're pretty close to done, and the main reason that they're not done is because it takes work. If anyone can help us out with any of this, that would be greatly appreciated, and so we're going to be looking to assign work to people over the next little while.
B
But from my perspective, the end is in sight, and it's just a matter of sitting down and typing. Some of these will require some discussion in the working group to reach conclusions; for some of them the conclusions, at least to me, seem fairly obvious, but we'll discover interesting surprises along the way. They're relatively minor for the most part, and so I expect that we'll be able to have something fairly concretely complete by the next time we meet at one of these plenary meetings. So the end is in sight.
A
Some more details about that later, yeah. Okay, thanks! So next up on the agenda is Ian, talking about stream zero. While we get ready for that — I skipped something, I apologize: John was gonna do a very quick update of what happened on the weekend with the hackathon. Just wanna use this mic real quick, maybe.
D
There were some people working on draft 13, but many were still working on draft 12. I heard some mumbling here and there about trying to get packet number encryption right, and people also seemed to be working on that. So I think it would be nice to actually move forward from draft 12 to draft 13 — a draft that, you know, is a big, big shift from what we've done in the past.
E
What I'll add here is: if you go to the GitHub, I believe Kazuho has produced a PR that has an example ClientHello with packet number encryption applied, and you may find that to be a useful test vector for figuring out why your PNE doesn't work. I'll try to get Minq to make one and dump out all the intermediate values as well.
E
Martin's
draft
because,
like
that
does
like
it's
very
it's
very
hopeful
to
like
have
that
kind
of
stuff.
That's
the
one
case
where,
like
basically,
you
can
just
ingest
the
message
with
no
state
at
all
and
like
running,
and
we
produce
the
exactly
the
crypto.
Everybody
else
adds
so
that
may
help
people
get
PID
workings,
peony
I
know.
Is
it
all
tricking
people,
okay,
thanks.
A
Just for folks who are new to this or haven't been following closely: we have a very active group of implementers. I think we have on the order of 10-ish implementations right now. We've been meeting in the hackathon before the main IETF meetings, as well as adjacent to our interim meetings, and they use a Slack group to communicate. If you're interested in all of that, please get in touch with me or Lars or any of the editors, and we can introduce you to that effort. So, on to stream 0.
G
Alright, so I'm gonna do a summary, mostly of what we talked about at the interim. So if you were there, then it's pretty much just a rehash. I am gonna go fairly fast, because most of these issues are fairly decided, but I want to kind of go over them, because Martin in particular is going to talk about some of them in the later slides.
G
For reference, as background: with TLS over TCP, you have TLS messages in TLS records, and those may or may not line up with TCP segments. So a TLS record can span multiple segments, or it can line up, or there could be multiple TLS records within a segment. That's sort of the architectural model. Next slide: in draft 12, which is prior to the stream zero changes, this is sort of the QUIC stack; as you can see, it's a little more complicated.
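The TLS-over-TCP layering described here can be sketched in code: because records need not line up with segments, a receiver keeps a reassembly buffer and pops complete records off the front regardless of how the bytes arrived. This is an illustrative sketch, not from the talk; the 5-byte header layout (1-byte type, 2-byte version, 2-byte length) follows the TLS record format.

```python
def extract_records(buffer: bytearray):
    """Pop complete TLS records off the front of a reassembly buffer."""
    records = []
    while len(buffer) >= 5:                      # full 5-byte record header?
        length = int.from_bytes(buffer[3:5], "big")
        if len(buffer) < 5 + length:             # record spans a future segment
            break
        records.append(bytes(buffer[:5 + length]))
        del buffer[:5 + length]
    return records

# Two records delivered across three TCP segments of arbitrary sizes:
rec1 = b"\x16\x03\x03\x00\x04abcd"               # handshake record, 4-byte body
rec2 = b"\x17\x03\x03\x00\x02hi"                 # application-data record
stream = rec1 + rec2
buf, out = bytearray(), []
for segment in (stream[:3], stream[3:10], stream[10:]):
    buf.extend(segment)
    out.extend(extract_records(buf))
```

The point of the sketch: segmentation is invisible to the record layer, which is exactly the property the QUIC stack in the following slides restructures.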
G
So, you know, at the very least it's probably one extra level of indirection here. Next slide. So in draft 13, the design — otherwise known as the QUIC record layer design — kind of squashes this down by one layer, and it makes it so the TLS messages are in QUIC frames, which are then in QUIC packets, which is sort of the crypto record layer for the transport. And this was already very much how 1-RTT and 0-RTT worked anyway.
G
There was no TLS record layer involved, unlike TLS over TCP, so in some sense this is a unification of the design we already had, where before it was kind of half in one world and half in another. Next slide. We add a new CRYPTO handshake frame; so, instead of actually putting the handshake on stream zero, there's a special frame. It removes a few features: you can't FIN it, because you never really wanted to be able to FIN a crypto stream.
G
There's no stream ID, because that doesn't make any sense, and each encryption level restarts at offset zero, to avoid any ambiguity about what happens if the start offset of the next encryption level is before the end offset of the previous encryption level, and various other weird things. It is not a stream frame, so therefore it is not flow controlled by default.
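The CRYPTO frame behavior just described — no stream ID, no FIN, offsets restarting at zero per encryption level — can be sketched as a small per-level reassembler. This is a hedged illustration, not any implementation's real code; the level names and class layout are invented for the example.

```python
class CryptoStream:
    """Reassembles CRYPTO frame data for a single encryption level."""
    def __init__(self):
        self.chunks = {}        # offset -> bytes, buffered out-of-order data
        self.delivered = 0      # contiguous bytes handed to TLS so far

    def on_crypto_frame(self, offset, data):
        self.chunks[offset] = data
        out = b""
        while self.delivered in self.chunks:     # deliver in order, no gaps
            piece = self.chunks.pop(self.delivered)
            self.delivered += len(piece)
            out += piece
        return out

# One reassembler per level: offsets are independent across levels.
levels = {lvl: CryptoStream() for lvl in ("initial", "handshake", "1rtt")}
first = levels["initial"].on_crypto_frame(0, b"ClientHe")   # delivered at once
gap   = levels["initial"].on_crypto_frame(10, b"...")       # buffered: hole at 8
rest  = levels["initial"].on_crypto_frame(8, b"ll")         # fills the hole
hs    = levels["handshake"].on_crypto_frame(0, b"EE")       # restarts at offset 0
```

Keeping a separate offset space per level is what removes the ambiguity mentioned above about one level's start offset landing before the previous level's end offset.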
G
So the new approach has clear rules about where every handshake message is sent. There's no double encryption. QUIC doesn't really need to know anything about the TLS handshake; basically, keys get exported, and TLS says "send this at this encryption level," and, vice versa, QUIC passes messages and data back to TLS and says "this was received at this encryption level."
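The division of labor just described can be sketched as a minimal callback interface between the transport and a TLS stack. The method names and the stand-in class here are invented for illustration — they are not any real library's API — but they mirror the "QUIC hands bytes up with a level, TLS hands bytes down with a level" contract.

```python
class FakeTls:
    """Stands in for a TLS stack driven by the transport (illustrative only)."""
    def __init__(self):
        self.received = []   # (level, data) handed up by QUIC
        self.to_send = []    # (level, data) QUIC must place in packets

    def handshake_received(self, level, data):
        # TLS consumes handshake bytes received at a given encryption level
        # and may queue new messages (possibly at another level) in response.
        self.received.append((level, data))
        if level == "initial":
            self.to_send.append(("initial", b"ServerHello"))
            self.to_send.append(("handshake", b"EncryptedExtensions"))

tls = FakeTls()
tls.handshake_received("initial", b"ClientHello")
# QUIC drains to_send, placing each message in a packet of the named level:
packets = [(lvl, msg) for lvl, msg in tls.to_send]
```

Because each message carries its encryption level explicitly in both directions, neither side needs to second-guess the other's record framing — which is the "no double encryption" point above.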
G
Yeah, no double encryption, and you also get built-in path validation. Before, we actually had PATH_CHALLENGE and PATH_RESPONSE, and they were allowed during the handshake, because we didn't have a good way of actually making sure that the peer is on path. But now the receipt of anything at the second encryption level — well, technically the third, but receipt of handshake encryption — actually proves that you're on path. So in practice you don't need separate on-path validation. Next slide: the costs. It basically requires a new API for TLS.
G
That's probably the largest cost, at least that we're aware of. Obviously it requires a few changes to the QUIC drafts as well. But at this point there are already some implementations available, and at least the interop situation is actually a little bit okay — I think there might be three people who have interop now, but I'm not sure. So, next slide.
G
So, another change we made at the same time: there were some issues around not having separate packet number spaces. In particular, there was an issue that Christian identified a long time ago where, by spoofing an unencrypted packet, you could essentially create a hole in the peer's packet number space, and then they would never be able to recover from it for the rest of the connection — and that hole would actually be in the encrypted portion of the packet number space, not
G
the unencrypted portion. The details are in issue 1018, but the critical thing that needs to happen is: when you get an acknowledgment, you need to know what encryption level the packet being acknowledged was at. So you need some unambiguous way of knowing "this peer is acknowledging packet five, and it was at this encryption level."
G
The other issue that was open is that acknowledgment of packets at one level could actually — at least in the loss recovery doc at the time — cause spurious retransmissions of packets at a different encryption level. So you could receive an Initial-encrypted packet, and that could come after, say, 0-RTT. Maybe the peer can't process 0-RTT yet; suddenly you acknowledge the Initial-encrypted packet, and fast retransmit kicks in and says:
G
"Oh, this peer must have lost all the 0-RTT," and you retransmit it anyway, even though the real problem is they just couldn't decrypt it, and so therefore they couldn't acknowledge it. Next slide.
G
So the design for separate packet number spaces is basically: you put an ACK frame in the packet number space you want to acknowledge. So there's no special ACK frame or new ACK frame; it's just, you know, ACK frames in Initial acknowledging Initial packets, ACK frames in Handshake acknowledging Handshake packets. You can't send ACK frames in 0-RTT, because there's no point; and 1-RTT ACK frames obviously acknowledge — well, actually they acknowledge both 0-RTT and 1-RTT packets.
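The rule described here — an ACK acknowledges only packets in the space it was received in, 0-RTT and 1-RTT sharing one space, and 0-RTT carrying no ACKs at all — can be captured in a small table. The string names and table encoding are illustrative, not a wire format.

```python
# Packet type -> packet number space, as described in the talk.
PN_SPACE = {
    "initial":   "initial",
    "handshake": "handshake",
    "0rtt":      "application",   # 0-RTT and 1-RTT share the application space
    "1rtt":      "application",
}

def acked_space(packet_type_carrying_ack):
    """Which space an ACK frame refers to, given the packet that carried it.

    ACK frames may not appear in 0-RTT packets; otherwise the ACK refers to
    the space of its carrying packet.
    """
    if packet_type_carrying_ack == "0rtt":
        raise ValueError("ACK frames may not appear in 0-RTT packets")
    return PN_SPACE[packet_type_carrying_ack]
```

This is the "unambiguous way of knowing" from the earlier discussion: the carrying packet's own encryption level pins down which space the acknowledged packet numbers live in.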
G
It also makes the rules quite simple: the rules for recovery for all the different packet number spaces are pretty much identical. There are no special cases like "if this is at this encryption level, do this," and so on and so forth. And yeah, it removed the temptation to do some relatively stupid optimizations that we came up with in the meantime; the more we went through them, the more we realized they were foot guns that would get you into a heap of trouble. So, next slide.
G
So the big cost is basically data structure cost and tracking cost. I think the logic is fairly straightforward, but you do need, at least temporarily, to potentially keep around multiple kinds of ACK frame data structures, and you may need to keep around multiple flights' worth of packet data structures. The other cost is that you're gonna end up with more of what we call coalesced packets — previously referred to as compound packets — where you have two QUIC packets in one UDP datagram. So, excellent. Next: transport retry.
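Coalescing, as just described, relies on each long-header packet carrying an explicit length so a receiver can walk the datagram packet by packet. The sketch below uses a deliberately simplified toy header (1-byte flags, 1-byte length) — real QUIC long headers are larger — just to show the walking logic.

```python
def split_coalesced(datagram: bytes):
    """Split one UDP datagram into its coalesced packets (toy header format)."""
    packets, i = [], 0
    while i < len(datagram):
        flags = datagram[i]
        if flags & 0x80:                      # long header: explicit length field
            length = datagram[i + 1]
            packets.append(datagram[i:i + 2 + length])
            i += 2 + length
        else:                                 # short header: extends to the end
            packets.append(datagram[i:])
            break
    return packets

# An "Initial" coalesced with a "Handshake" packet in one datagram:
initial   = bytes([0x80, 3]) + b"abc"
handshake = bytes([0x80, 2]) + b"hs"
parts = split_coalesced(initial + handshake)
```

Note the asymmetry: only long-header packets carry a length, so a short-header packet can only be the last one in a datagram — which is why it simply consumes the remainder.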
G
So this wasn't really in the original list — or it kind of got baked into the original list of comments about issues with stream 0. But in this process it also became clear that there is a desire to have a non-TLS kind of retry, a stateless reject mechanism, much more similar to TCP SYN cookies; and, you know, this is mainly for DDoS reasons.
G
So the big goal is to move retry into the transport, and that has largely worked out well, but there are a variety of detailed issues that we kind of didn't fully think through when we first wrote up the text. So, next slide, and I'll go over them. The design here: the Retry is not encrypted at all — it's just plain text; it's a long header packet. It actually repeats the destination connection ID that was in the client's Initial, and the reason for that is to prove that the sender is on path.
G
Basically, it then has token content: the server sends me a Retry that has a token in it; I put that token in the next Initial that I send back to the server; and then, in theory, this token is proof that I am on path and I am the IP that I say I am. And, as you can see, the Initial has the token content right at the beginning, right after the long header.
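The Retry/token round trip just described can be sketched end to end. The token construction here (an HMAC over the client's address) is an invented example of one stateless encoding — the draft does not mandate a format; any opaque blob the server can later verify works.

```python
import hmac, hashlib

KEY = b"server-secret"          # illustrative server-side secret

def make_token(client_addr: str) -> bytes:
    # Stateless token: binds the token to the address it was issued to.
    return hmac.new(KEY, client_addr.encode(), hashlib.sha256).digest()

def server_on_initial(client_addr, token):
    """Stateless server: no valid token -> send Retry; valid token -> proceed."""
    if token and hmac.compare_digest(token, make_token(client_addr)):
        return ("proceed", None)
    return ("retry", make_token(client_addr))

# First Initial carries no token, so the server answers with a Retry:
action, token = server_on_initial("192.0.2.1:4433", None)
# The client echoes the token in its next Initial, proving it is on path:
action2, _ = server_on_initial("192.0.2.1:4433", token)
```

Because the token only verifies against the address it was minted for, an attacker who merely observed it from a different address gets another Retry rather than a connection — which is the SYN-cookie-like DDoS property mentioned earlier.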
G
So, yes — and the other thing is that this token is actually also used for 0-RTT: there's a new token frame inside the encrypted envelope that allows the server to refresh your token once you're forward secure. And of course you must not mix these tokens: tokens that are used in the handshake, for basically DDoS prevention, must not be used later for 0-RTT. Next slide.
G
I kind of went over these before, but: Retry minimizes CPU usage on the server side under DDoS and other attack situations, and it definitely simplifies the interaction for the stateless reject use case. Oh yeah, and there's no packet-type ambiguity about where you put things. Next slide. Alright, so these are the issues that are currently open, with additional description.
G
There were a lot of things that were intended to be written down and just were not written down quite right, so draft 13 has a bunch of little errors that, thankfully, Martin has mostly fixed at this point, and did a nice job of it. But then there are a few other issues, and at this point that's it from me — next.
G
Next up: the operations drafts. Does anybody remember these? We actually forgot about them for a while, and we're the editors. The point of these is to provide advice to application developers and mapping designers on using QUIC, right. So we are chartered in this working group to work on one application mapping, to HTTP, with HTTP/2-like semantics.
G
This is, however, a general-purpose transport protocol, so we'd like to have sort of either template information, or other considerations that you have to have, when you're mapping something other than HTTP onto QUIC. And then we have another document, which is advice to application and network administrators and network management tool designers on what deploying QUIC is going to mean for the management tasks that they, you know, commonly do.
G
The idea is to have sort of a user's guide to the QUIC wire image for people who are only looking at the wire image, so they don't have to go into the protocol documents themselves. So this is really the fifth focus area in the working group charter. Applicability: we updated it this time, and we made basically no changes, because there had been no text added to it since the last time; we just revved it to keep it from expiring.
G
We'll talk a little bit about the scope of this document, because I'm not sure we have it quite right. So, originally we had one document, which was applicability-and-manageability. We decided to split those, in part because one looked like it was going to be done way earlier than the other one — just, you know, with respect to the amount of discussion we needed to do. We haven't changed it much, but there's been no controversy, so there's been no attention, right?
G
So it's like: okay, great — well, we'll take a look at the open issues and talk about how we might want to re-scope this in a couple of minutes. In the meantime, we've done some thinking about manageability and the flux in the wire image. As we've been developing the protocol, basically, trying to track changes in the headers was completely pointless, so we just didn't do it.
G
So, the changes in -01: we updated to the rev-13 headers, we added coalesced packets, we fixed CIDs, we encrypted the packet number. That actually allowed us to delete a lot of text, because we were talking about what the packet number might be used for, and it turns out now it can be used for absolutely nothing. We mention SNI for application identification; so, when SNI gets encrypted, we may have to tear that back out too.
G
There's a little bit more work to do on load balancing and usage of the connection IDs, but it's our opinion as editors that this document, at that point, is mostly done, right? The protocol has a relatively minimalistic wire image, so the user's guide to the wire image is: here's what you can do with it — not much, and that's by design. There may be some commentary that comes into that, where we, you know, talk about why not much; but in terms of the technical content, this is pretty close to done.
G
We looked at the open issues. Originally I said: okay, well, we'll classify these by applicability and manageability — and the list of open issues on manageability that we need to actually talk about is empty. Classifying the open issues on the applicability draft, we have sort of three classes. One is endpoint implementation issues and endpoint API issues; this is a little bit less about writing bindings and more about what sort of interface the transport protocol should give to the layer above, right.
G
So we originally envisioned this as a document that would be like a binding writer's guide, and it turns out that a lot of these issues came from discussions on the base drafts and were sort of shunted over to the ops drafts. And it looks like there might be, from these issues, some demand for some sort of abstract interface document; whether that abstract interface specification goes in the applicability document or gets its own document is something that we should talk about.
G
Do we want to talk about multiplexing — like, multiplexing multiple sets of streams over CIDs on the same 5-tuple? Talk about the trade-offs in whether or not you do keep-alives, or decide that you're going to do 0-RTT resumption when you have a NAT state drop? And then the third of these are sort of operational guidelines for serving QUIC. I put a question mark on applicability here, because they're currently filed as probably on applicability, and they were in the original scope of applicability.
G
For the open issues that we know belong in applicability, Mirja and I can suggest text, but we might not be the best potential authors for some of these. Some of this is, like, version rollout and rollback in large clusters, and migrating Google QUIC to QUIC, right? So people who have experience with that — I'm not looking at anybody in particular — might have, you know, better input than we have on that.
G
If you look at one of these issues and would like to assign yourself to it, that's graciously accepted; PRs are also graciously accepted. I wanted to have a bit of a talk now on document organization and audience, especially on applicability, because right now
G
we're talking to implementers, we're talking to sort of clients at the application layer, and we're talking to deployers with that document right now — and it might be that that's too much, and we might want to split this out into a third document. Or, if we're doing too much, we might want to push them all back into one document, right? I mean, since we're picking this work back up now, the hope, you know, is that manageability should basically be done in the Bangkok timeframe.
D
Jana, Google. So I think when we talked about the applicability document a while ago, the application-layer users and the deployers were sort of the important parts of this. What for implementers are you doing in the document? I don't see anything in there that I think of as implementer fundamentals, really.
G
A little bit, kind of: CID generation, port number use, right — so not necessarily directly implementation-level stuff. There was actually a discussion about how to handle ICMP quoting, which got pushed over to the ops drafts and then got picked back up in the base drafts, because that's actually where it belonged; but there's still some stub stuff about, like, handling ICMP and what your error interface should look like. So these, I think, are things that really are talking to implementers.
J
Also, this touches a point about the interface discussion, because the interface needs to be implemented by the implementers and then used by the application, right? And currently the document gives hints about where an interface should be, but maybe we want to say more than that.
I
Gorry Fairhurst. I think there's a very different audience between people who want to know an API and use this, and those who are trying to run an operational network and want to know how it's behaving and what packets are important for them. The details are probably not in your list; the details are in how you write it, right? So we should be careful as we write it that we've picked the audience, because that's been the experience at the IETF.
K
Tommy Pauly, Apple. I would just like to reiterate what I think you and John have been saying: the actual abstract interface feels like it should be a different document, which can certainly reference this one. So I think this should describe all of the bits that are required to make a more normative claim about what an abstract interface should be, but that should be left for a separate exercise — which I think we should also do soon and start working on.
G
Is that volunteering to edit? Yes? Okay.
A
Cool, thanks. We've talked to Lars about this a bit more, and I guess now we're having that chat at the table: we're chartered right now to deliver these documents at the same time as the base protocol. It occurs to me that a lot of the folks who could contribute to this work are right now heads-down on that work. Would it be beneficial, do you think, to maybe just delay these documents' publication — not work on them, but publication — by even just a month or two, so you can get a little —
G
I'd suggest basically one interim cycle, right — so half a meeting cycle is probably enough, really. That's one and a half months, right? Well, we're —
M
Let me just make sure I understand what you're proposing before I comment, because it's a little hard for me to hear, Mark. What I heard you say was that you were proposing that publication of these drafts might be later than the publication of the protocol — is that correct? I was —
M
I think it might cause pain. Speaking personally, I'm particularly concerned about some of the operational guidelines aspects coming later than the protocol documents, in particular because we're putting the protocol documents out once we've got significant interoperability testing under our belt. I'm concerned that people will then almost immediately start using these — in fact, some people are already starting to use these — and those guidelines might be useful earlier rather than later, I think.
J
Mirja Kühlewind. I agree; I think they should be published at the same point in time. But for me, publication is when the editing phase is finished and we actually have the RFC published, right? And I think that can be aligned even if you send them to the IESG at different times and so on; we can then, at the end, wait until we publish them together, I hope.
A
Thanks, Brian. Okay, so next up we're gonna go into — if I can figure out this thing — discussion of open issues on the base drafts. The editors have selected six different topics or areas where they have questions for the working group, in clusters of issues, and we're gonna go through those. We want to leave some time, as we mentioned before, for planning, and then future-looking kind of planning at the end of the session. We also have the presentation on the spin bit, so I'd —
B
We have a Retry so that we can do this kind of address validation stuff that Ian was talking about before. It's token-based; it's pretty simple. However, we identified a whole bunch of issues around this: the spoofing problems we've talked about for a long time, the format, you know, looping, 0-RTT interactions, and how you do coalescing. So let's talk through this. The summary — Ian talked about this: the client sends an Initial, the server sends a Retry, and it's got a token in it.
B
The client puts that token in its next Initial, and we continue along with the pattern. An interesting point here is that, because the server chooses the connection ID that the client uses, the connection ID from the Retry packet is in the subsequent Initial packets — a property that was a little surprising. A result out of the stream zero design team work was that Retry can be sent multiple times.
B
As for the name, I'm sure we can paint that bike shed any color you like, but that's not important. Okay, so the first issue was that the format was a complete mess in 13. At the top there is a little picture of the octets that we have in the format in 13, but there was an observation that it is basically impossible to decrypt and decode one of these messages, because of the way that the packet number is encrypted and of arbitrary length. People have pointed out that's a bit of a problem.
B
Also, the point was made that Retry apparently included a packet number, which is apparently encrypted, but we don't actually know how. And so there's a proposal out there that's got a lot of positive feedback: we simply reorder the packet, and we don't include a packet number in the Retry message.
B
The consequence of that is that we can't coalesce Retry with anything else, but we don't actually have any need to do that, as far as I'm aware, so I'd suggest we're not worried about doing that. Now is your opportunity to tell me that I'm crazy and this is a bad idea, because I haven't merged the pull request yet — but I fully intend to, because we've got a backlog of things behind this change. Right, so, next.
B
So, because we have this multiple-Retry-packet interaction, there's actually no reliable way at the client to distinguish a Retry in response to its first Initial from one in response to the second, and there's an interesting problem here. If you retransmit your Initial, as you might have to do, you can get multiple Retries coming back, and ultimately, with varying delays on the network, you can get Retries from the first Initial mixed up with Retries from the second Initial, and you don't know which one is which — particularly when connection IDs aren't actually changed at the server end.
B
So the suggestion is that if the server doesn't intend to proceed with a handshake — or might not proceed with the handshake after this Initial — it has to provide a new connection ID, which must be different from the one that it received, and it's got to be a certain size. I mean, this is relatively straightforward: just gin up another random number and shove it in there.
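The suggested fix can be sketched directly: a server sending Retry draws a fresh random connection ID each time, required only to differ from the one it received and to be long enough that collisions are negligible. The 8-byte length here is illustrative, not the draft's requirement.

```python
import secrets

def make_retry_cid(received_cid: bytes, length: int = 8) -> bytes:
    """Fresh random CID for a Retry; must differ from the CID just received."""
    while True:
        cid = secrets.token_bytes(length)
        if cid != received_cid:       # the one hard requirement from the talk
            return cid

# Every Retry then carries a unique original-destination-CID, letting the
# client correlate each Retry with the exact Initial it answered:
received = b"\x00" * 8
cids = {make_retry_cid(received) for _ in range(100)}
```

With 64 random bits per CID, two Retries in one exchange repeating a value is astronomically unlikely — which is the "adequately unique" property discussed a little later.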
B
That's not particularly difficult for a server to do, and there's no reason for it not to do it. But this means that every single Retry that you receive will have a unique value for the original destination connection ID that you see. As a result, you can identify which Initial it is responding to — guaranteed, as opposed to maybe, in the past. And so, in the case where you have a packet that's retransmitted from the client and arrives after the client has sent another Initial,
B
you can distinguish between the one that's just an echo from the past and the one that's actually a response to your currently outstanding Initial on this connection. So that fix is also in that PR. We're moving on — feel free to speak up if you disagree with anything here. So, spoofing: it's relatively straightforward to spoof a Retry if you see an Initial; basically, all we have there is the randomized destination connection ID that's in the Initial packet.
B
On Retries, Kazuho had identified an interesting one where you can have the attacker effectively spoof a Retry and propose a different connection ID. If they're able to strip the token from the subsequent message — and they can, because they're a man-in-the-middle — then ultimately what the real server will see is something that looks like a real Initial, but actually has the attacker's choice of connection ID in it. Not sure what you're going to do with that.
B
The only requirement here is that the values are unique across this one particular exchange; there's no requirement for them to be globally unique or anything crazy like that. Of course, they're really big random numbers, so chances are they're gonna be unique anyway. So it's not globally unique — it's adequately unique. Yes, yeah.
B
It's none of the invariants we're promising, but maybe it's one of those things that we're not actively clamping down on, yeah. So, if you think about the way that we've done invariants: there's a wire image that QUIC version one has, and that wire image has a certain set of properties, some of which we're actually guaranteeing that we'll maintain across into version two. This is not one that we're going to make that promise for; no, it's not an invariant. It doesn't need to be.
E
You could use the ciphered PN from the old format — that was also now random, right? So I guess I'm just getting very sad about making a lot of rules about the CID when, actually, we have other entropy in the packet we could be exploiting, and all these weird rules which use a round trip. So —
E
I appreciate that, but I think this is lame. So, you know, I guess my point is: I don't think we'd miss it, if that's an argument. Yes.
E
Well, I mean, I think we now have a bunch of new complexity about how these get generated. It would be much easier to just say: if you're a server which wants to do Retry at all, generate new connection IDs and have them bounce around a lot. It's just that there's a lot of new additional complexity in the protocol to handle this case, which could be handled in other ways. So I'm not sure that —
E
B
B
E
Okay — and I guess I'm not sure where to go from here, but — what I want —
E
B
S
No, I mean, I think it's sort of related to this. Basically, right now we have this rule where, once you've bound to the destination connection ID — I assume D1, D2, D3 are different destination connection IDs, right? — so, once you're bound to the destination connection ID in the server-initial case, you stick with that connection ID, even if the server responds with retransmissions, or different connection IDs, or coalesced packets, or different connection —
S
— IDs. I wonder why we made the different decision that retries are special and they're allowed to change the connection ID — because if we bound to the original connection ID with retry, then this wouldn't be a problem at all, or mitigated at least. So you get one retry which decides your connection ID, but you can do multiple retries, and that's for the spoofing case. There was an explicit reason we allowed multiple retries: you know, TLS had the HelloRetryRequest.
S
A model of retry that allowed only one retry request was actually subject to the same — or a worse — attack, because TLS enforces that on you. So, two questions, I guess — no, those are the two questions. One was: why don't we just bind to the connection ID at the first one; and the second one was —
B
G
Ian Swett, Google. I think this is fine, and I think we discussed this at length as a proposal. I don't have a particular objection to this approach in practice, because I do realize this is only for the Initial packet; so as long as I only have to repeat, like, eight or sixteen bytes — whatever the magic eight or sixteen bytes of the Initial are — it's a special case and I can just throw that into a buffer. So I don't really have a strong opinion. I do understand —
G
— an attacker is the kind of situation where you're forced to change the connection ID. So in our case, I think we probably will never actually use multiple retries in our deployment — at least I don't expect to in the near future — so this is sort of a non-issue from an operational perspective. But aesthetically it can get a little ugly, and I kind of agree. So — but I don't really have a strong opinion; either would be fine. It's not causing me any pain.
G
K
Tommy Pauly, Apple. I agree very much with Ian that I can kind of live either way. This is a little bit convoluted-seeming, but it seems like a perfectly reasonable way of solving it; ekr's proposal also would be fine. The point I wanted to make was that — as I think I gave you as feedback during the hackathon — I think we need a reworking of the connection ID text in general, and what would help a lot for this is if we just had a clearer section explaining the best practices around connection IDs and the overall philosophy for them.
K
B
B
T
Yep — I'm fine with this proposal as well as with ekr's proposal. What I'm worried about is the spoofing: a middlebox could advertise that it does validation as a feature, and thereby add a round trip to every single QUIC connection. There were a couple of proposals for how to prevent that — basically by putting stuff into the handshake to validate that no man in the middle did a retry — and I think we should go with some of those solutions.
H
S
A
You're breaking up pretty badly; can you take it to Jabber, and we can channel you from there.
E
S
So, being able to use the tag assumes you have easy access to it. If you're using something like encryption offload, the QUIC server — or the QUIC client — that would have to validate it wouldn't have access to it very easily. The CID is something it has direct, easy access to, and I think it's the best solution.
E
S
E
That's true. It was pointed out to me separately that this is actually a consequence of the decision not to encrypt retry: if we had just encrypted retry, the generic solution would work. So I'm really sad if the only other answer for this is another non-generic solution that requires custom screwing around. So I agree — that solution is, unfortunately, better. So we had one; we prevented it.
A
B
A
E
B
B
Yeah, this was the proposal that I have — so, two options. We don't actually make any promises about successive handshakes or whatnot in the presence of an attacker that has the sort of capabilities we're talking about right here. Specifically, we don't prevent attacks during the very first phases of the handshake from an attacker that can see the ClientHello: it is possible for such an attacker to generate a ServerHello that a client will accept.
B
E
B
E
So, right — just to make sure we're all on the same page: the premise of the previous slide — I guess two slides back, sorry — is that the binding requirement for the DCID, taking the ID actually from the Initial, obviates any of the obvious off-path attacks. Correct?
B
V
D
I largely agree, obviously, with that sentiment, but I'd just note that this is a slightly different flavor of attack, because it allows a man in the middle to basically direct all traffic towards a particular server, potentially, because it's able to restrict the connection IDs. So —
B
This is something that came up in the conversation, and we concluded that an attacker does not need to bother doing this in particular — because if they wanted to go and do that, it would be far more efficient for them to act as a client and simply set the same connection ID on every request that they sent to the server. So I don't believe that to be a problem.
B
This is probably advice for the applicability/manageability documents: if you're running a server, maybe you shouldn't be using the connection ID in the initial packets from a client to determine routing — because in that case you could have a very large population of clients send all their packets to, you know, the one poor little instance sitting in the back end, which suddenly gets overwhelmed; and then they could do that to one, and then move on to the next one and kill that, and so on.
D
B
So it's an interesting point, and probably motivates the suggestion — I'd like to come back to that idea later, I think; we don't have time for that now, and I haven't looked at that. Yeah, okay. So, another thing on the list of PRs that are about to be merged — largely because I've gotten a lot of good feedback on it — is: what happens if you get a retry; what happens to your 0-RTT? And the proposal here is: well, yes, you can send 0-RTT again.
B
There was — well, TCP Fast Open didn't work that way, but you would still be able to send 0-RTT at that point. Version negotiation is fairly similar in this regard: it's unlikely to work, in the sense that, if you're changing QUIC versions, it may not be the case that your 0-RTT ticket is any good for whatever you do in the new version — but there's nothing inherently prohibiting it. The rationale for this was: well —
B
It's unlikely that the keys you're using to protect 0-RTT packets will change as a result of having received a retry and effectively starting again. And so there's a requirement in here that, when you get the retry and another Initial starts more 0-RTT, you don't reset the packet numbers back to zero — because two-time pads, they're awesome. And so that's the primary reason. There was another reason — I forget now what it was — but the two-time-pad thing was the primary concern here, though.
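The two-time-pad concern here can be made concrete with a small sketch. QUIC-style AEAD packet protection forms the nonce by XORing the static IV with the packet number, so reusing a packet number under the same key reuses a nonce. This is an illustrative simplification, not the draft's exact key schedule, and the IV value below is made up:

```python
# Sketch: QUIC-style AEAD nonces are IV XOR packet-number, so a client
# that resets its 0-RTT packet numbers after a Retry (same keys) would
# encrypt two different plaintexts under the same nonce — a two-time pad.

def nonce(iv: bytes, packet_number: int) -> bytes:
    """XOR the packet number into the low-order bytes of the IV."""
    pn = packet_number.to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, pn))

iv = bytes.fromhex("6b26114b9cba2b83eb77485830")  # hypothetical 13-byte IV

before_retry = nonce(iv, 0)        # first 0-RTT packet, pre-Retry
after_retry_reset = nonce(iv, 0)   # wrongly restarting at 0 post-Retry
after_retry_continue = nonce(iv, 2)  # continuing the sequence instead

assert before_retry == after_retry_reset       # nonce reused: unsafe
assert before_retry != after_retry_continue    # continuing is safe
```

Continuing the packet number sequence after a Retry, as the proposal requires, keeps every nonce distinct for the lifetime of the key.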
E
B
S
What's the — yeah, so I was about to bring that up. But could we actually say that, if you get a retry packet, then the server has officially said "I'm not committing to state anymore"? It would be very convenient to say clients should retransmit 0-RTT data immediately — very clear semantics in that regard. The —
B
E
I mean, we also make some noises about how this is conceptually a new connection, and I wonder if the right noises are: you should ditch all the TLS state, generate a new ClientHello — and then you wouldn't have to screw around with the packet numbers. That matches the "new connection" analogy, right: it uses a new connection.
E
E
This packet-number-reset issue is good, but I just worry it's subtle, right? I don't think it's a disaster, but it's the kind of thing you could easily get wrong — and if you just screwed it up... sorry, right. So, I mean, a different metaphor for this would be saying "this is a new connection", as opposed to "you got a reset".
W
Yeah — so, Pat McManus. I think raising the issue was really great, because I had just allowed this in my code; because, you know, it's not actually zero-RTT anymore after the retry — there's been a round trip — and just intuitively this made no sense. So it's good to clarify, and I agree it can actually be done, because — you know, now call it "one less RTT" or something — that's still a win. And I like the new-packet-number rule, because that is consistent with the overall philosophy that one does not retransmit, right.
B
E
Right. I think having people resend the Initial with zero and then other things with, you know, 52 — that's not going to be okay; at the very least it's going to cause problems, it's going to cause confusion. So I think either we should go with the discussed approach — we resend the Initial with the next sequential packet number — or we should do, like I said, a new ClientHello.
H
N
Mike Bishop. The other reason that we didn't reset them — and something for server-side implementers to be aware of — was that, if the 0-RTT packets were either buffered or delayed, they can arrive after your new Initial, or be processed after your new Initial, and they are still valid for the connection. So you'd better not change the data when you resend them.
B
All right, thanks. So, yeah.
B
Same thing — a different angle on the same question: if you get a 0-RTT packet and you haven't seen an Initial, what do you do with it? Currently we have a write-up that basically says: if you get a 0-RTT packet, you must not generate a retry in response to it; you're only allowed to send a retry in response to an Initial. So the support for doing this is pretty weak.
B
There aren't really too many reasons in either direction for this one; but basically, rather than shrug our shoulders on it, we'll say SHOULD NOT, or MAY, or something along those lines. Basically, because there's no reason not to do this sort of thing, and it may in some cases improve performance: if a server receives a 0-RTT packet and wants to keep things moving along, it can do address validation immediately, rather than wait for the possibly lost Initial packet that the client sent — because that can take time, far too much time.
A
B
And re-key — or something near that for a name. So this is another one that I think is relatively urgent, although not hugely important. We've had a lot of discussion about how to manage keys. One of the nice things about the updated design is that it's pretty crisp about when keys are used, what packets are protected with what keys, and what cryptographic handshake messages are protected with which keys. But we have this sort of fuzzy bit when it comes to getting rid of those keys.
B
B
B
We never really know — this is the halting problem, in the sense that you never really know that the handshake is done and that the other side is done with the handshake. So the simple solution for this is to treat each packet number space separately, and you say that a packet number space is done when the read and write keys for the next space are ready. So, for handshake keys —
B
When you have your 1-RTT keys and you're using them, then you're done with the handshake keys — and there's a little bit of nuance to that, because of the arrangement of the handshake. But at that point you can just set a timer, and that timer runs for, I don't know, a couple of RTOs, a couple of round trips. It doesn't really matter; it could be a pretty long time.
B
In fact, we've sort of established — because of the way that the TLS key separation works — that you could potentially just keep the keys forever. It's just overhead; it's memory; they're not actually a threat to anything at that point. But still: you set a timer, and during the time that you're maintaining that key, you'll be able to receive acknowledgments, resend CRYPTO frames as necessary, and send acknowledgments. And so that seems pretty reasonable. Once you've gotten rid of the keys —
B
B
So the way to think about it is: you look at each packet number space — and there's also some consideration here for 0-RTT, but I've got the two critical ones here. At the point that the client receives the ServerHello, and at the point that the server sends the ServerHello, they're basically finished sending crypto frames to each other, and that's the point where they set a timer; and they can set that timer for a number of round trips, and we're just maintaining that timer —
B
— for as long as we're willing to effectively continue acknowledging things, essentially, or repairing the cryptographic handshake state. The same applies — I mean, the direction of messages is different, but the same basic procedure applies for handshake messages: when you see the last one, that's when you set the timer.
B
— to signal that; and in that case you can stop sending the crypto frames. Now, of course, what will happen is that there's the potential for ACKs to be lost, and you'll have acknowledgments in flight, and you might potentially want to do some more acknowledging at that point. But you can actually safely drop the ACK frames, if the other side agrees that this is acceptable. There's a couple of corner cases in this, and I —
B
— don't think we need to worry about that. Someone's going to jump up and tell me that's wrong, but I'm not planning to document that optimization, by the way. I think people who want to embark upon these sorts of optimizations will have to think through the consequences of all of this and be absolutely certain they're not going to break the other side.
B
There are models that you can use for this that require the other side to be aware of what you're doing — because what will happen is that it will think it needs to retransmit crypto frames, and you don't need them; it doesn't need to send them, but it's resending them and expecting acknowledgments. And if you're dropping those packets on the floor, you could end up with crypto frames being sent for a considerable amount of time after the handshake is well done, and it's just wasteful at that point.
B
B
A number of people have suggested that it would be nice to have a signal that the handshake is done — a positive signal, once you get the 1-RTT keys — maybe from each side, but I'm thinking probably from the server, although I've shown it in both directions here — that basically says: "I'm done with the handshake; I have everything I need in order to proceed with this connection." And once you've received that message, you can stop sending any packets in the previous packet number spaces.
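The timer-based cleanup being discussed can be sketched as a small state machine: once the next packet number space's keys are in use, keep the old keys for a fixed linger period so that late ACKs and crypto retransmissions can still be handled, then drop them. The 3×RTT linger value below is an assumption for illustration, not a number from the draft:

```python
import time

class KeyDiscardTimer:
    """Sketch of timer-based key discard for a retired packet number space.

    Arm the timer when the next space's keys are ready; after the linger
    period, the old space's keys may be dropped.
    """

    def __init__(self, rtt: float, now=time.monotonic):
        self.linger = 3 * rtt      # illustrative: a few round trips
        self.now = now
        self.deadline = None       # not armed until the next space is ready

    def next_space_ready(self):
        """Call when read+write keys for the next space are installed."""
        if self.deadline is None:
            self.deadline = self.now() + self.linger

    def may_discard_old_keys(self) -> bool:
        return self.deadline is not None and self.now() >= self.deadline

# Deterministic usage with a fake clock:
t = [0.0]
timer = KeyDiscardTimer(rtt=0.1, now=lambda: t[0])
assert not timer.may_discard_old_keys()   # nothing armed yet
timer.next_space_ready()                  # e.g. 1-RTT keys installed
assert not timer.may_discard_old_keys()   # still inside the linger window
t[0] = 0.5
assert timer.may_discard_old_keys()       # linger elapsed: drop old keys
```

A "handshake done" signal, as suggested in the discussion, would simply let an endpoint skip the linger and discard immediately.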
G
B
G
B
And so — that's an interesting observation. Yes, it doesn't perfectly solve the problem; but by the point you receive it, you're an entire round trip after the 0-RTT keys have been done... Actually, it's not quite that good, I don't think, because the handshake-done is sent in close proximity to the last of the 0-RTT messages. So I don't think it's as good as you think it is.
B
G
E
Because, in order to allow live streaming of the 0-RTT data, the client continues sending 0-RTT until it has processed the entire server first flight — and so basically the last 0-RTT packet is sent, you know, milliseconds before that. — Yeah, yeah. So, I mean, now you can even do it.
B
Back in the soup. So this is my proposal: document that you might want to clean up the keys, suggest that you might want to use a timer for that purpose, and leave the rest of the optimizations for those people who want to have fun debugging thorny key-management issues. I don't care.
N
B
X
David Schinazi, Apple. I just want to say that I really, really agree with this mentality. There are already too many things in the spec that sound a little bit too smart sometimes, and as an implementer reading it for the first time, you go "wait, what is going on?" So, if people want to go crazy and optimize something, then maybe document it on the side — that's fine — but keep it out of the main spec.
O
S
E
E
Yeah — so your proposal is that we don't actually say... Do you want to have, like, "here is a set of timer rules which you could follow", or just "you may wish to use a timer, and this is the time at which you should be setting the timer, if you were to set it"? — Yeah.
B
W
Can I try an even simpler version of that? I mean, if infinite is okay, we might just need some essentially non-normative text that acknowledges, you know, there is some point at which this happens, and you may or may not make use of that fact. I don't think we need to set timer values in particular. — I'm not going to suggest a specific value. No, no.
B
This is one that will never die — oh, that's right. All right, so Mikkel made an interesting observation. It turns out stateless reset is, by design, indistinguishable from a QUIC packet: you can't decrypt it; no one can decrypt it — that's kind of the point. But it's also the case that you send a stateless reset when you receive a packet that you can't decrypt, and both clients and servers can do this. It doesn't actually take a client and a server to produce this problem, because you can certainly point two servers at each other.
B
B
I'm not sure that I want this problem. So we've discussed a couple of simple solutions, and it's not really that dire a problem when you think about it this way: it turns out stateless reset is pretty small — it's variable-length, for various reasons, but it's pretty small — and so it'll be smaller than most packets that you're going to receive, certainly most packets you receive that you don't recognize.
B
B
A
B
No double negative: only send it if it is smaller than the packet that was received — thank you, Mark. There's also a slightly more complex solution that says: well, maybe it's nice if you send a stateless reset in response to these packets. Because if someone is doing, say — telnet, for instance — and the first packet that I sent after a long idle period is a Ctrl-C, it's a single character, and that's going to be a small packet, and your connection ID is small —
B
There's a pretty good chance that this incoming packet will be relatively small, and wouldn't it be nice if it had at least some chance of being able to benefit from the mitigation that we have that will kill the connection off. So you have some sort of probabilistic dropping, and as long as the probability of dropping is not zero, everything should ultimately resolve. Yeah, you'll get a couple of legs on that —
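The two mitigations just described — never answer with a reset that isn't strictly smaller than the packet it answers, plus probabilistic dropping — can be combined in a tiny decision function. The drop probability and the reset length below are illustrative values, not numbers from the draft:

```python
import random

def should_send_stateless_reset(incoming_len: int, reset_len: int,
                                drop_prob: float = 0.5,
                                rng=random.random) -> bool:
    """Sketch of the stateless-reset send decision discussed above.

    Rule 1: the reset must be strictly smaller than the packet it answers,
    so two endpoints cannot ping-pong equal-sized resets forever.
    Rule 2: additionally drop a fraction of would-be resets so any
    residual loop decays (drop_prob is an illustrative choice).
    """
    if reset_len >= incoming_len:
        return False                 # would not shrink: stay silent
    return rng() >= drop_prob        # probabilistic throttle

# A tiny incoming packet (e.g. a 30-byte datagram) gets no reset if our
# reset would be 41 bytes; a full-size packet may get one:
assert not should_send_stateless_reset(incoming_len=30, reset_len=41)
assert not should_send_stateless_reset(incoming_len=41, reset_len=41)
assert should_send_stateless_reset(1200, 41, rng=lambda: 0.9)
assert not should_send_stateless_reset(1200, 41, rng=lambda: 0.1)
```

Injecting a deterministic `rng` makes the probabilistic branch testable; a real endpoint would just use its system RNG.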
B
— you know, a couple of bounces, in the sort of perverse scenario where both sides have lost state and have decided to statelessly reset each other; but it'll ultimately resolve itself. And so I think that's a reasonable solution, and that's all I've got on this deck. Did anyone have any questions or suggestions or disagreements? I wanted to get this in front of people who've got a little experience with this one.
Just one.
Y
Y
I
So — Cory Perez. I mean, there are networks that drop packets due to AQM and other things, etc., and there are networks that duplicate packets for odd reasons. So the question is: do you want to work everywhere on the internet, or do you want to work on just one part? So it's a trade-off for you. Numbers won't help you, I'm afraid, because there's always going to be this tail end of stuff that is really weird. — Yeah. Well, if the —
B
I
X
X
A
Q
Eric Nygren. I think the key thing is making sure that whatever we have is very deterministic, so that this ends fast — because even if they're the same size, an attacker can induce this ping-ponging by injecting something so that the two endpoints start a ping-pong game back and forth. It doesn't matter that they're small packets, not increasing in size; it's the ping-pong itself — they just keep injecting things, and you end up with a growing set of stuff ping-ponging.
B
Q
And I guess it may be worth thinking through: are there any cases where you can trick things — where various formats might make it look, to the two endpoints, that they're different? So, for example, if there's some encapsulation or translator in between that is adding a little bit of stuff onto them, so that between those two endpoints you get something that violates that property. So maybe worth thinking of that.
K
I understand that the stateless reset is stateless, but can an endpoint carry some decaying state of the number of resets that it has been sending, and essentially just have some plausibility check? You could very easily say that if, within the past ten seconds, I have sent a ridiculous number of resets, I've detected that there's something ping-ponging, and just stop doing it for an hour — just back off — and actually have that drop rate be relative to the amount of ping-ponging you recognize as possible. — We do, actually. — Okay, something along those lines. Yeah.
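The "decaying state" idea just raised can be sketched as a simple sliding-window throttle: count recent resets and refuse to send more once a threshold is crossed. The window and limit values are made up for illustration:

```python
class ResetThrottle:
    """Sketch of rate-limiting stateless resets to damp reset storms.

    Keeps timestamps of recently sent resets; once `limit` resets have
    gone out within `window` seconds, further resets are suppressed
    until old entries age out. Values are illustrative, not from the draft.
    """

    def __init__(self, limit: int = 100, window: float = 10.0):
        self.limit = limit
        self.window = window
        self.sent = []            # timestamps of recent resets

    def allow(self, now: float) -> bool:
        # Expire entries that have left the sliding window.
        self.sent = [t for t in self.sent if now - t < self.window]
        if len(self.sent) >= self.limit:
            return False          # plausibly a reset storm: back off
        self.sent.append(now)
        return True

th = ResetThrottle(limit=3, window=10.0)
assert [th.allow(i) for i in range(4)] == [True, True, True, False]
assert th.allow(20.0)             # old entries expired: permitted again
```

An exponential backoff (as one implementer describes below with a "time-wait list") would replace the fixed window with a growing suppression interval, but the bookkeeping shape is the same.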
Z
Okay, come on — next. Okay, so — Jonathan. I mean, my first point is: does the condition actually have to be that the packet is, you know, smaller than or equal to the reset? Could it just be: if it's exactly the size of a stateless reset? Obviously, something smaller can't be causing the ping-pong either. — They're variable-sized. — They are variable-sized, yeah. All right. And the other point I was going to make is, in the case we were describing of the, you know —
Z
— the tail: I mean, even if your drop probability is quite high, like 50 percent, you're still almost certainly going to get a stateless reset through within three or four RTTs. So yeah, I think, if you want to do it probabilistically, the probability can safely be quite high, to cause a very fast decay — and in most reset cases you'll get recovery within two or three retransmissions, yeah.
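The "quite high drop probability still resolves fast" claim is just geometric decay, and is easy to check numerically:

```python
# Quick check of the claim above: with drop probability p, the chance that
# a reset ping-pong survives k exchanges undropped is (1 - p)**k, so even
# p = 0.5 kills the loop within a few round trips with high probability.

def survival_probability(drop_prob: float, exchanges: int) -> float:
    """Probability that no reset in `exchanges` consecutive tries is dropped."""
    return (1.0 - drop_prob) ** exchanges

assert survival_probability(0.5, 4) == 0.0625    # ~6% after 4 exchanges
assert survival_probability(0.5, 10) < 0.001     # negligible after 10
```

So a legitimate peer that keeps retransmitting will still elicit a reset within two or three tries almost always, while an accidental loop dies out quickly.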
H
E
E
The second point is: you must do some bookkeeping anyway of how many stateless resets you've sent, because otherwise you end up sending, like, craploads of stateless resets for the enormous flood of packets that are coming in — and I believe three implementations already do this, right? So, since this is just bookkeeping anyway, the suggestion is eminently reasonable. Third point — can you go to this slide? This really seemed like it might be —
E
— okay, because if you're only getting really small packets, there's not that much in the flow, and any real application doing a refined shutdown is going to send you big packets. So, going back to principle one — or principle zero, depending on what kind of computer scientist you are — that seems like it'd probably be just fine. And my first point is: I concur with, I think, Nick's point about randomization — randomness being bad — and, in that vein, it's possible —
G
Go ahead. — Yes, I was basically going to say what was said before me, which is: we use something we call a time-wait list and do exponential backoff for stateless resets, to try to avoid situations like this. The other point is on packet duplication: we have seen this before in real networks. It basically never happens — unless it happens all the time; so you have, like, one broken switch that just basically sends every —
G
— packet twice, like that. So it's a very bimodal thing: everything's fine, and then all of your servers are DDoSing each other. And then we also had to solve this exact problem for the client Initial a long time back, because the keys weren't asymmetric — there was no way to tell the difference between a server Initial and a client one — and in that case we actually decremented the TTL.
G
So we would make sure the TTL decreased as the client Initial went back and forth, and I wanted to add that as one potential mitigation: you can copy the TTL in, and that does ensure that there's no infinite loop. I don't know if that leaks much information, but it's a very simple solution and it totally works. — So, hi — Brian Trammell. I was basically going to come up here to say what he just said. There's another thing that you could do — I really like the simple solution.
G
There's another thing that you could do with the simple solution, by tweaking the size of the stateless reset: not just saying "don't send it if it's smaller than the packet that was received", but saying that the stateless reset must always be at most the received packet size minus something — and then essentially the TTL itself is encoded in the packet size. That probably requires a little bit more rethinking of how stateless reset works. — But wait, what are the packets that are smaller than a stateless reset?
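The "encode a TTL in the packet size" suggestion can be sketched in a few lines: if every reset must be strictly shorter than the packet it answers, any reset-for-reset exchange shrinks monotonically and must terminate at some floor. The floor and shrink values below are assumptions for illustration:

```python
def reset_length_for(incoming_len: int, shrink: int = 1, min_len: int = 22):
    """Sketch of the shrinking-reset idea: a stateless reset must be at
    least `shrink` bytes shorter than the packet it answers, and below
    `min_len` no reset is sent at all. Values are illustrative."""
    if incoming_len - shrink < min_len:
        return None               # too small to answer: the loop ends here
    return incoming_len - shrink

# A perverse reset ping-pong strictly shrinks and terminates:
size, hops = 40, 0
while (nxt := reset_length_for(size)) is not None:
    size, hops = nxt, hops + 1
assert hops == 18 and size == 22  # loop bounded by the size "TTL"
```

The packet size acts as a hop count: each bounce spends one unit, so the worst-case loop length is bounded by the initial packet size minus the floor.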
G
AA
AA
— you know, removing state in the software based on the termination of a connection. So it would definitely be good to have this be deterministic, because otherwise it leads the software to basically go through a huge number of flows; in concept it's simple, but the scale is quite huge. So it's not very easy to say "just run a timer and delete flows that have expired".
W
So — for a long time, a significant part of me has held the thought that stateless reset is something we should jettison altogether, because we have a nice, beautifully authenticated protocol, and, you know, allowing someone that's clearly not participating in that exchange to influence your behavior just feels wrong. I've been convinced that operationally you can't work around this at some point, but I think that does argue for the simplest possible solution here.
D
B
A
N
All right, good. I think we've all had a lot of intuition about how we think they ought to work, and it turns out that trying to specify that gets kind of hairy. So where we started out, in pre-packet-number-encryption land, was that each connection ID was in a sequence and had a specified packet number gap — to help avoid having the packet number give you linkability — and then they have a stateless reset token. The problem with that is you get head-of-line blocking.
N
Because if you miss one of them, you can't use anything further down the list, because you have to know the cumulative packet number gaps. And once we had probing and connection migration, it gets really weird trying to sort out: well, I use this other one with the packet number gap on the other path, but what do I do here on the main path, where I haven't changed connection IDs?
N
So once we had packet number encryption, we said: great, we can get rid of all of that — and packet numbers are just... this cloud. Not packet numbers: our connection IDs are this cloud of things that we've received, and we can use any of them, all right — we can use any of them any time we want. Head-of-line blocking isn't an issue. Great, this is simple. But to break linkability —
N
If your peer changes, then you also need to change — and exactly how do you figure out that the peer's change was in response to your change, and not the peer changing at the same time as you needed to change? I started trying to write down the rule for how you would figure that out, and I couldn't. If somebody implemented it and actually did figure out a rule, that would be nice; but what we came up with was: we actually do need that sequence number.
N
E
It's not clear to me that when your peer changes CID, you need to change — a peer might change CIDs just for fun. — Well, you do need to change your CID; the time you change your CID is when a path has changed, and so the question is: is the peer changing CID the only signal that the path has changed? — It's not clear to me. — That's true! So —
N
The spec recommends that you change your connection ID when you think there might have been a NAT rebinding — like, you've been quiescent for a while. — Sure. And if you want any kind of unlinkability across those... if your peer just responds, with the connection ID you were using, to your new point of connection — no.
E
I understand the point, but why bother? Say you have A behind a NAT and B not behind a NAT, and A thinks it might have had a NAT rebinding, and so it uses a new CID. But B knows it didn't have a NAT rebinding, and so B knows that the path changed — and so B responds... I mean, so in that case you don't have to change your CID.
E
And I appreciate that, but what I'm saying is: if you receive a new connection ID from someone whose address has not changed, then it's not clear to me that you actually have to change your own connection ID — and so maybe you avoid this lock-step. I've lost that condition. I —
B
E
But I guess what we're talking about is inconvenient, so — you know, that's one — I guess I'm trying to figure out how much this is a historical talk and how much this is motivation for a particular design. If it's a historical talk, I'll sit down and shut up; but if this is driving us towards some particular design, as a motivation, then I'm not going to shut up right away. — Okay. So —
E
E
N
E
G
E
I guess — so, one thing — I mean, clearly the design in half of my head, which may well be wrong — like, why can't you just state the rule for updating as: you only update when you believe that your peer has sent to you from a new transport address — sorry, a different transport address than the one you're currently receiving the CID on?
D
No — you could very easily assume that all of these are associated with path changes, right? And I think the problems — whatever you're going to say — are still going to hold true: for each of these events where the connection ID is changing, you assume that the path — the network — is also changing.
D
E
I know, but the reason you're ratcheting to infinity is because everybody's ping-ponging as the path changes, right? What I'm saying is, imagine the case where the addresses are completely stable, and one side changes because, as Mike says, he thinks he may have had a NAT rebinding — he's been offline for 45 seconds, he might have had a NAT rebinding event — and so he sends a new CID. But his address hasn't changed.
N
That is an interesting suggestion. Let me talk about the other issues that we have with this model; it may be that going to that would solve more of them, and that's one of the solutions that can be on the table. So there are some advantages to having a sequence number. That's what we have in draft 13, which says that there is a sequence number for each connection ID that gets issued by your peer, and that makes the behavior pretty easy to specify.
N
If you are starting a new path, then you have to use a higher sequence number than you have ever used before for this connection, and on each path you will never send a packet where the connection ID's sequence number is less than the highest you've ever sent or received. That forces the ratcheting behavior on a per-path basis.
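The per-path ratcheting rule just described could be sketched roughly as follows. This is only an illustration of the two rules from the discussion — the class and method names are invented, not from the draft or any implementation:

```python
# Sketch of the per-path CID sequence-number ratchet (illustrative names):
#  - a new path must start above any sequence number ever used, and
#  - on a given path, the sequence number may never regress.
class CidRatchet:
    def __init__(self):
        self.highest_ever = -1      # highest sequence number used on any path
        self.per_path_floor = {}    # path -> highest seq sent or received there

    def start_path(self, path, seq):
        # Rule 1: a new path needs a fresh, strictly higher sequence number.
        if seq <= self.highest_ever:
            raise ValueError("new path must use a higher sequence number")
        self.highest_ever = seq
        self.per_path_floor[path] = seq

    def can_send(self, path, seq):
        # Rule 2: never send below the highest seen on this path.
        return seq >= self.per_path_floor.get(path, 0)

    def on_use(self, path, seq):
        # Ratchet forward when a higher CID is sent or received on the path.
        if not self.can_send(path, seq):
            raise ValueError("sequence number regressed on this path")
        self.per_path_floor[path] = max(self.per_path_floor.get(path, 0), seq)
        self.highest_ever = max(self.highest_ever, seq)
```

With this, rolling forward to C on the main path leaves the probe path free to stay on B, matching the example that follows.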
N
So, to give you a quick example, I'm using A, B, and C so you can keep track of the sequence numbers. Multiple paths are hard to draw, so forgive my bad drawings. We have a connection; both sides are using connection ID A, which is a different value in each direction, and they're also doing some probes on a side path to get ready for a potential migration. They use B for that.
N
The fact that you probed doesn't change that you're using A on the main path, and if on the main path you change to C, the other side moves to C; that doesn't change the other path — you can stay on B. If the other side tries to roll forward again to D, but for some reason you don't have D, you just go to the next one that you do have, and then the peer has to match you. That way you have consistent rules for how you move through the connection IDs that have been issued.
N
Let's say you're using A on the main path and you have lots of other paths you want to probe, so I'll use B, C, and D, because I just got those from the peer, but they all get dropped because those paths don't work. Now one side says, I have no spare CIDs; the other side says, I don't need to give him any more, he's got three outstanding. So.
U
F
N
Also, if the only packet that you ever sent with a given connection ID gets dropped and eventually you forget about it — but it's not actually dropped, it's just severely delayed — that can trigger a stateless reset. So there also needs to be some kind of coordination between the peers about when they're no longer going to consider a particular connection ID to be associated with this connection, or you get into some weird states. A common, easier place for this to happen would be:
N
N
But if you can't coordinate when it's safe to forget a connection ID and remove it from state, then you can trigger stateless resets, and there are cases when you might actually want to expire a whole swath of CIDs. For example, let's say it's encrypted and you're doing key rotation and you're about to rotate again: you have to make sure that everything that was done with not just the previous key phase, but the key phase before that, will never ever be seen again for the life of this connection.
N
D
Retiring a connection ID — I don't think that works with reordering, which can cause race conditions. You could send a stateless reset, and then the data with that connection ID that was sent gets dropped in the network and retransmitted, but before that the stateless reset is sent — all kinds of things.
N
B
AB
B
It would be the sum of the ones that they would be willing to remember, plus those that might be in flight, and that would work, I think, much more simply than what you have. The only thing you would then need, because we have this reciprocation requirement: if I sent you a connection ID with an ID of 10, you would have to make sure that you sent me all the connection IDs up to 10 in response. I think that'd be a simpler design.
N
B
B
I don't think they have to agree, actually — not entirely. The only thing that's important there is to remember that if you say that you support 10 different connection IDs, you have to count the ones that you sent but haven't had acknowledged yet in that set. Otherwise, you have to assume that the other side remembers 10, plus the ones you haven't had acknowledged yet.
E
This is pretty complicated and made my hair fall out. First of all, I think, given that we have other proposals here, trying to merge this at this moment is going to be very problematic. And frankly, I think we need something like a Tamarin model of this, to figure out whether or not it's going to break, because I'm having a lot of trouble reasoning about it, which is what really worries me.
E
[unintelligible] With that said, I think this would be the first time we declared the semantics of some piece of the protocol dependent on receiving an acknowledgment from the peer, and that makes me kind of sad. I normally think of packets going into some bucket, where they just gradually migrate to their side, and then acknowledgments remove things from the bucket; having that fed back upward, so that I have to do some other action, scares me. I think this would be the first time we've done it; if it's not, we should take those things out, and if it is, we probably shouldn't start.
E
A
X
[unintelligible] from Apple. I'd echo my co-worker's point about this being pretty complicated and starting to lose hair. Also, most clients, in my understanding — correct me if I'm wrong — will probably be sending zero-length connection IDs. So it sounds like we're building a lot of complexity that won't necessarily be used; just something to consider. Please keep it simple.
AB
N
AB
A
I was thinking the same thing, yeah — and then perhaps get together and talk about it in New York. Is that a reasonable timeframe? Okay, show of hands: who's interested in participating in such a thing? The usual suspects, okay. Mike, do you want to take charge of collecting those folks together? Sure? Okay, great. So if you're interested in helping out with that design team, talk to Mike, and we'll expect to hear back before New York. Okay.
N
And then the last slide is just to say that, in addition to the things that could actually cause your connection to break, there are also some annoyances involved in this. The spec currently says the CID from the handshake has a sequence number of negative one; some people don't like negative numbers.
N
The server's preferred address also gives you a connection ID, and it doesn't actually say what sequence number that is. And the handshake connection ID doesn't have a stateless reset token for the client, and if you want to be able to define one, how do you do that? So these are all other, less connection-critical CID issues that have been discussed on the list. Okay.
N
A
A
N
Right, so a quick overview of notable changes since London. I will try to move through these at a fairly rapid pace so we can get to recovery; if you've read the drafts, this should not be news to you. The HTTP frames no longer define flags on every frame type — priority was the only thing that was using it, so now it's just part of the priority definition. Yay, saving bytes. HTTP/2 uses zombie streams that were never opened and are implicitly closed to maintain the priority tree.
N
That has some unhappy implications for server memory consumption, and also QUIC can't implicitly close streams. So HTTP over QUIC now has a priority placeholder explicitly defined, and along with that, we now have to account for the fact that 0, which used to mean the root of the tree, is actually a request stream, so we have to have something else to handle that. This lets
N
you do more aggressive pruning on your server's priority state, because you can keep track of which nodes are inactive, active, or placeholders. Placeholders never get removed; you can trim out the inactive ones with some fairly simple rules — you can take out any branches that are all inactive nodes, and you can collapse down any interior regions that are all inactive nodes — and the client can predict that you're going to do this, so it helps the server and the client have a consistent view of the priority tree from both sides.
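The two pruning rules mentioned here might look like the following sketch. The tree structure and names are invented for illustration — only the rules (keep placeholders, drop all-inactive branches, collapse inactive interior nodes) come from the discussion:

```python
# Sketch of priority-tree pruning: placeholders are kept, subtrees made
# entirely of inactive nodes are dropped, and inactive interior nodes are
# collapsed by reparenting their children. (Illustrative structure only.)
class Node:
    def __init__(self, name, state, children=None):
        self.name = name
        self.state = state              # "active", "inactive", "placeholder"
        self.children = children or []

def prune(node):
    """Return the pruned subtree, or None if the branch can be dropped."""
    kept = [c for c in (prune(ch) for ch in node.children) if c is not None]
    flat = []
    for c in kept:
        if c.state == "inactive" and c.children:
            flat.extend(c.children)     # collapse an inactive interior node
        else:
            flat.append(c)
    node.children = flat
    if node.state == "inactive" and not node.children:
        return None                     # all-inactive branch: drop it
    return node
```

Because the rules are deterministic, a client can run the same pruning and predict the server's remaining tree, which is the consistency property described above.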
N
The other big change in HTTP is that we have now put a type byte on unidirectional streams. They are essentially self-describing: the first byte tells you what type of stream to expect it to be, and this is a point of extensibility. We have four types defined currently — two for QPACK, and then in the core spec one is for the control stream and the other one is for push. If you don't understand it, abandon the stream, which may involve the STOP_SENDING frame at the QUIC layer. And in terms of recommendations:
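The first-byte dispatch just described could be sketched as below. The codepoint values here are placeholders, not necessarily the draft's actual assignments:

```python
# Hedged sketch of unidirectional-stream dispatch: the first byte names
# the stream type; an unknown type means abandoning the stream (which at
# the QUIC layer may mean sending STOP_SENDING). Values are illustrative.
CONTROL, PUSH, QPACK_ENCODER, QPACK_DECODER = 0x00, 0x01, 0x02, 0x03

def dispatch_uni_stream(first_byte):
    """Return a handler name for a known stream type, or None to abandon."""
    handlers = {
        CONTROL: "control",
        PUSH: "push",
        QPACK_ENCODER: "qpack_encoder",
        QPACK_DECODER: "qpack_decoder",
    }
    return handlers.get(first_byte)     # unknown type -> None -> abandon
```

Returning None for unknown types is what makes this a point of extensibility: new stream types can be deployed without breaking peers that don't understand them.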
N
For extensions: if it's data that develops over time, like the QPACK table, use a stream; if it's data that you have in your hands all at once, go ahead and define a frame like you did in h2. This gets basically the same semantics as new frame types. When we discussed it, the hum was roughly split between "yes, we want this" and "we're not sure yet"; we had a follow-up discussion on the list that was mostly positive.
N
There are some drawbacks that we know come along with this. Debugging without tools gets a little bit harder, because now you can't remember which stream ID is the control stream; you can't look at the unidirectional stream and say, oh, of course, that's push. But considering we're talking about an encrypted binary protocol, I'm really impressed if you can debug it without tools anyway. And there's the slightly annoying state that if you receive data on a stream out of order, it becomes open.
N
This has an interesting implication for the structure of the document, which is that push is now more or less separable. Push streams are just another unidirectional stream type that you don't have to support — and you're never going to get one anyway if you don't send a MAX_PUSH_ID greater than zero. You don't have to understand MAX_PUSH_ID frames or generate them if you don't support push. Priority explicitly has push as something that can be prioritized, but if there are never any push IDs, you don't have to care about that combination of bits.
N
N
This is, I think, just left over — we missed something. For 0-RTT, QUIC says: if the server accepts 0-RTT data, the server must not reduce any limits or alter any values that might cause the client to accidentally violate its settings when it finally gets the transport parameters. That's very sensible. HTTP over QUIC actually has what we decided way back in the Tokyo interim, and then later decided was a bad idea in the transport, which says the server will permit the client to violate its settings temporarily
N
until it knows the real settings, and then the client has to retroactively comply. That gets really ugly with things like QPACK, where you can remember that the allowed table size used to be 64K. So you say, all right, I'm going to use 56K of that, stuff in a 32K cookie, and send a request using that cookie — and then you see the SETTINGS frame and find out you're allowed 4K.
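The conflict being described is just a comparison between what the client already committed to under the remembered limit and what the real SETTINGS allow. A minimal sketch, with invented names:

```python
# Illustrative check for the 0-RTT settings problem above: the client
# acted on a remembered limit, then the server's actual SETTINGS arrive
# with a smaller one, and the existing usage may no longer fit.
def zero_rtt_settings_ok(remembered_limit, used, actual_limit):
    """True if what the client committed to under the remembered limit
    still fits under the server's actual limit."""
    assert used <= remembered_limit     # client honored what it remembered
    return used <= actual_limit
```

With the remembered 64K table, 56K used, and an actual limit of 4K, this check fails, which is exactly the "what do you do at that point?" situation that follows.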
N
What do you do at that point? You can't reduce the table without evicting the value for a stream that's already in flight, and you could kill the request, but then you kind of missed the point of 0-RTT. I think the right solution here is that we got it right in the transport and we just missed updating the HTTP doc, but that has some architectural implications, which means, if we were to make HTTP match the transport's requirements,
N
now you have to ask HTTP about a particular 0-RTT ticket, whether you're allowed to accept 0-RTT on that, and that is also kind of painful. So I think the question to the working group is: which type of ugly do we prefer? And I'll note that the server already has to dig out the old HTTP settings to figure out if the client is violating the old ones, and to distinguish between malicious and confused.
E
Eric Rescorla. I'd fight the premise: QUIC is tight integration between the layers, and so while it is disgusting, it seems like that's the disgusting we decided on. I know, I know — he thinks everything should be in the bottom — but our philosophy, I think, is to have tight integration, and so when we discover a new point of that integration...
E
S
N
A
Okay, thank you, Mike; you can leave the pink box. So it's 11:43. We've got recovery, we want to talk about forward planning, and if we have time, Marcus has some updates on the spin bit. I don't know if we're going to get to Marcus, but we'll try; it depends on how much people want to chat. Can you do five minutes?
G
So there are a few editorial issues open for recovery, but I tried to pick out some of the non-editorial ones that I would like to resolve at some point before v1. They don't need to be resolved today, but at least start thinking about them. Next slide. So this one, hopefully, we can resolve right now. The early retransmit threshold is currently one quarter, and the doc actually follows, I believe, at least the initial Linux implementation — I don't know if they're still using one quarter for early retransmit.
G
On the other hand, in other spots in the doc we recommend 1/8 of an RTT as a reordering threshold. So the inconsistency is (a) ugly, and (b) I have data in MAPRG tomorrow that indicates 1/8 is probably a better number for QUIC anyway. So my proposal would be to make both of them 1/8 until we have some more data, both for consistency and sanity; that also allows us to use more RACK-like logic and combine the early retransmit timer with the time-based loss detection more cleanly. Gorry?
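The RACK-like, time-based rule being proposed amounts to declaring a packet lost once something sent sufficiently later has been acknowledged, with the reordering window set to 1/8 of an RTT. A minimal sketch, with illustrative names:

```python
# Sketch of time-based loss detection with a 1/8-RTT reordering window
# (names are illustrative, not the recovery draft's pseudocode).
REORDERING_FRACTION = 1 / 8

def is_lost(sent_time, latest_acked_sent_time, rtt):
    """A packet is declared lost if a packet sent more than rtt/8 after
    it has already been acknowledged."""
    threshold = rtt * REORDERING_FRACTION
    return latest_acked_sent_time - sent_time > threshold
```

Making both thresholds 1/8 means the same window serves both the early-retransmit timer and ordinary time-based loss detection, which is the consistency argument above.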
I
Gorry Fairhurst, very quickly. We talked about this in TCPM, about the mechanisms to use with TCP. Is it not better to just do the discussion in one place and be over with it, or do we want to talk about congestion control separately and recovery separately for the two things, and just use two meeting slots?
D
Sure — and there are big issues there that I think we really, really need to delve into. This makes it complicated when we really don't know what we're doing with TCP even, and then we start doing it anew here. I don't want to take up time, but it's just hard to do this.
A
G
And then, if people come up with data in three months or six months, we can of course reevaluate this with more data, but right now all the data I have says 1/8 seems fine. So if no one objects, I will do that and close this. Next issue: max data before sending an ACK. This is actually kind of important; this came up in the ECN conversations.
G
The current text attempts to be a little bit more general than the default TCP implementations, in that it allows you to send ACKs less frequently than every two packets. In a lot of networks this is an excellent optimization and decreases the ACK rate very substantially; experimental evidence shows that this can be very valuable.
G
However, Reno is also the documented congestion controller, and Reno is ACK-clocked, so you will end up with slower slow-start growth and a few other issues if you use Reno and only ACK every 20 or 40 packets. So I think we need to reconcile those two things: if we're going to recommend Reno, we need to have reasonable default acknowledgment behavior.
L
G
This isn't really as much about appropriate byte counting as it is about the fact that right now the doc literally says: as long as you adhere to your delayed-ACK timer of whatever you say it's going to be, which defaults to 25 milliseconds, you can ACK as infrequently as you want. There's just no prescription whatsoever about, say, ACKing every two packets or ACKing after every N bytes, and so I'm proposing adding that back in, but allowing a peer to specify what they want.
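The proposal — a delayed-ACK timer plus packet-count and byte thresholds that a peer could specify — could be sketched as a single decision function. Parameter names and defaults here are illustrative, not from the draft:

```python
# Hedged sketch of a receiver ACK policy: ACK when the delayed-ACK timer
# fires, after every N packets, or after every B bytes, where N and B
# could come from the sender. (Invented names; 25 ms default per the
# discussion above.)
def should_ack(packets_since_ack, bytes_since_ack, elapsed_ms,
               ack_every_n=2, ack_every_bytes=None, max_ack_delay_ms=25):
    if elapsed_ms >= max_ack_delay_ms:
        return True                     # delayed-ACK timer fired
    if packets_since_ack >= ack_every_n:
        return True                     # packet-count threshold reached
    if ack_every_bytes is not None and bytes_since_ack >= ack_every_bytes:
        return True                     # byte threshold requested by sender
    return False
```

The byte threshold is the "after every N bytes" knob; leaving it unset reduces this to the classic two-packet-or-timer behavior.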
G
L
Q
L
D
Yoshi, I think this might help you understand the difference. QUIC does not require you to ACK after every two packets, so you could actually ACK once every half hour if you want. The main point here is to allow the sender to compensate for that in its controller, or to say, I can't really compensate, so you must send acknowledgments every two packets or something like that. So this is really the sender telling the receiver how frequently it needs acknowledgments back, based on bytes.
A
V
Right. So because of appropriate byte counting, we don't really need an ACK every two packets; we just need an immediate ACK if you will process at least two packets in the current batch. So we can still do stretch ACKing: if there's a bulk data flow and you got like a hundred packets, you can still send one immediate ACK — you don't need to send fifty ACKs — and I think that's still good enough. So that's why we shouldn't mandate that you generate an ACK every two packets; it could be more.
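The batch rule in this turn — one immediate ACK per processed batch of two or more packets, rather than an ACK for every two packets — is easy to state as arithmetic. A sketch, not text from the recovery draft:

```python
# Illustrative: with appropriate byte counting at the sender, the
# receiver generates one immediate ACK per batch containing >= 2
# processed packets; single-packet batches wait for the delayed-ACK
# timer instead.
def acks_for_batches(batches):
    """Given batch sizes (packets processed together), return the number
    of immediate ACKs generated."""
    return sum(1 for n in batches if n >= 2)
```

So a bulk flow delivering a hundred packets in one batch costs one immediate ACK, not fifty, which is the stretch-ACKing point above.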
G
V
G
V
A
A
So, planning for future activity in the working group: we have a September interim planned in New York City; it's on the website. Registration is going to close in three and a half weeks, so if you haven't registered yet, please do so very soon. I hope we're not going to be space constrained — it's approximately the same size of venue as we've had before, and the number of people coming has been falling, so hopefully we'll be okay.
A
We're going to do an interop beforehand, as we always do, and we're going to need to figure out the scope of that interop work. So we're going to need a September 7th implementation target — are you sure? Okay, because the website says the 6th; we need to update the website, then. We need somebody to own driving that definition; is anybody willing to do that?
W
B
E
B
E
B
A
Take this to the wiki, and onto the list if necessary. So, talking to Lars, talking to Spencer, talking to the editors: we think we're getting close to done, and that's a message that we want to send to the working group very clearly. Our chartered milestones are November, and there still seems to be an intent that we want to try and meet those milestones, in spirit at least. We may, of course, come across issues.
A
A
We anticipate the editors are going to — not only are there a number of issues to resolve, as you heard, we also have a fair amount of editorial work to do; they're going to try and make the documents more reader-friendly. So don't feel like you need to go in and pick apart the grammar or the spelling or anything, but in terms of what the documents contain or don't contain, please start looking at that with the eye that we're going to be finishing pretty soon.
A
We had one big change as a result of previous meetings with the stream 0 work, and we've incorporated that, which is good. I think it's fair to say it's going to be progressively harder to make a big change as we move down the road towards completing, so please do keep that in mind. I think we talked briefly before: we're talking about shipping the base drafts, and we're talking about shipping the ops drafts possibly at the same time, or at least that's the default.
A
If we want to vary that, we need to come to some agreement about it, and it doesn't sound like we have that easily yet. We have the invariants draft as well, which we'll probably do a slight refresh on, but otherwise I think it's going to stay the same; those all ship at the same time. The other elephant in the room is the spin bit, which is probably why Brian's standing up. If all goes to plan, in Bangkok we're going to make that decision.
A
We said before that we need experience that demonstrates it's useful in managing and operating networks. So if you are gathering that data, please bring it to the working group, ideally before Bangkok; that gives people time to look at the data and get comfortable with it. Feel free to talk about it on the working group list if you want to coordinate with other folks, but I don't think we're going to be delaying last call if we don't have that kind of data. This is a decision we need to make too.
G
May I make a friendly suggestion here? Sure. So we've in the past kept information about the spin bit off the interim agendas, just so there's no perception that you have to actually attend in order to work on this. I've noticed a lot of the people who are doing sort of core work on the spin bit are at the interims anyway, so maybe we should revisit that for New York.
G
G
G
V
A
A
A
Also, if we finish up in Bangkok, the idea is we go to working group last call. Then WGLC may collect some issues we need to discuss, and IETF last call, if it happens in a timely fashion, may collect some issues we need to discuss. So we may need an early 2019 interim. Lars and I have discussed this, and we think what we're going to do is start putting together the arrangements for such an interim —
A
January or February, something like that — but not actually pull the trigger on it and say we're going to do it until after Bangkok, so we know if we need it and we don't waste people's time. But just to give you a heads up, that's the way we're thinking. We've been doing this pace of having one interim between each major IETF, and if we do that, it's going to be in Europe or Asia, because the next one's in the U.S. Any questions about that? Okay, one other thing that's come up — we have two minutes left — is the name of QUIC itself, which from what we've observed still causes confusion amongst implementers, amongst people who are writing about it in the press, and amongst potential users. This has been bothering us for a while, because there's QUIC, Google QUIC, and IETF QUIC, and we say gQUIC and QUIC, but we don't do it consistently, and people outside of this room don't use it at all. Talking through this, changing the name of QUIC at this stage doesn't seem realistic.
A
I mean, after all, we have really nice stickers. So instead, a member of the community made what we think is a very good suggestion, which is that maybe when we ship this thing we'll call it QUIC version 2, with the idea that Google's QUIC was version 1. I'm not going to open this up to a working group decision — I almost consider this an editorial issue — but I wanted to give people a heads up about it. If it gives you heartburn or you have concerns, come and talk to Lars and I.
AC
A little bit of comparison from ACME: we had the same discussion about renaming, because we had Let's Encrypt ACME and IETF ACME, and they were both available for a long time. What ended up happening, even though the working group decided to do nothing, is that the industry is calling IETF ACME "v2". So.
A
Yeah, okay, thanks. So that also has another implication: we've had this background discussion about QUIC version 2, about what the next thing after QUIC for HTTP is — obviously that would become QUIC version 3 or something else. But since we're now talking about getting to the end of this work, those sorts of conversations become more in scope, and we should start talking about what the thing after we ship QUIC version —
A
I was about to say version 1 — what version 2 is going to be. So please talk amongst yourselves, talk to Lars and I, and we'll start figuring out what that next phase of work might look like. I think we're done for today. So hopefully we'll see a good number of you in New York City in September, and then, of course, we'll see everyone in Bangkok. Thank you — oh, and Marcus. Marcus had slides on the spin bit that we didn't get to; they were kind of a request at the last minute. Marcus.