From YouTube: IETF101-QUIC-20180319-1330
Description: QUIC meeting session at IETF 101, 2018/03/19 13:30
https://datatracker.ietf.org/meeting/101/proceedings/
So this is the IETF Note Well in its new form, even more readable. If you're not familiar with it, I would encourage you to use your favorite web search engine to look for "IETF Note Well". It covers the conditions under which we participate, regarding intellectual property, regarding privacy, regarding taking photographs (which is a new one), and regarding behavior in general as an attendee. So please have a look at that if you're not already intimately familiar. We have two sessions at this IETF.
Today we have the hackathon report and an editors' update — some of you may have noticed the editors were quite busy last week. Then we're going to talk about a proposal for QUIC over DTLS and stream zero adjustments from ekr. Then Martin is going to talk about invariants, and hopefully we're going to get to talking about ECN, and perhaps a couple of very brief presentations about stuff that's being presented elsewhere in the IETF this week. On Thursday we're going to talk about the spin bit proposal from Brian, and if we have any time left over we'll talk about either issues or stuff that fell out of the agenda today. Do we have any agenda bashing?
Right — so, can you say that this is the current version of the interop matrix? It's the interactive version. Sorry, okay. Is this better? Okay — I guess I have to eat it, all right, I can do that. So there's a bunch of implementations of QUIC. Some are closed source and some open source, and this is, I think, the seventh or eighth time we've gotten together to interop test against each other.
D
Basically,
if
you
look
at
this
slide
on
the
left-hand
side,
you
see
the
different
client
implementations
talking
to
the
different
server
implementations
of
each
client
is
a
row.
The
greener
a
cell
is
and
the
more
yellow
it
looks
the
better
so
you're
seeing
that
that
mostly
we
have
some
pretty
good
interrupts
while
you
don't
because
there's
a
tuner
in
front,
but
we
have
some
pretty
good
interrupts
on
things.
Like
version
negotiation,
the
handshake
data
exchange
connection,
close
resumption,
zero,
RTT
and
state.
D
Let's
reset
also
what
those
letters
are
for
some
informations
are
liking
the
behind
and
haven't
really
updated
to
the
forth
invitation
draft.
D
So it certainly is still going to be challenging to ship RFCs and implementations by November, but I think it's still very feasible. The cadence has been good and I think the activity has been nice. We're probably also going to be doing a virtual interop day again sometime between now and June — probably going to do a Doodle poll, either on the Slack or on the mailing list. I would guess that would be maybe roughly mid-May, sort of between now and Stockholm; we haven't really discussed this widely amongst the implementers.
But my feeling is that for the next virtual interop, and then the Stockholm interop event, we would do implementation draft 5, which is probably going to be the -11 releases of the drafts that aren't out yet, plus whatever TLS 1.3 is at — which is -26, well, -27, or whatever is the newest and most widely available TLS 1.3 draft, -28, okay. Good — we're still almost done with TLS 1.3, right. So anybody can join, right.
So if you have an implementation — and I learned yesterday of at least one more implementation that might get ready, done in Rust — and you haven't participated yet, please come along. Everybody can invite anybody else to the Slack channel, or you can talk to Mark and me if you don't know anybody. There are public servers — most of the implementations actually have a server out there that you can talk to and interop test against — and we're using this pretty widely.
We tried to tag almost everything, so the "v2" issues are issues that we don't think are in scope for v1. v2 will happen, and these will ideally be in scope for that. It's always possible that something gets put into v2 now and gets pulled back — there are no rules — and it's also possible things get put into v2 and by the time we ship v1 we decide we don't care.

Parked is considered non-blocking, but is something we will re-examine before we ship v1. So there are some things that are awaiting more data, some things are awaiting more implementation experience, and some things are just waiting for someone to care more than any of the others. I think that covers most of them.
Currently, the HTTP mapping uses stream IDs — the QUIC stream IDs, to be clear; it does not have its own stream IDs. The idea is to add request IDs to the HTTP mapping, so there would be an ID at the HTTP layer as well as at the QUIC layer, and they wouldn't necessarily be linked. The motivation is that HTTP/2 allows you to have phantom streams in priority, and QUIC has no way of creating phantom streams, so having request IDs in the HTTP mapping would allow this.
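The phantom-stream idea above can be sketched in a toy model. This is my own illustration, not the HTTP-over-QUIC mapping draft: with per-request IDs at the HTTP layer, a priority tree can contain "phantom" nodes that have no live QUIC stream behind them, which is the HTTP/2 prioritization pattern QUIC stream IDs alone can't express.

```python
# Toy priority tree (my own model, not spec text). "Phantom" nodes carry a
# request ID but no QUIC stream; real requests hang off them for grouping.

class Node:
    def __init__(self, req_id, weight=16, phantom=False):
        self.req_id = req_id
        self.weight = weight
        self.phantom = phantom      # True = no QUIC stream behind this node
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def shares(parent):
    # Relative bandwidth shares among a node's children, by weight.
    total = sum(c.weight for c in parent.children)
    return {c.req_id: c.weight / total for c in parent.children}

root = Node(0, phantom=True)                         # root maps to no stream
group = root.add(Node(1, weight=32, phantom=True))   # phantom grouping node
a = group.add(Node(2, weight=8))                     # real requests
b = group.add(Node(3, weight=24))

assert shares(group) == {2: 0.25, 3: 0.75}
```

The point is just that the grouping node (request ID 1) influences scheduling without ever existing as a QUIC stream.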
There may be other solutions to this problem, but there is currently a problem with priorities in the HTTP mapping: mapping HTTP/2 priorities back onto QUIC, there are some features — this phantom stream in particular — that you can't currently simulate in QUIC. The idea is essentially to propose extending the push ID to all requests.

So, next slide: connection ID privacy. Can we encrypt connection IDs? Yes, but it will hurt a lot.
We've talked about this quite extensively. There are options; they are technically feasible in some circumstances, under some assumptions, but they're all pretty horrible in some way, shape, or form — to the extent that it's not clear anyone would be willing to deploy any of them.
Priming the client with the connection ID for 0-RTT. Currently, when you have a 0-RTT resumption token, you basically still generate whatever random connection ID you'd like as a client. This would say: when you do 0-RTT, come back with this connection ID. One motivation is that it makes it easier to limit replay attacks, because if you have routing infrastructure that maps certain ranges of connection IDs to certain machines, all of the replays would end up hitting a small number of machines.
Potentially it's easier to prevent and detect DDoSes and similar things, and it might also limit some other types of attacks I haven't really thought through. So it's possible to have a new session ticket frame and include a connection ID in it. That seemed to be kind of the most aesthetically pleasing option of all the options.
It does require an even wider interface between the TLS library and QUIC, which we will discuss later, but it makes it pretty clear that if you come back with this session, this is the connection ID you should use.
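To make the session-ticket idea concrete, here is a hypothetical wire sketch. The field names, lengths, and layout are my own invention, not from any draft: the point is only that a ticket-style message could carry the connection ID the client should present when it comes back for 0-RTT, so replays land on a predictable set of machines.

```python
# Hypothetical encoding (my own layout, not draft text): a session-ticket-
# style message bundling the resumption ticket with the connection ID the
# client should use on its 0-RTT return.
import struct

def encode(ticket: bytes, cid: bytes) -> bytes:
    # 2-byte ticket length, 1-byte connection ID length, then the bodies.
    return struct.pack("!HB", len(ticket), len(cid)) + ticket + cid

def decode(buf: bytes):
    tlen, clen = struct.unpack_from("!HB", buf)
    ticket = buf[3:3 + tlen]
    cid = buf[3 + tlen:3 + tlen + clen]
    return ticket, cid

msg = encode(b"opaque-resumption-ticket", b"\x1b" * 8)
assert decode(msg) == (b"opaque-resumption-ticket", b"\x1b" * 8)
```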
Yeah — so, next slide: QPACK. How far do we want to diverge from HPACK?
Alan Frindell: So we now have adopted what was called QCRAM, and we renamed it QPACK, if you missed that. In its current form it is very similar to HPACK. It uses most of the instructions, and there are a few places where instructions that came in from HPACK can be used in one context but not in another.
So there's a PR from Mike that redoes all the instructions to be completely different from HPACK, so there are no more places where instructions don't make sense, and it also creates some small compression efficiencies, but they're pretty minor. So we want to hear from people who have HPACK implementations: if they are going to implement QPACK, do they want to share a lot of code with HPACK, or do they want to start from scratch? Because I think it reduces the shareability if we adopt that PR.
So it's kind of pending. There's also a change in that PR for how strings are encoded and how the static table is referenced, and there are some other things like integer encoding — Buck had a proposal about perhaps finding a more efficient way of doing that — and introducing a static table which is different from the HPACK static table. All of these would take us further away from HPACK.
Thanks, Alan. Next slide — the homework section. We will be talking about this in a fair amount of detail later on, but suffice it to say there are some issues with ACKs and the handshake — some edge cases that are complex and kind of ugly, and that in some cases can actually cause undesirable side effects. I don't really want to go into all of them, because I think ekr's talk will be a better time for that, so I'm just going to skip to the discussion now.
So this is the one issue I really want to talk about, because I actually would like an answer from you — if you don't give me an answer, then I might just do whatever the editors and I feel is right. So now it's time to express your opinion. Currently, both PADDING and PING instigate ACK frames, and they both count towards bytes in flight. Individually that makes sense, but it has some problems. One is that in the current state of the world, PADDING and PING are redundant.
There is really no reason to use one over the other: in the current spec they're both one byte, they both elicit acknowledgments, and they both count towards bytes in flight. The other issue is that there is no way to add padding to ACK-only packets. Technically you could add it to some number of them, but those would then also be immediately acknowledged, and the side effects there are fairly heinous.
But there is a principle here, which is that if it instigates an acknowledgement, it needs to be added to bytes in flight, and vice versa. The problem is, if you add something to bytes in flight and then don't cause it to elicit an acknowledgment in a relatively timely fashion, you will never remove it from bytes in flight, and then you will get deadlocked, and that will go badly. It might eventually fix itself, but it's going to take a very long time.
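The deadlock being described can be shown with a toy model. This is my own sketch, not draft text: a frame that counts toward bytes in flight but never elicits an acknowledgment is never removed from bytes in flight, so the congestion window eventually blocks every subsequent send.

```python
# Toy model of the deadlock: padding-only packets count toward
# bytes_in_flight but the peer never acknowledges them, so nothing drains
# and the congestion window blocks all further sends.

class Sender:
    def __init__(self, cwnd):
        self.cwnd = cwnd                 # congestion window, in bytes
        self.bytes_in_flight = 0
        self.unacked = {}                # packet number -> size

    def can_send(self, size):
        return self.bytes_in_flight + size <= self.cwnd

    def send(self, pn, size):
        assert self.can_send(size)
        self.bytes_in_flight += size
        self.unacked[pn] = size

    def on_ack(self, pn):
        self.bytes_in_flight -= self.unacked.pop(pn)

s = Sender(cwnd=3600)
for pn in range(3):                      # three 1200-byte padding-only packets
    s.send(pn, 1200)

# The peer never ACKs padding-only packets, so nothing ever drains:
assert s.bytes_in_flight == 3600
assert not s.can_send(1)                 # stuck until some timer bails us out
```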
Okay, so therefore you need an acknowledgement to remove a packet from bytes in flight. Next slide. Option one: remove PING, because it's redundant — but that means you still couldn't add padding to ACK-only packets. Option two: make it so PADDING does not instigate an acknowledgment and does not count towards bytes in flight.
That means you could send padding-only, or padding plus ACK, and they wouldn't count towards bytes in flight. ACK congestion control, however, is hard, and adding padding to ACKs makes them bigger, which could potentially make ACK congestion control actually important, as opposed to now, where we hope it's unimportant. So that would probably be the largest negative. Any questions? Go for it. Hello.
Congestion control is something which is pretty much selected by the sender. The sender knows what kind of congestion control strategy they apply; they do that in order to make sure that they are sending the right amount of packets. So if a sender's congestion control strategy requires that whatever padding they added gets ACKed, it's extremely simple for them, instead of adding 1,256 pads, to add one PING and 1,255 pads.
So if you have this tension, the sender that requires a congestion control acknowledgment can add a PING — it's an implementation issue, and we could specify that. But at the same time, if you have pure padding, you can use it in ACKs to basically hide your traffic. So I'd like to second option two, with the caveat that if your congestion control requires acknowledgment, you just add a PING.
Everyone's really loud when I'm sitting over there, okay. So the observation I made was that if the titular notion of padding is that it's cover traffic, and it does not participate in congestion control, that's probably visible in the congestion control behavior of the sender, and you can do a lot to determine how much is padding and how much is traffic — and therefore its use as cover traffic is probably fairly minor.
I would, in my mind, restate another way of saying some of the things that Christian was saying — and it seems like that resonates with people — which is: by making padding not count towards bytes in flight, PADDING and PING do become different, and it gives the sender the maximum amount of flexibility in deciding which packets should be acknowledged and which packets should not be acknowledged.
Part of the control is that you have to take into account that you should probably only use padding when you don't have any other data to send, so your data rate should be really low anyway — you don't want to send a huge amount of padding data. What you probably need is to take the recommendation we usually have for low-bandwidth UDP traffic: put a rate limit on it.
I do not propose to not count padding as bytes in flight — I do count the padding bytes as bytes in flight. But it does not follow that I have to acknowledge each and every packet that has padding in it. In order to meet your requirements, you have to make sure that I ACK from time to time, but it doesn't follow otherwise — you could not rely on just an ACK-only packet.
So I guess it would probably help to just understand what padding is for. The thing padding is absolutely necessary for is to pad out the ClientHello in the Initial packet, so it can make 1200 bytes. Now, you might also think padding was for making packet sizes uniform to defeat traffic analysis — and that's what we thought...
So I guess the question is: what is padding for here? If it really is for cover traffic, then it's probably better thought of as a favor that the transport is doing for the application, in which case you don't want it to count as bytes in flight — I'd want it to be just a frame type that basically sits there, and every application would have its own way of adding and accounting for it, right.
The concern with the second one is that we start sending lots of acknowledgments, they're really, really big, and they actually congest the network, but we're not accounting for them. That's the real risk here, and to Mirja's point, I think we can probably avoid that by simply saying that we just don't send them as often. There is potentially value in hiding acknowledgments.
Hiding — well, it's actually more difficult than that. Hiding which acknowledgments you have is impossible, potentially, from someone doing traffic analysis; and hiding the distinction between an acknowledgment-only packet and a packet that contains real data is extremely difficult, because of what Patrick was pointing out with respect to the fact that you can observe the rates of packets that are sent.
I don't know about that argument at this point — ACKs aren't that interesting, because in theory the person doing traffic analysis has seen the packets that you got and the packets that you didn't get, and can make some sort of assumptions about what sort of acknowledgments you generated.
I tend to agree with Patrick's and Ian's characterization. Hiding the difference between ACKs and real data — I don't know whether that's not useful, but if the purpose of padding is to hide real data transfers and introduce shadow traffic, then maybe we can characterize padding as meaningful in the context in which it's used. For example, if it's used in an ACK-only packet, it is considered to be as if you're trying...
...to hide the ACK, so it must be treated as if it were outside congestion control; whereas if you're sending padding-only packets, or sending padding alongside stream data or other retransmittable frames, then it must be counted towards congestion control and ACKed with the same rules that you use for stream data. And if you get a pure ACK, you should treat the padding as if it were pure ACK data. I think that seems to provide the most value, for me.
I think I'm coming around to seeing what Christian's trying to point out. It seems reasonable to count padding towards bytes in flight but not require an acknowledgement for it, because a sender that chooses to count them towards bytes in flight — a sender that chooses to send ACKs plus padding frames — can also choose to send an additional packet periodically to evoke an ACK from the peer, and that gives it enough information to keep moving bytes in flight forward.
That's generally adequate for a sender that's choosing to do that. What this tells me is that all we want to do is allow this flexibility to remain at the sender, not specify it ourselves. I don't have a strong desire to do this, but it seems to me that padding ought to count towards bytes in flight simply because of the nature of what it is, while ACKs we choose not to count towards bytes in flight, again because of the nature of what they are.
ACKs don't fundamentally evoke ACKs, and they have a size limit, whereas padding can be used arbitrarily — you could have a packet that's nine-tenths padding and just one-tenth actual information. So it seems reasonable to count padding towards bytes in flight, but to leave it open and not have it instigate an ACK, and allow the sender to send, theoretically, a window update or a PING to get an ACK to open up its bytes in flight.
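The behavior just described can be sketched in a toy model. This is my own illustration of the idea, not spec text: PADDING counts toward bytes in flight but does not by itself elicit an acknowledgment; a sender that pads heavily simply bundles a PING every few packets, and the resulting ACK, which covers every packet number received so far, drains everything, padding included.

```python
# Toy model (my own sketch, not spec text) of "count padding toward bytes
# in flight, but don't require an ACK for it; the sender evokes ACKs with
# an occasional PING".

def ack_eliciting(frames):
    # Under this scheme, PADDING and ACK frames do not elicit an ACK.
    return "PING" in frames or "STREAM" in frames

class Receiver:
    def __init__(self):
        self.received = []
    def on_packet(self, pn, frames):
        self.received.append(pn)
        # An ACK frame covers every packet number seen so far.
        return list(self.received) if ack_eliciting(frames) else None

class Sender:
    def __init__(self):
        self.bytes_in_flight = 0
        self.unacked = {}
    def send(self, pn, size, frames):
        self.bytes_in_flight += size      # padding still counts
        self.unacked[pn] = size
        return frames
    def on_ack(self, pns):
        for pn in pns:
            if pn in self.unacked:
                self.bytes_in_flight -= self.unacked.pop(pn)

snd, rcv = Sender(), Receiver()
for pn in range(4):
    frames = ["PADDING"] if pn < 3 else ["PADDING", "PING"]
    ack = rcv.on_packet(pn, snd.send(pn, 1200, frames))

assert ack == [0, 1, 2, 3]       # the PING evoked an ACK covering everything
snd.on_ack(ack)
assert snd.bytes_in_flight == 0  # the padding-only packets drained too
```

The design choice this illustrates: the flexibility stays with the sender, which decides when it needs the acknowledgment rather than the spec forcing one per packet.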
Can you formulate the question? I'm going to formulate the questions. We have option one, we have option two, and we now have an option three. Option three is that padding frames count toward bytes in flight but do not elicit an acknowledgment — which violates the principle that Ian articulated earlier, but we will put a fence around that and say that if you embark into this territory, you're...
My plea to the people involved in this who have an opinion is to think about this and think about which ones you could not possibly live with — I think that's probably the resolution we should use to draw the line, rather than anything else. I'm not hearing any strong views on things that are completely bad, so if people want to think about that, we can make a decision. ekr.
Right — so I guess I sort of dropped the bomb on the list a few weeks ago. I'm going to try to tell the story in what seems like a logical order, which is probably not the true historical order, but maybe is a more reasonable order for understanding the situation.
So, in that logical order: I was going to talk about how things are currently structured, give a brief survey of some of the things that are making us sad about the way things are currently structured, a little bit of root-cause analysis, then a discussion of the crypto-scheme proposal that a bunch of us worked on for a while, and then finally a discussion of layering and DTLS.
In my implementation, at the bottom there are QUIC packets; on top of that there's the streams engine, whose job it is to make sure things get delivered; and on top of that sort of jointly sit the TLS handshake — as a special-case application — and the application, and the rest of the applications. And this arrow is supposed to show that, unfortunately, the TLS handshake has to talk to the QUIC packet engine quite a bit.
Unfortunately, the TLS handshake just uses the special stream, stream zero, and this has turned out to be a huge hassle that has made a bunch of us super sad. It's got a bunch of annoying properties, all of which kind of derive from it being this special snowflake. The next slide is "stream 0: it's special". There's really no particular order to these, but I tried to group them.
The first slide is supposed to be things about data and the second slide things about ACKs but, frankly, they're all kind of intertwined. So first of all, sometimes stream zero is encrypted and sometimes it's not encrypted: everything up to the NewSessionTicket is not encrypted, but then the NST is encrypted. So (a) this is just dumb, and (b) it causes real problems.
...when you want to retransmit. Say the server Finished gets lost — I have an implementation which sends the server Finished and the NST concurrently, which is totally legal — and the server Finished gets lost, but you've already cut over into the encrypted writing mode. So now, when you need to retransmit, you need to retransmit the server Finished in the clear with the NST encrypted. That's kind of a hassle.
It's a hassle transmission-wise, and also a hassle because it means a special affordance in the TLS stack that says: listen, these bytes, by the way, are in the clear, and these bytes are encrypted — which is not a thing the stack normally wants to give you. And you can't see it by inspection, by the way, because at this point we so cleverly hid the content types — they're encrypted along with the data. So that leads me...
That's not true here, because the problem is you're also sending things at the same time, and so you need to know where the boundary between the flights is — exactly what state the stack is in. Depending on whether you're doing HRR, there are multiple round trips, and so you need to know, for instance: is the first thing that comes out of the server's mouth a ServerHello, or is it an HRR?
Or maybe it's a stateful HRR, or a stateless HRR — all these things you have to know to make sense of the system. It's also goofy in that you're exempt from flow control during the handshake — you want the handshake to finish — but then you're not exempt later, and so now you can overflow the flow control window and end up in negative territory, which is shitty.
This is sort of a trivial thing, but nevertheless a piece of annoyance to me: TLS 1.3 has its own notion of when the 0-RTT boundary is — you're sending data until at some point there's the end-of-early-data. But although QUIC sends 0-RTT data, we don't send it through TLS; we're sending QUIC 0-RTT, and so the TLS stack sends the max-early-data thing, which says how much 0-RTT you can send.
But the way it works is: TLS says you can send an infinite amount of 0-RTT data, and then there's some other thing where you tell the QUIC stack how much data you can actually send. And, oh by the way, TLS 1.3 also has this thing that says "I'm done sending 0-RTT, and the next thing will be handshake packets."
You still have to send that, because that's the only way to make sense of the things at the TLS layer, but you're only doing it because we're not actually sending any 0-RTT data through TLS. So that's kind of stupid.
Another minor thing: you can't bundle encrypted and unencrypted data in the same packet, which is really annoying if you want to do 0-RTT, because you want to send the ClientHello and then whatever was going in 0-RTT together — and you can't, now.
The good news — maybe the saving grace — is that you have to pad the ClientHello out to 1200 anyway, so it wouldn't help that much; you'd like to bundle the Finished and other things too. And by the way, not only does the QUIC stack need to know a lot about what the TLS stack is doing, it needs to know a lot about the crypto — a pretty substantial fraction of the TLS record layer is recapitulated in the QUIC implementation.
The ACK rules are just baffling. You're never supposed to ACK anything at a higher tier of encryption than you currently are at, but again that puts you in a position where you're trying to send ACKs in the handshake and... no, the ACKs are encrypted...
So it's confusing. Christian pointed out this nice attack where, because you're sharing sequence number space and ACK space, it's possible to get into a situation where the attacker sends you an unencrypted packet that is in the sequence space that the peer would have been sending as ciphertext, and then you ACK that in an unencrypted packet — and at this point the peer thinks you received what it sent.
But you haven't gotten it — so that's a mess. It's also really easy to have situations where you get contradictions between the ACKs and the handshake state. The easiest version of this: the client sends the ClientHello and the server sends the ServerHello, and for some reason the server does not give you an ACK for the ClientHello.
Now you know for a fact that the server got the ClientHello — it wouldn't have sent the ServerHello otherwise — but you're still retransmitting this ClientHello madly until it's ACKed. And this becomes a real issue with the client Finished. So you can get into these kinds of weird states.
This is a problem that I think Martin Thomson raised, where client-Finished reliability is a problem — again because of the various crypto tiers, where you're supposed to be ignoring data at the wrong encryption level. If the ACK for the client Finished gets lost, then the peer is retransmitting it to you, and you just drop it on the floor because it's at the wrong level of encryption, and now you can't ACK it properly. So again, this is kind of a pain in the ass.
The next slide: stream zero is in people's code — everyone's code is littered with these special-case situations where you have to handle stream zero. This one is from mozquic's code, but it's not just mozquic — Martin rewrote the minq version too.
Which, by the way, has been made much worse by the uni/bidi split, because you need stream zero to be — it's totally, that's totally valid. Okay, next slide. So we're kind of — I had this slide, which I didn't put in, which was the shame spiral of what happens: we find some problem, and then...
The requirements are that the crypto is using little enough from the transport, and the application is using little enough, that the crypto transport can function properly even before the application transport is completely spun up — and this "question mark, question mark, question mark" is precisely the source of the problem that we now have. But hopefully, if we do those things, we have profit. Next slide.
So this is the part where I'm telling the story in the natural order, rather than the true order. I said: well, why don't we do some really radical thing? And then a bunch of us — with Ian, I think, pulling this together — sat down and tried to think about the less radical thing that would work. There have been a bunch of discussions of this general idea, which — what do we call it? Do we know what to call it? Okay — the idea goes under the name of crypto streams and crypto ACKs, which, as you can imagine, immediately became "cream" and "cracks".
God, yeah, that's great — thank you. So the general idea is pretty straightforward: you stop having the handshake on a regular stream, and essentially you replicate the stream frames into crypto stream frames with a different frame ID — they probably do have a stream ID in them.
R
Otherwise
are
basically
the
same,
and
you
replicate
the
AK
frames
in
the
crypto
accra
track
frames
and
then,
basically,
as
you
can
see
from
this
diagram,
like
everything,
goes
in
like
the
TLS
handshake
like
uses
that
crap
and
everything
that
goes
in
the
application
handshake
these
other
extreme
stuff,
even
though
the
otherwise
basically
identical,
and
so
this
lets
you
separate
out,
of
least
in
theory.
You
know
the
data
and
also
the
acts.
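The separation just described can be sketched as a routing table. This is my own pseudocode, not the proposal text; the frame-type names are placeholders. The point is that handshake data and its acknowledgments travel in crypto-stream / crypto-ACK frames, entirely separate from the application's stream / ACK frames, so each side can be processed without consulting the other's state.

```python
# Rough sketch (my own model, not the proposal) of the "cream and cracks"
# split: crypto frames route to the handshake, regular frames to the app.

def dispatch(frame_type, payload, handshake, app):
    route = {
        "CRYPTO_STREAM": handshake["data"],
        "CRYPTO_ACK":    handshake["acks"],
        "STREAM":        app["data"],
        "ACK":           app["acks"],
    }
    route[frame_type].append(payload)

handshake = {"data": [], "acks": []}
app = {"data": [], "acks": []}

dispatch("CRYPTO_STREAM", b"ClientHello...", handshake, app)
dispatch("CRYPTO_ACK", [0], handshake, app)
dispatch("STREAM", b"GET /", handshake, app)

# A regular ACK for a packet that carried only crypto data would be a
# connection error under this scheme; here we just check routing is disjoint.
assert handshake["data"] == [b"ClientHello..."]
assert app["data"] == [b"GET /"]
assert app["acks"] == []
```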
You still need this goofy configuration figured out on the side, and one of the things I think we had — not consensus about, but that has been...
...also floated, is separating out the crypto — actually having two crypto streams, and I guess maybe two crypto ACKs, I'm not sure — because of the post-handshake data, which has to be sent encrypted. So you know where the boundary is. You could either have one crypto stream and cut it over, or you could have two crypto streams, and that makes the boundary clearer. So that was also floated.
Okay, then I think maybe we want to make a larger distinction between the kinds of streams that you're going to use to create crypto, because what you're fundamentally doing is saying that the crypto stream is a special case of a reliable stream, which is now part of QUIC, and there may be other uses for that underlying reliable stream that want to behave this way, even though there are other ways of creating reliability on a regular stream.
I think what you've done is to say that these crypto streams have a fundamental property of reliability — which was the problem you had before, bootstrapping TLS on top of something that is unreliable — and I think that fundamental property of reliability has applications outside of the one you're using it for, and we might want to generalize it so that you can use this form of reliable stream in certain other cases. Okay, so I think everybody else hates this idea, but...
Well, I guess what I would say about that is: it's true they have the minimal property of reliability, but the other property they have is exemptions from a whole bunch of other rules that QUIC otherwise imposes, and so it's dangerous — applications should not be using them, I guess I would say.
If you're going to tie the fundamental property of old-style TCP reliability to being always unencrypted, then I agree with you — you never want to use them for anything else. But I think you're creating a marriage of convenience that is not required by the fundamental underlying problem.
The difference being that — basically, I want to avoid the question of "is this just stream 0 again?", because it's not really the crypto stream we're talking about; that's an implementation question of how exactly you deal with those frames as they arrive.
Okay
next
slide,
so
it's
like,
as
I
sort
of
said.
This
is
probably
like
the
minimal
change
that
does
much
of
anything
at
all,
so
I
think
it's
all
the
following
problems,
I
think
reasonably
well,
it
sells
the
flow
control
problem
because
you
just
like
exempt
thing
from
stream
zero
and
you
don't
have
to
like
process.
It
washes
it.
It
provides
some
cost
back
clip
query
button
to
encrypt
in.
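The flow-control exemption just described can be sketched as a toy rule. This is purely an illustration, not any implementation's code; the class and method names (`FlowController`, `on_stream_data`) are invented. The idea: data on the dedicated crypto stream bypasses connection-level flow-control accounting, while ordinary stream data is counted against the connection limit.

```python
# Toy sketch of the rule discussed in the session: crypto-stream data is
# exempt from connection-level flow control, ordinary stream data is not.

class FlowController:
    def __init__(self, max_data):
        self.max_data = max_data      # connection-level credit in bytes
        self.consumed = 0             # bytes counted against the limit so far

    def on_stream_data(self, stream_id, length, is_crypto=False):
        """Return True if the incoming data is admissible under flow control."""
        if is_crypto:
            return True               # crypto stream: no flow-control accounting
        if self.consumed + length > self.max_data:
            return False              # would exceed the connection limit
        self.consumed += length
        return True
```

Under this rule a peer can always deliver handshake data, which is what removes the flow-control deadlock the group keeps running into with stream 0.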
R
What's the cost of that? As I said, it makes an even more invasive widening of the TLS interface. It solves the whole forged unencrypted-packets issue, because you would just basically say: if you ever receive a regular ACK for a packet that was sent on the crypto stream, you freak out, and that causes connection termination, as opposed to holes.
R
And we already know how to do detection of a man on the side. It also makes the ACKs more straightforward again, because basically things that came in on the crypto stream get the crypto ACK and things that came in on the regular streams get the regular ACK, so they're tagged, right; and this has some similarities, obviously, with the DTLS ACKs, which have the same property. There are other problems; I had a long list, and this solves some of them, as I said.
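The "two ACK types" idea in that exchange can be made concrete with a small sketch. Nothing here is from a real stack; `split_ack_spaces` is a hypothetical helper showing how a receiver would tag packet numbers into two acknowledgment spaces depending on whether the packet carried crypto-stream data.

```python
# Hypothetical sketch: packets that carried crypto-stream data are
# acknowledged in a separate "crypto ACK" space, everything else in the
# regular ACK space, so an ACK's type reveals which kind of packet it covers.

def split_ack_spaces(received):
    """received: iterable of (packet_number, carried_crypto) pairs.

    Returns (crypto_acks, regular_acks) as two lists of packet numbers.
    """
    crypto_acks, regular_acks = [], []
    for pn, carried_crypto in received:
        (crypto_acks if carried_crypto else regular_acks).append(pn)
    return crypto_acks, regular_acks
```

A regular ACK that names a packet sent on the crypto stream would then be an immediate protocol violation, which is the injection-detection property being described.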
R
I think there's some discussion of a solution for the client Finished reliability problem; it was called "handshake done". The reason I have it in a second bucket is because it's a different situation and I don't like it as much. So there are other problems which, I guess, you know, we could try to solve here, or we could live with them.
R
I guess one thing I should say before I show my next couple of slides is: I am not in any way saying one could not build a version of QUIC without doing any of this crap. Obviously you could; we have working implementations, and if we just keep banging on it long enough we can make it work, right. What I'm saying is that we're doing a bunch of special-casing, and this is a shame spiral.
R
I was thinking about how to talk about this just this morning, by the way; that's why I'm probably more destroyed than usual. How did we get into this state? So the problem is, as I said: at the very beginning we have the transport, we have the transport trying to set up the crypto, and we have the crypto itself, right. And in order to make that work properly, the crypto transport needs a lot of knowledge...
R
...of what the crypto is doing, and so they have to really have this bear hug; that's basically why I drew the box like this. And so what this crypto-streams-for-handshake design basically does is take the parts of QUIC that are responsible for doing the transport, the minimal transport you need, clone them over, and bear-hug them to the TLS. And I should have...
R
...the table listing what they know about each other, right. So this is what you have to do to make this not suck: somewhere there have to be boundaries. One way to draw the boundaries is to clone off the QUIC parts and make that boundary porous; the other way, and we're ready to do it, next slide, is to isolate all that crap into its own thing.
R
Hence, DTLS, right. So the blue parts, obviously, are the things that are new.
R
So I said this on the list, so I don't think explaining what I'm suggesting is particularly difficult. What I'm suggesting is: take the parts of QUIC that involve crypto, throw them away, and run the bulk transport over DTLS. That's basically what I'm proposing, in this slide at least. And so the primary signal you need now is this arrow, "ready", that says you can now write data. That's basically...
R
...basically the major signal you need. Next slide. So this is obviously a much, much bigger change. The good news is it's mostly deletion: it involves deleting a bunch of code, pretty much all of QUIC-TLS and modest chunks of QUIC transport. It does revert some decisions made earlier, and Ian points out to me that we previously talked about having crypto at least somewhat subject to flow control, and now, of course, we totally bypass flow control.
R
That's not really a big deal; you're not going to send very much data anyway, but it's a decision you'd be reversing. You would need some small changes to DTLS 1.3, which I can talk about a little bit. I think the primary issue here is that we've spent a lot of time hacking on the QUIC packet structures, and in some ways they're better than the DTLS structures and in some ways they're worse; and there's a more advanced connection ID structure, though that actually kind of got borrowed from DTLS.
R
So basically, you know, we would have to figure out how to deal with that problem, probably by porting that crap into DTLS, which is mostly a matter of copying and pasting. The good news is that essentially every problem on the first three slides, this addresses; maybe there are some new problems we don't know about yet, but it addresses all of those. Next slide. Oh, this is what I meant to show, but it's roughly the same anyway.
R
So that's the good news, and it solves CFIN... what, I don't understand; I told you, it's 4 a.m. The other thing I should mention is that we now have several partial implementations of this, and in the cases I'm aware of, in every case it mostly involved deleting a crapload of code, though I think in fairness...
R
...I'll have a couple of people come up anyway. So this slide might be called "things that have made people less happy". So, obviously, DTLS would become QUIC's on-wire image. We have a lot of flexibility in that, because I sort of stuck my foot in the door of DTLS 1.3 right before they closed it.
R
So, you know, ultimately I think we could graft essentially all the QUIC wire formats onto DTLS if we so wished; probably not exactly, but it would get very, very close. If you don't do that, there's maybe a small amount of packet expansion; if you do do that, there's none. I just said the interface between DTLS and QUIC was very thin; it's not quite, there's one more thing.
R
Besides the arrows, which is: you need to have access to the DTLS packet numbers, because you want to use them in the ACKs. And that requires basically being able to ask the questions "what's the next packet you're going to send out?" and "what was the sequence number of the packet that just came in?". That's very trivial; it needs very little change. And of course you also need to screw with the ACKs a little bit to carry the DTLS epoch separately in the ACK.
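The two extra queries EKR describes needing across the QUIC/DTLS boundary are simple enough to sketch. This assumes nothing about any real DTLS stack; `DtlsHandle` and all its methods are invented names standing in for whatever narrow accessor a record layer would expose.

```python
# Sketch of the narrow extra interface discussed: QUIC needs to ask the
# DTLS record layer (1) what sequence number the next outgoing record will
# carry, and (2) what sequence number the most recent incoming record carried,
# so it can reference those numbers in its ACKs.

class DtlsHandle:
    def __init__(self):
        self._next_send_seq = 0       # sequence number of the next record sent
        self._last_recv_seq = None    # sequence number of the last record received

    def next_send_sequence(self):
        return self._next_send_seq

    def last_received_sequence(self):
        return self._last_recv_seq

    # Internal bookkeeping the record layer itself would perform:
    def _on_send(self):
        self._next_send_seq += 1

    def _on_receive(self, seq):
        self._last_recv_seq = seq
```

The point of the sketch is how small the surface is: two read-only queries, which is why the speaker calls the change trivial.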
R
That's actually not so different from having the separate ACK frames you'd have to do to make crypto work in QUIC. And the other thing that is obviously true, so I should just say it, is that the DTLS 1.3 spec and implementations are less mature than the TLS 1.3 spec and implementations, right; that's certainly true. Next slide. Right, so, two things. First of all...
R
...everyone can talk about whatever they want to talk about, but what I'd like to talk about is not layering but architecture: what's the right architecture going forward, and how do we make that decision, how do you think about that decision properly, in a sensible way? So, you know, if people think this is a flawed architecture, I'm totally, you know, happy to hear that from them, right. I think I've tried to make the best case I can for...
R
...you know, for each of the various options we have, or at least as accurate a case as I can. But I also think the optics of me standing up here pitching one thing are kind of shitty. So, you know, Ian did a really good job, I think, of helping us build a consensus on some other options, so I was hoping he would come up and we could stand here together; we talked about it, we're obviously rational people, and, worst case, all that...
H
...I think we can do better. So, you know, on a personal level I'm not at all up for swallowing the entire DTLS proposal all at once, as I said on the list, but I think something of the form of these crypto stream frames and crypto ACK frames pretty much solves all the issues that are outstanding, and it's at least a pretty plausible concept. I will say that, based on my experience with the QUIC working group, with almost anything we do, and certainly the larger the change...
H
...we make, the more the QUIC working group will just beat it back into the shape of whatever the QUIC working group seems to want. So, I don't know; we showed one slide that was like "oh, we could just take DTLS here, on top of the QUIC framing layer", and I was like: well, all the QUIC working group is going to...
H
...do is beat DTLS into the exact shape of whatever the crypto stream frame plus crypto ACK was, or something, right. So I do think there's a certain sense in which perturbing the system a lot will actually just end up being a lot more change in the long run, because we'll perturb it a lot and then move it back. I don't know; that's my take on how the group has tended to work, so I'm not a big fan of massive perturbations, but I do think...
R
Yeah. I mean, one way to think about what you're doing is that this sort of earlier diagram, with crypto streams, is replicating the DTLS reliability machinery over TLS, right; so, like I said... sorry, it's what you just said, right.
H
Whereas I was thinking of it as: I already have these reliable streams, so I'm going to reuse the code I have instead of implementing DTLS as well, right. So part of the perspective is: if you only have QUIC, then bringing in DTLS is actually bringing in a net amount of code and adding more complexity; but if you already have DTLS, then it naturally seems simpler to do the opposite. So, okay.
H
...really, really icky, right; it's bad, so we have to do something, and it's going to look something like this. I think when we started QUIC in the IETF, we had this idea that we were going to have sort of a C-shaped architecture, where there's QUIC above, QUIC below, they're connected to each other, and there's TLS in the middle. That was kind of the researchy idea we had, and I think what we're finding is that that's not working. So we need an architecture where we essentially do have...
H
...where we explicitly acknowledge that we've got two different transports running, right. Whether one of them looks more like QUIC or is DTLS is, I think, an orthogonal question. I'll point out that, if we're going to go with this, it's probably a little strange that the only non-encrypted things end up being called the crypto streams and crypto ACKs, but that's a... yeah.
H
With respect to that, I also think that, in general, the idea that DTLS is the encrypted wire image for future transports encapsulated over UDP also makes a lot of sense to me. If there's not a lot of appetite for jumping into that right now, is there a way to make sure we can get there, right?
X
Mike Bishop, Akamai. So I wanted to comment... this is way too low... okay. Part of our goals for version negotiation were that we would be able to flexibly move between lots of choices that we might make in the future, ones we don't know now, where we want to diverge from our current plans; and one of those is that a version implies your crypto protocol, and we have agreed that for version one it's TLS 1.3 or its possible successors.
R
I think that's sort of true; I mean, it's true in some sort of trivial sense, but let me push back on that. Sorry, is that what you meant? So we've basically committed a precedent...
R
...we've basically committed that all future things have to look like the current QUIC framing, except where the invariants, which are very, very narrow, apply, right. And I think you would do the same thing with DTLS: namely, you take what the DTLS invariants are, which are not too dissimilar to the QUIC invariants, call those the invariants in this design, and you would clearly encrypt the initial handshake packet. So you'd have it...
R
Okay, so the QUIC short header, of course, has no version field. So, on this thing, I guess, in the spirit of full disclosure: currently the DTLS...
R
...initial negotiation is in the clear, not encrypted; the first two frames are in the clear. And obviously we decided to obfuscate those, and so we need an obfuscation format for those, which we've just lifted essentially directly from QUIC, and that would have the version number right there. But that would be new work; my point says "new work", so that's what I meant by full disclosure.
H
We're not constraining the QUIC invariants on that, I think. What I think EKR is saying is that, in some sense, we could make this possibly allow a future version of QUIC to encompass DTLS inside of it, because we would essentially still be using the QUIC version field to do version negotiation, and we could still maybe use that first bit.
R
Yeah. In fact, I mean, we talked about that several times, I think. Certainly, if the objections and concerns are about the wire image, I think there's a lot of flexibility around the wire image if we need to do that. I know the people in the TLS working group, and I'm pretty sure we can impose whatever wire image we want on them, as long as it's sane. But that may or may not be the issue; I have no idea.
X
Then my second observation is that it feels like special-casing stream ID equals zero, and special-casing a stream frame on "is it this type byte or that type byte", are semantically equivalent; and really it's the other differences between them that solve a lot of those problems, like the fact that stream zero is only ever sent under handshake protection.
R
I think that's somewhat true, but I think if you look at people's code, it's absolutely full of special-casing of the stream ID space. So if I had to make one change, and one change only, it would be to merely take the stream we call stream zero out of the stream ID space, because even that one change alone would make the code a lot better.
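The "one change only" being proposed, moving the handshake stream out of the ordinary stream-ID space, can be sketched minimally. This is an invented illustration (`CRYPTO`, `classify`), not code from any QUIC stack: instead of testing `stream_id == 0` everywhere, the crypto stream simply is not an integer stream ID at all, so it cannot fall through into the ordinary stream machinery by accident.

```python
# Minimal sketch of taking the crypto stream out of the stream-ID space:
# the crypto stream is a distinct sentinel, not stream number 0, so no
# code path that iterates or validates stream IDs can ever touch it.

CRYPTO = object()   # sentinel: deliberately not a stream ID

def classify(stream_ref):
    """Distinguish the crypto stream from ordinary application streams."""
    if stream_ref is CRYPTO:
        return "crypto"           # exempt from stream-ID rules and flow control
    if isinstance(stream_ref, int) and stream_ref >= 0:
        return "application"      # ordinary flow-controlled stream
    raise ValueError("bad stream reference")
```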
Y
Number four. Speaking as Spencer now: I follow this working group more closely than I follow any of the other transport working groups, sorry, other transport chairs, and I don't have a good feeling about version negotiation today, and I don't have a good feeling about the working group members' expectations for how often we would roll version numbers. I think I've seen mailing list traffic that went in both directions, you know, "why not?" versus "we'll be on version one for five years", and that thread kind of dropped.
T
I just wanted to second Spencer's second comment, which is that we definitely need to make an informed decision, I think, and experimentation is definitely key to that. EKR was saying that some people are already sort of working on this particular DTLS architecture; I would like to commit more time myself to, you know, experiment and see what the right approach is. It's not entirely clear to me just by looking at this particular diagram, or the...
T
Certainly, if people can spare the cycles, this is an opportunity to get the architecture right, hopefully without affecting the schedule too much; and I don't want to comment on that, because that's kind of outside my wheelhouse, but I would think this is something we should definitely be pushing for: making sure we don't take any shortcuts with respect to the architecture strictly because of, you know, scheduling reasons or whatever.
N
Jana, Google. I have, I think, three points; I'm going to try to be quick. The first point is, I was thinking about what we have here. So this is the devil we have, the one in this picture, and the other one, the next picture, is...
N
...the devil we have versus the devil we don't. You know the arguments; you already stated them. So, thank you, by the way, for doing this whole presentation; this was excellent, I think it was very useful, and the exercise is very useful as well. It just occurred to me that one thing that hasn't shown up anywhere is packetization, if it's going to sit in DTLS: right now the QUIC short header has three packet types, and that's based on the size of the packet number.
N
On the packet number size: the sender uses the BDP to determine how large a packet number to use, and so that's another bit of interface that will have to change if we want to retain that efficiency. If we choose to discard it, that's fine, but I'll remind folks that a lot of QUIC is about efficiencies, and losing that would not really be valuable.
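The efficiency point here can be illustrated with a sketch of variable-length packet-number encoding. This is a simplification for illustration: the function below sizes the encoding from the gap to the largest acknowledged packet (one common approach; the speaker mentions the sender using the BDP, and real heuristics vary), and the 1/2/4-byte sizes are assumptions, not a faithful copy of the draft's header types.

```python
# Illustrative sketch of why short-header packet-number sizing matters:
# the sender picks the smallest encoding (1, 2, or 4 bytes here) that can
# still unambiguously represent the current packet number, given the
# largest packet number the peer has acknowledged. Losing this ability
# costs bytes on every short-header packet.

def pn_encoding_length(pn, largest_acked):
    """Return the number of bytes needed to encode pn unambiguously."""
    # The span of potentially-unacknowledged packets determines how many
    # low-order bits of the packet number must actually be transmitted.
    window = pn - largest_acked
    for length in (1, 2, 4):
        if window < (1 << (8 * length - 1)):
            return length
    raise ValueError("packet number range too large to encode")
```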
R
So we spent a bunch of time on the DTLS record format, and I think we made some optimizations, which may or may not be enough, right. And I think, frankly, you know, this is an area where there's work to do; this group probably understands the problem better.
N
When we started... I'd be curious to know, separately, what drove the design choice for the packet number size, because I'd like to know how it was decided, but that's a separate matter. I'm just trying to point out that there are issues here that occurred to me just now, as I'm sitting here at the desk, and that's basically the first point I wanted to make. So there are likely issues there; the one distinction is that those are probably about efficiency, not about correctness, and that may be an important distinction to make.
N
The second point I wanted to make: you were talking about the shame spiral. The shame spiral starts a little bit further back, and you know this, because we were in the same conversations early on about how to replace QUIC crypto with TLS or DTLS. And I'll remind folks again that QUIC crypto was different from both TLS and DTLS in that it was very intertwined with QUIC. It was different from TLS in that it wasn't purely stream-based...
N
...it didn't assume a stream-based model. It had this tight connection with QUIC, in that QUIC knew about the packets and QUIC crypto knew about them as well, and so there was a tight integration. We chose to let go of that tight integration and try to replace it with TLS; that's where the shame spiral starts. And DTLS was, of course, not at the same place then as it is now.
N
Last point, and I'll make it very quickly. You asked the question: what's the right architecture? I don't know that there is one. I will point out that what we all care about here is modularity, and I don't know that that's necessarily solved by thinking about this as layering or not. I think what we're looking for here is... the porous boundary you're talking about is the API, and we are trying to nail down the right API, and that ties to the fact that we've never used TLS in this particular way in the past.
K
I am very interested in this midterm-evolution proposal, because one of the things I like about it is that we can do incremental evolutions without losing the implementation momentum that we have. Because, yes, there are problems, but we also have about a dozen implementations interworking, so the problems are not unsolvable. So I would rather take an incremental approach in which we pull in the good things from these proposals and keep the momentum; but I have not actually seen the details of this in-between proposal. Nobody has, as...
Z
Right, so I wanted to make two points. The main point I wanted to make is that there are three pieces of infrastructure we have here: there is the crypto handshake, there is the crypto... sorry, very quick... there is the crypto framing, and then there is transport. And effectively we accepted that we want to duplicate one of them, and initially we decided to duplicate the crypto framing because it was easier.
Z
...one for crypto and one for the data. And there was a proposal on the mailing list, and I think, if you think about it, it makes a lot of sense, because we already have different connection IDs for the handshake and for the short header; so bifurcating them even further would not be that bad. But I've never heard any reaction to that idea on the mailing list; that's my first point. And the second point I wanted to make...
R
I think... yeah, I agree. This is, as I said, a simple thing that could possibly work, and I think it's really worth exploring whether it can, in fact, work properly. That's where we are right now; I'm happy to collaborate with Ian and whoever else wants to collaborate on trying to work out the details. I agree that's an important next step. Next up, Patrick.
L
Okay, so, from a code point of view, I want to kind of echo what Brian said a little bit earlier and refine it. He said, you know, he started out with "ew", and I was like "mega ew". I started out being like, yeah, you know, I've got streams, you have this figured out. Then we heard this on the mailing list, and I actually hacked up this DTLS implementation, which we'll get to, and I was, like, honestly surprised.
L
My favorite thing I like about it is that we can be clear about what's in a protected state and in an unprotected state, versus what's on stream zero, which is the source of a whole bunch of problems right now. But the core architectural question I've got is, you know: do we want this narrow API or this wide API? Because I feel, you know, historically almost a little lied to as a developer here. If I go back to my days reading these drafts: I've got stream zero...
L
...it abstracts away when I need to know about TLS; I get to carry it around, it should be a snap, I can carry TLS around and get bytes in and out. But it turns out I need to know an extensive amount of information about what is happening exactly when, and that's no longer a byte stream, even though in almost every interface to TLS I've ever seen, the API is a byte stream, right. I need events for session tickets and HRRs and blah blah blah. And then, right, Martin emails...
L
...and says: what is this byte that's appearing when I don't expect any data to be on that stream, and what protection do I send it under, and how do I ACK it, and so on? And the more I've learned recently about some particular cases with security and the Finished states of the handshake, I am actually ready for someone to make a pretty big security error because of the difficulties in that.
L
So I don't love the way this is currently reconciled, and I think it would be good to talk about that as the focus of how we move forward. ...When you say "that", you mean? ...The width of the TLS API and what we need out of it in order to serve the requirements of QUIC. Okay.
H
But architecturally, I think, we've been talking about the layering and stuff, and what's actually, for me...
H
...the more interesting layering question is who owns that final layer of bits. And so, architecturally, I think the way this is currently designed is that TLS is a service: QUIC calls through the service, and QUIC is the one egressing via the transport, the one actually crossing the bytes, I think.
H
A ClientHello is a frame in itself, and it is also a stream frame, which can be terminated; a ServerHello likewise. So QUIC understands every single message that the TLS layer is sending to it, and in the middle of those two is this crypto streams API. So I think that, architecturally, if the original decision of using TLS as a service is the right way to go, we need to find the right middle ground between those two extremes.
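The observation that QUIC effectively understands TLS message boundaries can be made concrete: each TLS handshake message starts with a fixed 4-byte header (a 1-byte message type and a 3-byte big-endian length), so a byte stream can be re-framed into whole messages. The helper name `split_handshake_messages` is invented; the header layout is the standard TLS one.

```python
# Sketch of re-framing a TLS handshake byte stream into whole messages.
# Each handshake message = 1-byte type + 3-byte length + body, so the
# boundary between "byte stream" and "messages" is cheap to recover,
# which is what lets QUIC treat e.g. a ClientHello as a unit.

def split_handshake_messages(buf):
    """Return [(msg_type, body)] for each complete handshake message in buf."""
    msgs, i = [], 0
    while i + 4 <= len(buf):
        msg_type = buf[i]
        length = int.from_bytes(buf[i + 1:i + 4], "big")
        if i + 4 + length > len(buf):
            break                      # incomplete message: wait for more bytes
        msgs.append((msg_type, bytes(buf[i + 4:i + 4 + length])))
        i += 4 + length
    return msgs
```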
E
The first is that this approach, with the crypto streams and crypto ACKs, looks like it will probably be a very small, conceptually small, practical difference for the implementations, but a great conceptual simplification for the specification, and it modularizes things quite a lot. I think that will be a very good feature going forward, especially when we look at some of the things being proposed for QUIC version 2 and beyond, such as, dare I say, the world of multicast.
AA
Andrei Popov, Microsoft. So, I'm grateful to Eric for bringing up this discussion, because I think the issue of a complex API between the transport and TLS does exist in QUIC today, right; so I think it's an existing issue, and if we can come out of this discussion with a better API between the TLS stack and the transport stack, with less coupling, perhaps, that would be a great result. The other thing that this discussion highlights, I think, is that a lot of people are not happy with the DTLS transport services...
P
I theoretically like the DTLS proposal, but I do have a concern, and that is about congestion control: using DTLS will mean that we cannot easily feed the information that would have been gathered during the handshake phase into QUIC's congestion control. So after the handshake you'll either be less than ideal in the normal case, or send too much data if there was a loss. I think that is one of the examples that, you know, belongs to the nature of having two transports, and there could be others; I'm not sure.
R
Oh yeah, that's a real concern. I think it can be somewhat ameliorated by having, cranking the shame spiral again, a wider API that basically gives the calling application information about exactly what happened; because DTLS does have ACKs, it can give you much the same information, though it's slightly less precise. So that would help some, but I agree there may be some compromise there.
AB
(Name unintelligible), Microsoft. So, thanks to EKR for this presentation. At the beginning of this session there was a very nice diagram about all the latest advances in interoperability, a great chart of everybody interoperating, all that sort of stuff. Replacing, or using, the extreme DTLS proposal would basically throw that away, or set it back several months. I don't think we want to do that, and I don't think we need to do that.
AC
Tommy Pauly, Apple. So, yeah, thank you for bringing this up. Definitely the streams and architecture issues you've mentioned are very much felt in the implementation. I do have concerns about coupling this with DTLS, for issues that have been brought up already, such as relying on the DTLS spec adjusting; there may be other DTLS use cases that will not want to adjust to this. If we go to the next slide, with the DTLS architecture: I would be curious to see what would happen if we essentially tried to imitate this architecture, because it is very clean for the implementation, but, rather than putting DTLS there, just have QUIC running the TLS handshake there. So, essentially, just don't call it...
AC
...DTLS; keep the QUIC wire format. Because if we want to turn the DTLS wire format into QUIC's wire format, that's essentially what we're doing anyway. And if we have to come to this, let's think about it in this clean architecture, with more of the layering rather than the weird C-interlocking thing we had before; keep them separated out, but let QUIC as a transport still run the thing. So let's think of it as a QUIC handshake, then gating the rest of the QUIC transport on that.
R
Can you go forward two slides? Keep going... there, you see, on our diagram, right. So we floated this when you and I were talking, which is basically to essentially cut the DTLS record layer off and replace it with a QUIC record layer; sorry, which would be to cut the DTLS record layer off.
H
And this kind of goes back to my point about layering: if we did something like this, we would end up immediately beating it back into something that looked a little more like QUIC, though maybe not as much like QUIC as it does today. Not to say that's a bad thing; I'm just saying, you know, because DTLS, our streams... yeah.
I
Victor talked about duplication of various capabilities that we have. The record protection was one that was seen, early on, as one we could safely duplicate; turns out it's the same code, it's just got different constants in it, for the most part. There are problems with the current key schedule, which I won't go into here, with respect to forward secrecy and post-compromise security.
I
But if you think about this as QUIC packets containing DTLS records, then you actually end up in a space that's very, very similar to this. You use different encodings for offsets and lengths and things like that inside each one of these things, but whether it's a stream frame or not, it's essentially the same thing, and you can drive these things in very similar ways. The question is who owns the packet numbers, or the...
I
...or the record sequence numbers, and that, you know, doesn't really matter; you can pick one or the other. It seems like we have a bit of an attachment issue, perhaps, to some of the work that we've done, but I'd also remind people that there is such a thing as a sunk cost fallacy, and we should be aware of that.
I
If we can make a change that makes things better for us, then we should do it. I'm very, very sensitive to the amount of time this is going to take us, but I don't think that any of these changes takes any more time than any of the others; it's just a matter of addressing all of the architectural concerns, or the differences.
R
Yes, so I feel like the next step is that we definitely have to explore this some more, as it's really interesting, and we should see... we only kind of came up with this three days ago. So what I suggest is, you know, that we form a Coalition of the Willing, which I think is going to have to include me, and other people who want to be involved; and I'd ask the chairs to help us keep it small, so that we can make progress.
C
And when we first talked about this, you know, the guidance I gave you was: I don't care if... well, I care, but it's not the end of the world if we decide to do this and it takes time to implement it; but we can't be spinning our wheels deciding, right. From what I've heard here, I'm a lot more comfortable, in that, and please correct me if I'm wrong, I don't hear anybody saying "no, this is crazy".
S
Actually, that was the clarifying question I was going to ask. It sounded like your direction was that the PR, that's the unnumbered slide nine here, was going to be the starting point; and if that's the case, it seems like the QUIC packet format, and all of the applications and streams that don't relate to the handshake, can go forward without the presumption that there's going to be any change to a DTLS record format, or otherwise any uptake of DTLS.
S
If that's the case, I think I understand where you're going, and it's clear. But if there's going to be a range rather than a point solution, I think it's much harder to know what's going on. So I would suggest, rather than multiple proposals, that you and your coalition of the willing come up with a single proposal and we go from there. Okay.
I
I
C
C
S
A question: is there anybody who is interested in determining which of the two — this one or the slide-nine version of it — who wouldn't be part of the coalition of the willing? Because if the answer is that everybody who's going to be interested in the distinction between those two, which are very similar in many ways, is going to be part of the coalition of the willing, let them decide and move on. It's both a way of implementing slide nine — this is a subset of slide nine, in my view.
S
R
I guess — I mean, I think the reason you're hearing hesitation from me, Ted, is on those two things. First of all, I'll admit I'm having trouble giving up on the DTLS record format, but if I have to, fine. But ignoring that for the moment, I think it's reasonably likely that we'll have a discussion and lay out the pluses and minuses of these things, but ultimately — you know, there's a lot of people in this working group.
R
W
R
S
D
Agreed. So, one thing I might have brought up at the end: if you look at our milestones, which we moved by six months — which caused us to get a lot of email from implementers — they're calling for us to ship our documents to the IESG in November this year. If we are now basically giving ourselves from now until June, it seems unlikely that we're going to make November. If the implementers are okay with that, we're okay with that, right?
D
But let's be clear: this is very likely going to move the milestones, which, as I said, is fine if that's what people feel is a good decision — if you'd rather we re-architect this bit of QUIC, and maybe explain it in a simpler format and make it easier to implement correctly, and trade that off against the slipping timeline, which might slip for other reasons anyway.
D
This is not the only thing that's still floating around, but it's a choice that we need to make, and we need to be aware that this is not a decision without an impact. That isn't to say that sticking with the current design, which is arguably harder to implement and understand, isn't going to cause implementers to take longer either, right?
D
So it's not clear that this is necessarily going to cause more delays than what we currently have, but any amount of change potentially disrupts implementers' delivery activities. So — and maybe you don't want to come to the microphone — but if implementers are concerned with this change and the impact it would have on their roadmaps, we would love to hear that. If you don't want to go to the microphones, send email to Mark and me if you're an implementer, please.
D
H
P
H
H
C
R
C
I
The point of this discussion we were having was so that we could de-risk one of the more important parts of the protocol, which is those parts of the protocol that are going to change over time. So if the outcome of this discussion — the previous discussion — is that we are not going to change the packet layout, which I think we're gravitating towards, there's only one other thing left that we have to change in the invariants draft, and that is this thing.
I
I
The other one is the sender's expression of what they want the other side to use. So basically, one is for routing and one for routing on the reverse path. They're variable length, with a very short range of potential values, and importantly, when you send short headers, there's only the connection ID for the recipient. We use the pair of connection IDs during the handshake in order to essentially negotiate what connection ID is going to be used later on, but ultimately all you really need is the connection ID that's going to be used to route the packet, or the other one.
So next is an overview of the process. When you send an Initial packet as a client, you choose a randomized destination connection ID, because you haven't talked to this particular server before, so you don't know what it actually wants — you just pick something at random. Part of the reason we have this Initial packet is so that we can specially distinguish these from other packets that maybe get different sorts of routing or treatment.
I
Again, on a Retry round, the client uses the randomized one that it used previously, and the server says, hey, here's your connection ID for the destination — the Retry packet contains a connection ID that the server would like the client to use thereafter. Of course, when the server sends a Handshake, it can change its mind.
I
So, if you think about the case where you have multiple server instances and one of them wants to shed some load statelessly, it picks a connection ID using its current knowledge of the topology of the cluster and the load that it has. But that might change over time, and so the server that receives the next Handshake — maybe it's not the same one — can pick a more precisely targeted connection ID for this final connection ID.
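The handshake flow just described — the client picks a random Destination Connection ID, the server can replace it in a Retry and refine it again in its Handshake — can be sketched as a toy model. All names and lengths here are illustrative, not the actual wire encoding:

```python
import os

def client_initial_dcid() -> bytes:
    # The client has never talked to this server: pick a random DCID.
    return os.urandom(8)

def server_retry(client_dcid: bytes) -> bytes:
    # The Retry carries the CID the server wants the client to use
    # from now on (derived arbitrarily here, for illustration only).
    return b"srv-" + client_dcid[:4]

def server_handshake(current_dcid: bytes) -> bytes:
    # The instance handling the next flight may change its mind again,
    # e.g. to encode routing for the cluster's current topology.
    return b"final-" + current_dcid[:2]

dcid = client_initial_dcid()   # random guess
dcid = server_retry(dcid)      # server's preference after Retry
dcid = server_handshake(dcid)  # possibly refined once more
```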
I
That's all in the draft; it's pretty straightforward. There are a lot of moving parts in this particular diagram, but that's every single moving part that we have, and you'll notice that at the end, with short headers, there's only one — pretty straightforward. Next. So the old invariants looked like this, relatively straightforward: you had a bit, and if you had the bit, then you had a connection ID there and a version, and you always had a connection ID. Next slide.
I
The new invariants are a little bit more complicated, because there are two connection IDs and they're variable length, and we do some interesting bit packing for the lengths in the octet that follows the version. You'll also note that we moved the version; the advantage of that being that now the version is in the same place in every one of the long packets.
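The layout described above can be parsed in a few lines. This sketch assumes the nibble encoding from the draft of that era (0 means a zero-length connection ID, any other value means length = value + 3), so treat it as illustrative rather than normative:

```python
def parse_long_header(packet: bytes):
    """Parse the invariant part of a QUIC long header (sketch).

    Assumed layout:
      1 byte  flags (high bit set = long header)
      4 bytes version
      1 byte  DCIL/SCIL, 4 bits each; 0 = absent, else length = value + 3
      DCID, then SCID
    """
    assert packet[0] & 0x80, "not a long header"
    version = int.from_bytes(packet[1:5], "big")
    dcil, scil = packet[5] >> 4, packet[5] & 0x0F
    dcid_len = dcil + 3 if dcil else 0
    scid_len = scil + 3 if scil else 0
    off = 6
    dcid = packet[off:off + dcid_len]; off += dcid_len
    scid = packet[off:off + scid_len]
    return version, dcid, scid

# A version negotiation packet is just this same layout with version 0.
```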
I
When you have variable-length connection IDs ahead of the version, you need to be jumping around all the time, and that's kind of suboptimal, so we moved it. One of the reasons we had the connection ID up front before was so that it would remain in the same place, but as soon as you put a length in front of it, it was no longer in the same place, so forget that. Next slide, please. Now, the short header is almost identical to how it used to be, except for one thing: we removed the bit.
I
That bit said whether the connection ID was there or not. One of the things we can do with this particular design is say that the connection ID is zero length, and voilà, now you have no connection ID — you don't need to have a bit. As a server, for instance, when you say to the client, use this particular connection ID — if you don't want them to send a connection ID, you tell them it's zero length, and that works out pretty nicely.
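A minimal sketch of how the bit-free short header works: the recipient told its peer what connection ID (possibly zero length) to use, so it knows how many bytes to slice off without any flag. The 0x30 flags value is a placeholder, not the real encoding:

```python
def build_short_header(dcid: bytes, pn: bytes) -> bytes:
    # Short header: flags octet, then the destination CID (which may be
    # zero length -- there is no "CID present" bit any more), then the
    # packet number field.
    return bytes([0x30]) + dcid + pn

def parse_short_header(packet: bytes, my_cid_len: int):
    # The recipient chose its own CID length during the handshake, so
    # it knows how many bytes to take -- zero is perfectly valid.
    dcid = packet[1:1 + my_cid_len]
    pn = packet[1 + my_cid_len:]
    return dcid, pn
```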
I
Next — I think we're almost there. Yes, so this is really the main cost of this design. Clients, or in general endpoints that rely on having a connection ID for routing packets toward them, find themselves in a bit of a bind, because the peer that receives a packet can generate a stateless reset based on the design that we have, but it won't be able to generate the correct connection ID to put on that packet such that it will get back to the client that sent it.
I
It doesn't know where the packet came from in the vast cluster on the other side that is potentially being used by, for instance, the client. So what we do now is we advise simply setting a random connection ID, and we acknowledge that this might have some problems. It might be that the stateless reset is now recognizable as a stateless reset, because suddenly this particular flow has a new connection ID appearing on it.
I
That's not so much of a problem if people use NEW_CONNECTION_ID with some frequency, but it's still a potential exposure. Probably the big one is that the stateless reset might not arrive at the right client, and the suggestion here is that, well, stateless reset is only an optimization: if you're forced to use connection IDs for routing on your end, then don't expect to be receiving stateless resets, and just deal with it.
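The bind just described can be sketched like this: an endpoint with no state for an incoming packet can only fill the connection ID field of its stateless reset with random bytes, so the reset may not route back to a sender that relies on connection IDs. The packet layout here is purely illustrative:

```python
import os

def stateless_reset(reset_token: bytes) -> bytes:
    # No state means no idea which connection ID would route back to
    # the sender, so random bytes go where the CID would be; the packet
    # ends with the reset token the peer can recognize.
    fake_cid = os.urandom(8)
    random_fill = os.urandom(4)
    return bytes([0x30]) + fake_cid + random_fill + reset_token
```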
I
I think we're almost at the end — maybe we go to the end and we'll see. Okay, so the PR, which has been merged — actually, all the PRs have been merged — creates the connection ID arrangement and the version negotiation version field. The version negotiation changes work as well, but it's just using the same long header format, so it's not really a fundamental change.
I
It's still just a long packet with a version number of zero. There's a little bit of a gotcha in terms of return routability checks when you do a Retry and the server requires a zero-length connection ID, but that's not a big problem, because you had validation on a previous round trip. The text covers all of this; I don't think anyone has pushed back on any of these changes.
E
I
I
E
I
E
D
I think we're going to close the mic line on this after ekr, and if there's more need to discuss it, we'll continue the discussion on Thursday, because we have two lightning talks that are previewing talks being given elsewhere in the IETF later this week that we want to get to. But one — so, I mean.
R
While as a technical, theoretical matter this does not guarantee the ability to demux — as a practical matter it acts as a fingerprint, and therefore you can demux — so it doesn't require a fresh port. I'm not worried about that; I guess there's, you know, a one in two-to-the-128th chance that you're going to have a problem, but it seems — well, it's better than that, because it does have fingerprints, yeah.
R
So, of course, if we do collide, that would be different. But the next thing I wanted to say — definitely to double-check with you — is that we discussed on the mailing list adopting the version negotiation scheme that I suggested for DTLS in QUIC, where the QUIC version — the primary version — is negotiated by offer/answer rather than by a separate packet. I don't think that breaks the invariants, but we need to make sure it doesn't.
I
R
R
I
M
I
I
D
C
C
I
I
Obviously, anything we do with this document after publication — whenever that publication date is — is still subject to the same sort of thing: we can't predict the future, and maybe we made the wrong decisions and whatnot. But I think the key point here is that while the patient is on the table, things might go in and out that cause us to regret our decision to publish an invariants document, and until there is a published version of QUIC that depends on those invariants, there's not really that much of an advantage. But also —
I
— being able to point to the invariants here and have some sort of commitment to them, that's really what you're looking for: you're looking for people to nail these down such that they don't move around. That's what's been concerning people — implementers in particular, who are trying to move a good chunk of internet traffic onto this — and the churn in the invariants is something they're finding particularly distressing, so we need to give them some certainty if we publish this series.
F
I
C
I
D
O
W
We have a draft in TSVWG, which is packetization layer path MTU discovery for datagram transports. We're trying to lay out standard algorithms for doing datagram path MTU discovery, and we've set this up for SCTP over DTLS, UDP, and tunnels. We'd love to have QUIC involved. Next slide. For this, we require that path layers can signal their MSS, send probe packets, process acknowledgments of probe packets, and have some way to do reception feedback, so they can work around ICMP black holes.
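The requirements above boil down to a simple search loop. This toy sketch stands in for the draft's algorithm, with `send_probe` abstracting "send a padded probe packet and report whether reception feedback confirmed it"; the sizes and step are arbitrary choices, not the draft's:

```python
def search_plpmtu(send_probe, base=1200, ceiling=1500, step=32):
    """Toy packetization-layer path MTU search.

    send_probe(size) -> bool stands in for sending a padded probe of
    `size` bytes and reporting whether it was acknowledged (the
    "reception feedback" the transport is asked to supply).
    """
    plpmtu = base
    size = base + step
    while size <= ceiling:
        if send_probe(size):
            plpmtu = size   # probe got through: raise the PLPMTU
            size += step
        else:
            break           # black hole above this size; stop probing
    return plpmtu
```

With a simulated path that drops anything over 1400 bytes, the search settles on the largest probed size that fits.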
W
With these, none of the stuff we do exposes anything to the path, so it's friendly with QUIC. Next slide. And with this we just have some quick questions for you: we would love to have conversations with people who have data on path MTUs, and we would like to speak to anybody within the QUIC working group who wants to work with us on datagram packetization-layer path MTU discovery. If you want to know more about this, we have our presentation on Thursday afternoon in TSVWG. Thank you.