From YouTube: IETF103-QUIC-20181106-1350
Description
QUIC meeting session at IETF103
2018/11/06 1350
https://datatracker.ietf.org/meeting/103/proceedings/
A: This is the Note Well — I hope you're familiar with it by now, but if you're not, these are the terms under which you participate in the IETF, regarding things like the code of conduct and intellectual property. Please have a read; you can find more information on the IETF website. That's — no, we're not using that.
A: We have the blue sheets — check. We have a scribe — check. We have [unclear] — check. We're ready to go. So, our agenda for today: we're going to have a hackathon report, briefly, from Lars, and then updates from the editors, including updates on the ops drafts, which we need to start paying more attention to. Then we're going to discuss a few brief issues.
A: And then, after that, we'll have a quick discussion of the human rights review document that came across the doorstep and, as time permits, talk about making QUIC deployable, then QUIC load balancing from Martin Duke, and finally a QUIC unreliable datagram extension from Tommy Pauly. We have another session tomorrow, and that session is almost entirely devoted to the spin bit. So if you've come here expecting the fireworks, you'll have to wait a little bit longer, until the end of tomorrow.
G: [That] will be one big part of that discussion, so we're not quite done yet, but that'll happen tomorrow. In terms of interop: as usual, we had a table during the hackathon — we actually had two tables this time — with people that are implementing QUIC, and in pairs we were testing the current implementation drafts. I think we had around ten implementations participate in some form.
G: It's looking better in terms of getting back to at least basic interoperability — version negotiation, handshake, data exchange, and closing. There was a big disruption when we redid the stream-zero thing, and we're still seeing the after-effects of that: there's a bunch of implementations that haven't moved to one of the new versions that had to refactor stream zero.
G: There are very few stacks that are doing it, and I think there are very few stacks specifically — the ones that are going for a large-scale deployment — that are currently doing it. That's a bit concerning; it's one of the reasons why the milestones are probably unlikely. But here is the spreadsheet, and greener is better — basically, darker green is better and more letters are better. Clients are down vertically, servers are across horizontally, so you can see.
G: There's some white; I just tried to move the implementations that haven't shown up to the right, to concentrate the information a little bit, but there's still some white and some very light green, and that's not super good. But it's looking better, and I think the trajectory is there. We're going to do another interim meeting, with an interop event before it, in January. Hopefully it's going to look a lot better then, and then by Prague [unclear].
G: We should be interoping on around draft-17, probably already in Japan, but I really hope that on 17 we're going to see a lot more [green]. I know Alan wanted to say something specifically about interop around HQ — I don't know if you're in the room or remote, or if somebody else wants to say something in his stead. I don't feel qualified to summarize where we are with that, but I think we were somewhere [unclear].
G: So the hope is that draft-17 will be the thing that we're going to let sit and simmer for a while, and let the implementations actually catch up and do some much wider-scale interop — with the caveat, as I just said, [unclear]. On the recovery draft specifically, I think we're still trying to get more help from the TCP guys, to make QUIC recovery as compatible and comparable to what TCP is doing as possible. And so, I think, for recovery —
G: We are probably going to see some changes that are not necessarily only driven by interop. However, recovery is complex, but it's not actually a whole lot of code that needs to change for it. So hopefully changes to recovery will not cause — they shouldn't cause — interoperability issues to begin with, and hopefully they also don't disrupt the implementations too much. Christian?
J: When I look at the matrix, as you say, there's a bunch of advanced features that are not tested at all. Like, we have two academic or prototype implementations that do migration, right, and pretty much nobody else does, and that's in the test matrix. We should seriously consider at which point we decide: okay, the reason nobody does it is probably because the spec is too hard or something, and just cut it for v1. We should — and my —
G: My feeling — I'd put it this way — is that we haven't seen a lot of interop around that because the rate of change to everything else is so high that, as an implementation team, you're going to spend a lot of cycles basically on keeping up with everything else that gives you basic interop, and people just haven't gotten to that yet. If it remains the case that people aren't getting to it in the next couple of weeks and months, I think we should definitely have a discussion on what to do with those features.
K: The data tells you you should help other people with their implementations — [the queue] is closed now, so, I mean, feel free to tell me I'm not in order and that you want to raise another point. But the thing I notice about this graph, and also about my experience generally — and I've written this before — is that things that are easy to test get tested, and things that are hard to test don't get tested, even if they actually might be implemented. And I particularly — I live in particular fear of recovery.
K: But even — like, my experience has been — I mean, there's basically no difference between 14 and 15, and I updated to 15 in, like, five minutes, and was just too lazy to sit around and try interop out with everybody — and that's the easy case. So I guess I would be up for — and I know [unclear] — trying to figure out some set of arrangements.
K: — that would make it easier for people to test things, and I guess eventually that lets us test recovery. So, like, my ideal would be: I could just take my implementation, put it in a Docker image, and drop it into a test harness, and it would come back with this matrix. Maybe that's too fancy, but — so I think, if other people are like-minded about this, maybe we could.
E: I'll just note that this is not a complete set of all the features of QUIC. — Of course, no, I know. — I'm just wanting to — I think I should have started with that — I'm responding to Christian's point: we are nowhere near the point where we are deciding, or even thinking about, what to cut out of this stuff. So I don't think we should be going down the path of discussing that right now.
G: To give it credit, it's a good topic — how to automate some of this. Some of us have some automation for our particular stacks, but there's nothing that generates that whole matrix, and that would be very useful. The other thing that would be — so, getting something like a packetdrill suite is difficult for QUIC, because it's encrypted, but that would be very useful too. So there's some discussion that maybe we should have, around one of the interop events, about what we can do.
K: I guess I would think before then, because otherwise we'll spend the [event] on writing that test harness. So, you know, again, I'm going to raise my hand: if there are people who'd be interested in working on this, come find me, or raise your hands now or something, and we'll try to get together —
K: — and do something. I'm really super flexible about what we do, but I'd like to have something happen, because, like I say, it's just incredibly expensive, and I'm really quite worried about — I mean, there have got to be, like, nine thousand recovery cases we have to test, and every single one of them needs testing, and I don't see how on earth we do that manually.
B: Interesting, mucking around here — alright. So, what happened in 15? This is since the last time we met as a working group, at the interim. 15 has quite a number of changes; there are a few other minor ones that are not in this list, but these are the ones I think are notable. Even the first one's not really a change — I think we changed some packet numbers as a result — but we merged the two frame types into one.
B: We now insist on validating the presence of the retry, and having a server prove that it was the one that saw it — that's new. What's also new is that there's only one of them. Obviously, that design doesn't work particularly well if there were multiple retries, and there were a number of other reasons we got rid of multiple retries. We had the design team come back with some recommendations about how to deal with connection IDs and their ongoing management, and added a frame to deal with that.
B: There are also some changes in there to prevent deadlocks — I think [a later] talk goes further into that and other consequences of having that design, so we're likely to see a little bit more change in 17 around that one, maybe. HTTP and QPACK both had kind of major changes — settings and priorities — and this is the one that changed how the largest reference is encoded:
B: — modulo max entries. I think I lost that one; that's awkward. And then we have a new static table, which I don't think anyone's really all that happy with, but it's there. The next slide has all the changes in 16 — right, you'll notice I've got the table of contents in the next slide, which is probably the more interesting one.
B: Let's go — HTTP got a new frame structure as well. I'm not sure that's all that different to how it was before, but this is what it looks like; it was cleaned up a little bit and reworked for reflow. A few other parts are having a similar rework done, but that hasn't landed yet — I don't think we'll see that in 17, or it'll come along with 17. This is what it looks like at the moment; it may shift around a little bit as that PR [lands] — I think Alan's almost finished with that one.
B: So we should see that relatively soon. Next — okay, 17, fun things. So we are going to change the layout of the first octet in both the short and long header forms. Those changes had been held, based on a number of things. We had the great Google QUIC migration event occur, and that allowed us to reclaim a couple of the bits that we had been holding in fixed positions during that transition. That seems to have been successful, based on the reports that I heard from them.
B: Some people noticed some interesting problems with the way that we were doing stream limits and how they interacted with blocked [frames]. We went back and looked at it and thought that probably the best thing to do is to completely restructure the way that we grant new streams to a peer, and so we're doing that based on counters rather than the maximum identifier, which makes them much more like the flow control than they were previously. It just means that we have more frames now.
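The counting-based scheme described above can be sketched as follows. This is an illustrative model (not taken from any QUIC stack; the `StreamLimiter` name and API are hypothetical): a MAX_STREAMS-style limit carries a cumulative count of streams, like flow-control credit, rather than a highest permitted stream identifier.

```python
class StreamLimiter:
    """Toy model of counter-based stream limits, as adopted in draft-17."""

    def __init__(self, initial_max_streams: int):
        # The limit is a count of streams the peer may open, not an ID.
        self.max_streams = initial_max_streams
        self.opened = 0

    def try_open(self) -> bool:
        # Opening is allowed while the cumulative count stays below the limit.
        if self.opened < self.max_streams:
            self.opened += 1
            return True
        return False  # the peer would send a STREAMS_BLOCKED-style signal

    def grant_more(self, new_max: int) -> None:
        # Like MAX_DATA, the advertised limit only ever moves forward.
        self.max_streams = max(self.max_streams, new_max)


limiter = StreamLimiter(initial_max_streams=2)
assert limiter.try_open() and limiter.try_open()
assert not limiter.try_open()   # blocked at the limit
limiter.grant_more(3)           # a new, larger MAX_STREAMS arrives
assert limiter.try_open()
```

The design choice this models is exactly the one mentioned in the session: moving from "maximum stream ID" semantics to credit-style counters makes stream limits behave like the other flow-control limits.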
B: I'm going to go over that in a little bit more detail — I've got a separate deck on that one. Okay, I'll go through that change afterwards; I've got the conclusions of the discussion on those slides, and I'll just walk through them in a little bit more detail. I renumbered everything, which Lars was super happy about. I also put the E's in RESET_STREAM, for all those people who have been irritated by that, and there's a bunch of other things that we're going to be talking about here.
K: I mean, my ask was going to be a little more time than that — a month seems a little more appropriate, given the time of year. Basically, given there have been over 20 changes, I won't have time to read the spec before [the freeze] and make sure I feel good — make sure I have my head wrapped all the way around it. What would you —
B: That's fair, and so the point here is: this is your warning. We're going to start that process now. I don't think it's going to be hard to push it to the other end of it; it's just that we want to make sure that, come some reasonable time before we meet again in Tokyo, things are stable and we can start to talk about things like deployment.
K: Like, you know — because we only have these partial implementations, and because every time we do an implementation you find infelicities — I don't want to put us in a position where the future's better than the past, and where things that just suck, but aren't actually defects, are things that we say, oh, we can't fix, because we declared a freeze. Right, yeah.
B: And, of course — all right — bugs will continue to get fixed. You have another one on the next one — I think you have one more — yeah. The idea is that at some point very, very soon, we'll have a protocol where we need pretty strong justification to make changes to things that will affect interoperability.
B: My suggestion is that if we get issues opened that are sort of speculative, or suggest improvements, we're going to be a little bit more aggressive about closing those sorts of things — politely. But if people do want to open new design issues because they've found bugs, I definitely want to see those raised. And if you're not sure whether something has been discussed or not, maybe discussing it with the list before you open issues might be the best way to do things.
A: Okay, so — are people reasonably clear on that? It's not that we're saying we're going to shut it down and not take any more changes, but we need a period of stability to get some assessment from the implementations. In an ideal world, we will prove that what we have written down is a great specification, and then we can move forward. If we find problems, then we'll deal with that. Yeah.
B: But then, one of the sentiments I heard expressed was that, because things have changed quite a lot, quite often, it's been very hard to get to the point where you've got an implementation that is good for deployment, and one thing that we're kind of lacking is experience in deployment. I don't want to ship this thing as an RFC without considerable deployment experience, and so we need to get that process started pretty soon. Right.
B: All right, so this is the outcome of the discussion in New York, with colors and letters. The long header packet will look like the first one: we have the first bit saying that it's long; we have what I'm dubbing the QUIC bit in the next one; we have two type bits that allow us to distinguish between the four different types of long headers that we have; and then the rest of this is encrypted. The orange here is where we're using the packet number protection scheme. The short header —
B: We have the short-header bit, the QUIC bit; we might have a spin bit; and everything that's not those things is encrypted. Under encryption, in both of them, we have a packet number. The short header will have a key phase bit, and that means that the packet number encryption key won't rotate as the keys rotate on key updates. Thanks, Kazuho. Next — so, the common part, as I said.
B: The other thing is the spare bits: when I say they need to be 0 — maybe you can negotiate their use afterwards, but 0 will be the default, and, in the long header, that probably means that 0 will be the value. We're encrypting them, so they'll look random on the wire. Okay — must-be-zero-and-ignore, or must-be-zero-and-check? Must be zero, must check.
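The first-octet layout being discussed can be illustrated with bit masks. These positions match the layout that was eventually standardized (RFC 9000); they are shown here only to make the discussion concrete, and the helper names are my own.

```python
# First-octet fields of a QUIC packet header (per the eventual RFC 9000).
HEADER_FORM = 0x80   # 1 = long header, 0 = short header
FIXED_BIT   = 0x40   # the "QUIC bit": must be 1
LONG_TYPE   = 0x30   # long header: two type bits (Initial/0-RTT/Handshake/Retry)
SPIN_BIT    = 0x20   # short header only, sent in the clear
KEY_PHASE   = 0x04   # short header, under header protection
PN_LENGTH   = 0x03   # packet number length minus one, under protection


def is_long_header(first_byte: int) -> bool:
    return bool(first_byte & HEADER_FORM)


def long_packet_type(first_byte: int) -> int:
    return (first_byte & LONG_TYPE) >> 4


# 0xC3: long header, fixed bit set, type 0 (Initial), 4-byte packet number.
assert is_long_header(0xC3)
assert long_packet_type(0xC3) == 0
assert (0xC3 & PN_LENGTH) + 1 == 4
```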
M: David Schinazi, Google. This makes me uneasy. The first point was what [ekr] was saying: the type field is kind of a registry, where today we only have four values in there, but it was kind of nice to have it extensible, and with two bits we're not going to be able to fit a lot more.
M: Today we have frame types that are varints; we don't limit ourselves to, like, five frame types, you know — anyway, so we have extensible frame types. Okay, so that's the first point. Second one: I feel like we're burning two bits on the packet number length that are already encoded in the packet number itself, I guess.
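For context on why those two length bits matter: QUIC packet numbers are sent truncated, and the receiver reconstructs the full value from the explicit length and the largest packet number it has acknowledged. The sketch below follows the decoding algorithm that ended up in RFC 9000, Appendix A; it is offered as background, not as something presented in the session.

```python
def decode_packet_number(largest_acked: int, truncated_pn: int, pn_nbits: int) -> int:
    """Recover a full packet number from its truncated wire encoding
    (algorithm as in RFC 9000, Appendix A)."""
    expected = largest_acked + 1
    pn_win = 1 << pn_nbits          # window covered by the truncated bits
    pn_hwin = pn_win // 2
    pn_mask = pn_win - 1
    # Combine the high bits of the expected value with the truncated bits,
    # then pick the candidate closest to the expected packet number.
    candidate = (expected & ~pn_mask) | truncated_pn
    if candidate <= expected - pn_hwin and candidate < (1 << 62) - pn_win:
        return candidate + pn_win
    if candidate > expected + pn_hwin and candidate >= pn_win:
        return candidate - pn_win
    return candidate


# Worked example from the RFC: 16 truncated bits, 0x9b32 received when the
# largest acknowledged packet number is 0xa82f30ea.
assert decode_packet_number(0xa82f30ea, 0x9b32, 16) == 0xa82f9b32
```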
L: I think we're going to want to do peer-to-peer QUIC, which means we're going to need NAT traversal, which means we either need to have an invariant that we can demux against STUN, or we need to reinvent that functionality in QUIC. I think it's much easier to have an invariant that we can demux just based on the first bits — which we have, if we stick with those first two bits.
E: I want to say that — the whole notion of invariants is of things that we chose to keep fixed so that we wouldn't ever change them, and it's a minimal set. There are things that can naturally become invariant just because that's how our deployment works sometimes, and I'm fine with STUN demultiplexing being one of those; I don't think we need to codify it.
K: I don't think I'm in favor of that, for several reasons. First of all, there are going to be an enormous percentage of the QUIC connections in the world that should never need to be demuxed against STUN — like all the ones between web clients and web servers, basically. So that's point one. Point two is: the primary need for demultiplexing is at the endpoints doing the demultiplexing, and the primary purpose of invariants is for people who are not part of the connection to be able to do things — or not —
K: — do things with the packets. And the endpoints are perfectly capable of arranging to only use versions of QUIC that [can be demuxed from STUN] when that is needed. So I don't think — the purpose of invariants is not to nail down design decisions in perpetuity; it's to instruct people who are not part of the connection how to manage the packets.
F: Hi, I'm Mirja. So, we updated both of these drafts, and we can go to the next slide. The manageability draft: we updated it based on issues we had in GitHub. It got one new section; we added some illustrations — some ASCII art for the QUIC handshake — and we added some text on use considerations. There are no open issues in GitHub on this document anymore.
F: Of course, we'll have to update the document to align it with the protocol specification at the very end, but this is in pretty good shape. Next — there were a couple more open issues on the applicability draft, and we did a lot more work: we have three new sections, which got written, reviewed, and merged into this version.
F: As I said, we have to rework a couple of things anyway when the transport draft changes, so we can merge it and change it later, or we can hold it — I don't know; it doesn't really matter that much. Okay, next slide.
F: Unfortunately, we have some open issues, and we need a little bit of help there. Some of these things are just questions about what to do, and whether you want to do it at all, and for some of these things we actually need people writing text, because we might not have the best expertise for that.
F: So there's issue 11 and 29, which are still talking about giving more guidance on how to create your connection ID, because you can encode information in the connection ID for the network. There was a proposal — or, like, a report, like a proposal — to make a recommendation that you should include a MAC in the connection ID, so an on-path device can also check that that information is actually correct. That has a question mark, because there was some discussion and it was not clear —
F: — whether there's something you want to recommend, or whether it is needed, or whatever; so more input is definitely needed here. We addressed issue 29 to some extent, but there's another question of whether we need more guidance. There is an issue open — that's issue number 14 — asking if we need more guidance about application-visible errors: what should applications do with those errors? I don't think there are many errors.
F: There is a question of whether we want to explain the idea of the hack: that if you have a message-based application protocol, you could send one message per stream, and then you can cancel the sending if you don't need this message anymore, which would enable some kind of partial reliability. So, I mean, this is a way to use the QUIC protocol, and it's a hack to get some partial reliability, and the question is: do we want to spell it out, or is it something we don't want to recommend?
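The "one message per stream" hack described above can be simulated with a toy sender. This is a hedged sketch, not from any draft: the `MessageSender` class and its methods are hypothetical, and `cancel()` stands in for what a real stack would do by sending RESET_STREAM instead of retransmitting lost STREAM frames.

```python
class MessageSender:
    """Toy model of partial reliability via one message per stream."""

    def __init__(self):
        self.next_stream_id = 2
        self.active = {}  # stream id -> message still being (re)transmitted

    def send(self, message: bytes) -> int:
        # Each application message gets its own unidirectional stream.
        sid = self.next_stream_id
        self.next_stream_id += 4  # stream IDs of one type step by 4 in QUIC
        self.active[sid] = message
        return sid

    def cancel(self, sid: int) -> None:
        # If the message has gone stale, stop retransmitting it — in QUIC
        # terms, send RESET_STREAM rather than recovering the lost data.
        self.active.pop(sid, None)


sender = MessageSender()
old = sender.send(b"position update #1")
new = sender.send(b"position update #2")
sender.cancel(old)  # only the fresh message remains reliable
assert list(sender.active.values()) == [b"position update #2"]
```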
E: I think it's completely reasonable to do, for what it's worth. I don't think we need to recommend this; I don't think we need to say anything about it. I mean, people who want to do partial reliability know exactly what it implies, and I think they will do it. And, by the way, when we do this, I don't think anything breaks.
K: I didn't look at the text, but I think — like, I get that the working group has decided not to take on partial reliability as a work item just yet, but we know how to do it; we have drafts. Let's not document things for which we know there are better ways.
E: So — to be clear — I don't think we should really get into the business of trying to figure out what applications should not do, especially on things like this. API use is always something where we figure out how applications use it later. But if somebody thinks there's a problem in doing this, I think that would be good to document.
K: I guess — I mean, if you're going to engage me here, I'm going to have to do that, I guess I will — but what I'm saying is that if these documents have force — like, this document has force, even moral force, or instructional or educational force — then it ought not to do things that we know are not advisable. And the fact that — I think there are a bunch of people here who have been working on something which obviously is —
C: Spencer Dawkins, outgoing transport area director responsible for QUIC. Partial reliability and FEC and stuff like that are still explicitly out of scope for QUIC as it is chartered today. So that probably has a great deal to do with why the working group stopped working on it — which is good; hopefully we should talk — but that's the mystery of why it's not being chartered: it's because it was explicitly excluded in the current charter, I believe.
P: Yeah — Ian Swett, Google. I want to add that I think it's fine to say nothing in this regard, especially given Spencer's comment; I think that's totally acceptable. I also want to argue that this totally works: you can deploy this in a fairly large application — use, like, a million streams, and make them all unidirectional, and make them unreliable — and it totally works. You can run a VPN over it; you can do a whole bunch of other stuff over it.
G: I mean, one argument would be that we are going to, I think, in QUIC version 2 — well, in the next version of QUIC in the IETF — work on partial reliability; it's specifically been proposed as one thing to work on. So it seems kind of weird to have an ops draft give a way to do it, and then have another way to do it in a future version of the protocol.
E: I'll agree with that. I think there's a problem here. The problem is that when we talk about partial reliability as a noun, it's not clear what one is talking about, and we end up having to describe it — and that's the problem. Even the thing that was excluded from the charter is quite different from what this is; both of them are "partial reliability", and that's where the problem lies.
Q: Tommy Pauly, Apple. So I'm going to agree with what everyone's saying — that we shouldn't be mentioning the partial reliability hack in here: partly because we want to move on to other things, and partly because adding it kind of sets a precedent — if we include this, could we include all the other wacky things you could do with the protocol? It could be essentially an unending list. Now, on the flip side —
Q: I think things like how we handle errors, and how applications should deal with those, are things that applications must face and must have some behavior for, so I think adding text around the things that are really part of this is important. So let's focus on the bits that we do need to give guidance on, that are really current concerns. Okay.
C: Spencer Dawkins, responsible area director, talking. What I am curious about — and my curiosity does not have to be satisfied here — is, if people are already going to be doing partial reliability with one message per stream this way, what the incentive for doing partial reliability some other way might be. And, like I say, that may be a conversation you all can have with me after the plenary.
C: But, you know — I mean, anytime; I don't want to spend working group time on it. All right — but as the guy who, at least theoretically, would be handling publication requests for QUIC, and for stuff like this, at least through March, I'm kind of curious.
S: So, simply, I'd put it the other way: let's create another document that covers the extensions of QUIC, and the operations and manageability considerations for those extensions, and then make a forward pointer that says "go there" — give your PRs there when you have extensions. Because I think the other thing that you end up risking — and the point that I think Spencer was making —
F: — is that any sort of manageability draft should make it very clear that we're only discussing this specific version of QUIC, because, you know, we don't know what other versions will look like. And maybe we can be more explicit about this in the applicability draft too, but I think the same applies, because there are things we discuss which are not invariants — they are very version-specific. So we can only discuss the specific version. I mean, like — yeah.
E: I was just going to say that I was going to put Martin Duke on the spot here a little bit, by saying that he's been doing some thinking on the first one — a fair bit of thinking on the first one — in terms of its interactions with load balancers and various other things, and it might be valuable to sort of figure out, not right now, but as that draft progresses, as that effort progresses —
B: Yeah — so Martin also knows; having a back channel, he would tell me about this. On the second point here, my assertion is that if we don't specify error handling for the application-level errors in the application protocol, we have failed, and so having them in this document would be redundant.
B: I would rather have them properly specified in those documents. So, Tommy asks whether the transport-level errors would elicit reactions from the application protocol as well, and that's an API question. I'm not sure if I care about that; but if I were to care about it, I would expect the application-level protocol to precisely specify what actions it expects endpoints to take in the event of receiving particular errors.
F: I would like to disagree with that — on your first point: so far, yes, for the HTTP mapping you have to discuss those things in the HTTP document, but QUIC is a general-purpose protocol, and I don't think we will specify a mapping document for every single application we find out there. So giving some more general guidance might be helpful as well.
F: It's an interface question. Should we — because it can happen that, even though you configured your connection ID to not show up in the packet, you might still have it in the packet for, for example, path MTU discovery purposes — should we warn the application that this can happen?
F: Okay, we have some more issues; I put them on a separate slide, because those open issues are more about, like, what interface — what things should be exposed over an interface, and can we give any guidance. And, as we don't have an actual abstract interface document, it's not clear what we should do about these questions. For example, the first one: in theory, it's possible that you send 0-RTT data and the receiver doesn't support 0-RTT, and tells you one round-trip time later.
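The choice being debated — what a client should do with data it sent in 0-RTT once the server rejects it — can be sketched as a policy function. This is purely illustrative (the callback API is hypothetical, not from any draft or stack): the library can either replay the same data over 1-RTT, or hand the decision back to the application, which may regenerate or drop requests that have gone stale.

```python
def handle_zero_rtt_rejection(pending, regenerate=None):
    """Return the data to (re)send after a 0-RTT rejection.

    pending    -- requests originally sent in 0-RTT
    regenerate -- optional application callback producing a replacement
                  request (or None to drop it), for applications whose
                  requests may no longer be current
    """
    if regenerate is None:
        # Simple library policy: replay everything unchanged over 1-RTT.
        return list(pending)
    # Application-specific policy: rebuild each request, dropping stale ones.
    return [r for r in (regenerate(req) for req in pending) if r is not None]


pending = ["GET /live-score"]
# Default: automatic replay.
assert handle_zero_rtt_rejection(pending) == ["GET /live-score"]
# Application chooses to refresh the request instead.
assert handle_zero_rtt_rejection(
    pending, lambda r: r + "?fresh=1"
) == ["GET /live-score?fresh=1"]
```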
J: [Christian Huitema.] I see a pattern there, because you're worried about trying to make a general discussion of things that really belong in the application mapping. Yeah — I can easily see different applications making different decisions there, and I think it does not belong in a generic document. The purpose of what we do is to ensure interop, sir.
F: That's — I mean, that's the whole question. If there is no general recommendation you can give, it doesn't make sense. If you can say "this class of application should do this, and that class of application should do that", it makes sense to give some guidance here. And I think, for the first point, it clearly is right:
F: If you are an application where — for example, if you send a request, and you might be in a situation where this request might not be up to date anymore and you want to send a new request — then you don't have to retransmit the data; you can send new data. There's a general class of applications that can do that. I really —
K: I think this one at least — and I'm not taking a general position — is application-specific. In TLS, we concluded that you had to write a new mapping document for how to handle 0-RTT for every single application, so I think probably the same should apply here as well for QUIC, because QUIC has approximately the same 0-RTT properties as, you know, other applications — I know they're worse.
P: I suppose — to echo ekr's point as well — an application might choose to do both behaviors. For instance, if the ALPN matches after rejection, it might want to automatically retransmit; however, if the ALPN is not matched, it might want to do another thing. So an HTTP application might want to do something different if it supports two different ALPN types, I mean.
F: This is totally — this is totally understood. The question I have is: if you have this interface — you have a QUIC application, right, and you have a library that provides this interface — is it so obvious what you can do with this interface that we don't have to recommend anything, because everybody will understand it? Or is there something we need to tell people about how to use this interface correctly, for whatever application they have? I mean.
P: A more valuable thing would be the considerations for an interface between an application and the QUIC transport — like, these are the options you might need to provide — and, depending on the other aspects of what the interface provides, implementers can choose to express this however they want: for instance, use different words for it, or something like that. Like, Apple might have "request idempotency"; another thing might have some other concept which maps cleanly to these properties. What might be better is, like, the general considerations of —
B
Really quickly, Martin Thomson. I was actually going to make that point. I was gonna say: all these issues, we have not had any discussion on the list, so we're not really prepared to have a good discussion here. Thanks to Mirja for actually putting them up on the slide so we can see them, but I'd rather not discuss any of them today.
F
I have one more general question here, because we've been talking about having a separate document that actually discusses an interface, or an abstract interface or whatever. And yeah, you have a mapping there, but I'm actually talking about these kinds of configuration knobs or whatever, which could be described somewhere, and I...
F
...don't see this coming up. That's why this is like a classic slide, because I'm not sure what to do with this. I will not wait for this document anymore, because I don't think it's suddenly showing up, so I guess the approach would rather be to, you know, assume more and make a recommendation: if you have this interface, you can do it right. That makes sense; I think that's kind of what I just said. So, okay, thank you. Okay.
A
H
B
I'm not gonna take any questions or comments on this one; I think I just want to take clarifying questions only, basically, because we've had a lot of discussions about this one. The slides here are rotten, but they do contain some interesting things that allow me to describe the problem that we're talking about. So the problem here is that we have the potential for a...
B
B
Well, you end up stuck with the old version, and you can sort of get around this by remembering things: if you talked to the server and it supported a particular version last time, well, maybe you'll try the newer version next time. It's a little more complicated; you end up in some sort of suboptimal arrangement.
B
People thought, of course, that all servers would have it, but that's not a guarantee, and so we're exploring a couple of options around solving these problems, and these slides covered one of the options that we discussed. I don't think we're done with that discussion yet; I think I've heard about four or five new ways of sort of approaching this problem.
B
Part of the problem that we wanted... one of the solutions was: do nothing, even for the first one, which is kind of a little bit odd, but if you think it through, it makes a bit of sense. So I just wanted to make sure that people understood that we're grappling with this one at the moment. It's likely to take a little while before we sort it out, but this one's pretty high priority for those of us involved. If you have...
C
B
...ideas that you'd like to add to the pile: I have a pile already, and we'll be sending a note to the list with the list of the things we're considering and the reasons why each one of them might be good or bad. Ekr has already started a list, which is a pretty useful thing, so expect something very, very soon. I hope to solve this in -17, but we'll see how that goes. One thing: several of the proposals change the invariants, so I'm a little leery about those.
N
B
B
You could still have a situation where you flip the flag that enables the new version in between the time when you sent the version negotiation and when you continue the handshake, and the server doesn't maintain state between those two points. So, between the point where it sends out the version negotiation and the point where it sends out its flight of handshake messages, it can change its mind about which versions it supports, and of course you can do things like take the server out of rotation and all sorts of other things like that.
T
B
B
B
...minutes before the meeting started, if I recall correctly. I thought you'd got them in earlier; I'm sorry, Mark, I thought... yep, that's terrible. Here, we'll probably have HTTP issues we can discuss in the meantime. Well, how about, chair... so let's give it a second. I'm almost there, I think.
O
...so when I go back and compare what the notable changes have been, it basically boils down to: everything we agreed last time that we were going to do is now in the draft. So go back and look at last time's presentation, and that explains the changes. Next: a couple of smaller issues that have come up since the interim, in terms of exactly how we want to handle things in HTTP.
O
We have this nice spectrum of how we get the two SETTINGS frames from client and server and what the relationship between them is. Right now, when you set up a connection, the first frame on the control stream in each direction has to be a SETTINGS frame. The client is not supposed to interpret any of the server's responses until it's seen the server's SETTINGS frame, and the server is not supposed to send anything on a response stream until it's seen the client's SETTINGS.
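The rule described above, that the first frame on each control stream must be SETTINGS, can be sketched roughly as follows. This is an illustrative sketch only: the class name, the frame-type constant, and the payload shape are assumptions for the example, not text from the draft.

```python
# Illustrative sketch: enforce that the first frame received on the peer's
# control stream is SETTINGS, and record the peer's declared settings.
SETTINGS = 0x4  # assumed frame-type value, for illustration only

class ControlStream:
    def __init__(self):
        self.settings = None  # None until the peer's SETTINGS arrives

    def on_frame(self, frame_type, payload):
        if self.settings is None:
            if frame_type != SETTINGS:
                # First control-stream frame was not SETTINGS: fatal error.
                raise ConnectionError("first control-stream frame must be SETTINGS")
            # payload here is an iterable of (setting_id, value) pairs.
            self.settings = dict(payload)
        else:
            pass  # process subsequent control frames normally
```

A receiver built this way naturally cannot act on the peer's declared limits until that first frame has been delivered, which is exactly the head-of-line-blocking concern discussed in the turn above.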
O
O
So in the declaration style, you send the SETTINGS frame as a statement of your own capabilities, and that's it. Kazuho had raised the possibility that we don't actually have noticeably more head-of-line blocking if we make this a full negotiation where you have to see the other side's SETTINGS frame: whoever speaks first, you have to see their SETTINGS frame before you can generate yours. That gives us the advantage that you can have a full offer/select negotiation, but it departs more from the HTTP style. On the other extreme...
O
...if you go to pure defaults, then we eliminate the head-of-line blocking that currently exists, so we can get out of the head-of-line-blocking box. But that has the drawback, which is true of the current system, that when you get that second SETTINGS frame, there's no clear marker as to when it takes effect. So you have some level of "well, this has always been true, but maybe I didn't know it was true."
O
B
So, Martin Thomson. I'm gonna speak out against the two changes, although not too strongly against the defaults one. With the full-negotiation one, you have this situation where one side makes the offer and the other one makes the answer, depending on whether you're doing 0-RTT or not and whether you've accepted it, which gets a little weird. The client must know the server's settings before generating settings for it, which...
B
...it remembers the previous ones. So you say the server always goes first, right? It always responds, correct. Yeah, that's a little weird from the perspective of the arrangement that we have, so I'd very much prefer not to have that one. I'm somewhat okay with having defaults for the purposes of alleviating the head-of-line blocking, as long as those defaults are really, really simple.
N
B
B
O
E
Jana Iyengar. I think, basically, exactly what Martin said. Given that the handshake packets are already head-of-line blocking, this is just one more packet after that, and I think it's completely reasonable. I'll leave with one question: do we have a use case for the full offer/select negotiation that is pressing enough to sort of try and push it this way? I know Kazuho's gonna come up and say something to that, so I'm asking the question and I'll walk out.
T
There's an actual use case for full negotiation, and that is that the endpoint can wait for the peer's signal about particular settings, for example, and decide not to create a QPACK encoder stream or a QPACK decoder stream, and that helps memory-constrained devices. And my argument for full negotiation is that in the current form we already have the head-of-line blocking for the streams that send requests, so having full negotiation just moves that block to when you send SETTINGS.
T
P
Subodh Iyengar. I'm kind of worried about... given the current state of things, I'm definitely opposed to offer/select semantics, but I'm kind of also concerned about the current state of affairs, where you have to wait for the SETTINGS frame. I know there's been a lot of discussion and a lot of considerations around this. I'd much prefer the defaults mode, because with the SETTINGS frame, we've said that it has head-of-line blocking.
P
We have no idea, for instance, right now, because it requires you to send, like, coalesced packets and 0.5-RTT data, which is more complex to do, and some of the implementations might do it and some of the implementations might not, but by default you won't get the frame in. And then in HTTP you have this kind of defaults mode right now, where the client speaks first before seeing the SETTINGS frame, so you know that kind of works with reasonable performance.
P
The other point I wanted to make is that having defaults might also unlock the ability to use the defaults in 0-RTT as well. Currently, when we're looking at implementing the API to store transport parameters, or sorry, QPACK parameters and other SETTINGS frame parameters, for 0-RTT, doing something like that is kind of painful. It is doable, but it's definitely not for the faint-hearted.
P
So in that sense it might make it simpler for people who want to use defaults and who have not seen a SETTINGS frame to say: I will get a reasonable amount of performance, but I won't get the extra performance I'd get by caching the SETTINGS frame, in case those values are bigger than the default values.
I
Alan and Dmitri on Jabber would like to say that in both implementations they've decided to default the QPACK table size to zero if they need to send a response before SETTINGS has arrived. And as Eric Kinnear, myself: there is some attraction to getting rid of the head-of-line blocking and doing stuff with the defaults, but again, if that puts us into an awkward case where, you know, yes, you've gotten rid of the head-of-line blocking, but you're now effectively head-of-line blocked on doing anything useful anyway, does that really help? So...
B
...requests. So, I'm hearing a lot of people saying defaults are okay, but I just don't know how to implement them. So if we could see how that would be implemented, then that would be great, but I don't know how to do a default for header block size, for instance, because I know that someone's gonna want something smaller than whatever default we pick. And...
C
B
...different rules for different settings, which is scary, but not totally insane. To Martin's point: if you have things that you can set reasonable defaults for, and that brings you back to head-of-line blocking, does that mean that for cases where you didn't need any of those values, or where that wouldn't have hampered you, there is some percentage of the population that is now unblocked and everybody else is just in the current status quo? Because that may be worth the complexity of doing it; it may not be.
P
B
O
Right, yeah. So if I can interject to remind folks: in h2 we have defaults that exist until the SETTINGS frame arrives, and when you get the SETTINGS frame you could discover reality is lower than the defaults, and so you have to be able to deal with any setting reneging on itself, and there are cases where that gets really ugly.
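The "reneging" problem just described can be sketched in a few lines: an endpoint acts on an assumed default limit, and the peer's real SETTINGS may later turn out to be lower than what has already been committed. The class name, the 4096 default, and the single-limit model are assumptions for illustration, not taken from either draft.

```python
# Illustrative sketch of acting on a default limit before the peer's
# SETTINGS arrives, then reconciling when the real (possibly lower) value shows up.
DEFAULT_TABLE_CAPACITY = 4096  # assumed default, for illustration only

class PeerLimits:
    def __init__(self):
        self.table_capacity = DEFAULT_TABLE_CAPACITY
        self.used = 0

    def reserve(self, n):
        # Commit resources against the limit we currently believe in.
        if self.used + n > self.table_capacity:
            raise RuntimeError("over peer limit")
        self.used += n

    def on_settings(self, table_capacity):
        # The peer's real limit may be lower than the default we acted on;
        # returns False when we have already overcommitted and must unwind.
        self.table_capacity = table_capacity
        return self.used <= table_capacity
```

The ugly part is what `False` forces you to do: everything reserved beyond the real limit has to be undone somehow, which is the case the speaker is warning about.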
O
T
Yes. One of the reasons why HTTP/2 behaves the way Mike described is because it depends on the TLS version which endpoint starts speaking first: in TLS 1.2 the client speaks first, and in 1.3 the server speaks first. But in the case of QUIC, we know either way whether the client speaks first or the server speaks first. So that gives us the possibility of having full negotiation, and I'd like to also note that the QUIC transport actually uses that feature to do a better negotiation.
T
E
E
T
O
E
E
O
G
B
B
E
O
All right. So we've also had it raised that, because we have the length at the beginning of each HTTP frame, there are cases where the server application is going to be generating data without a defined size to the end of the payload, but at the same time it has to write the header with each chunk that comes out, and when the chunks are potentially small, that incurs overhead. And there was a proposal to have some way to mark that this is going to be the end...
O
...this is the header of the last frame on the stream, and everything else on the stream is body data. There may be a use for this in other frames, but DATA is the most compelling. So, next slide. We have a spectrum: right now you length-prefix everything, and the argument for that is that this is fairly late and this is not critical; you know, it's a nice performance win, but it's not going to change the world if we do something.
O
DATA frames are the place where this gets you the most impact, but it's kind of weird that DATA is the special, blessed frame. Or we could do something entirely new in framing. Initially I was thinking that length 0 means "runs to the end of the stream" for all frames; for some implementations that's challenging. I've also heard the proposal to have a new frame type that says: okay, we're done framing, and everything else is just body.
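The "length 0 runs to the end of the stream" option just mentioned can be sketched with a toy parser. This is one of the options under discussion, not adopted text, and the encoding here is simplified to one-byte length and type fields (real HTTP/3 frames use varints).

```python
# Toy parser for the proposed sentinel: a frame whose length field is 0
# consumes everything remaining on the stream as its payload.
def parse_frames(stream: bytes):
    frames, i = [], 0
    while i < len(stream):
        length, ftype = stream[i], stream[i + 1]
        i += 2
        if length == 0:
            # Sentinel: the rest of the stream belongs to this frame,
            # so framing ends here.
            frames.append((ftype, stream[i:]))
            break
        frames.append((ftype, stream[i:i + length]))
        i += length
    return frames
```

The "for some implementations that's challenging" remark applies to the sentinel branch: the frame's end is only known when the stream itself ends, so a parser can no longer size buffers from the header alone.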
K
O
So basically the issue is: you've got some application above HTTP that's dynamically generating content and handing it back to you in chunks as it produces it, and you don't know how long that ultimate payload will be. So you can't just say, you know, payload length, type DATA, and start writing; you have to take each chunk as it's handed to you, length-prefix it, and put it on the wire.
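The per-chunk overhead being described looks roughly like this. The two-byte header and the DATA type value are stand-ins for illustration; real HTTP/3 frame headers are varint-encoded.

```python
# Illustrative sketch: when the total body length is unknown, each chunk
# handed up by the application gets its own DATA frame header on the wire.
DATA = 0x0  # assumed frame-type value, for illustration only

def write_chunks(chunks):
    wire = bytearray()
    for chunk in chunks:
        wire += bytes([len(chunk), DATA])  # header paid once per chunk
        wire += chunk
    return bytes(wire)
```

With many small chunks the fixed header cost is paid over and over, which is the overhead motivating the run-to-end proposals above.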
K
K
P
I think the consistent rule is the most obvious one to go with, as it clearly has a use case, personally. I'll also point out that zero is currently defined as invalid right now; we're not stealing a value or something, we're defining something where currently, if you receive a zero, you are supposed to close the connection. So we're actually defining something that's currently undefined. Yeah, this wouldn't be massively valuable for...
P
O
K
O
The history there is that Google QUIC puts the entire body on a separate stream, so you don't have this problem. But for covering the full flexibility of h2, you have to be able to put a PUSH_PROMISE in the middle of the body and have consistent ordering, and the only way you get that in QUIC is to have it on the same stream, so you have to have frames for the body as well. As I...
K
E
As I think... I think the idea of this special frame is very appealing to me personally. It's purely additive for people who care about it; in terms of feature testing, of course, it's more code and so on, but I think it's a nice way to handle the special case where you really don't care about all the framing. If you imagine a beautiful, perfect future world where there is no push, you don't really need... you could just use...
E
E
...take my trailers, yeah, sorry about that, fair enough. But even then, the common case is that you're going to have just body on these streams, and I think it's worth spending just a little bit of time thinking about how to optimize for the common case. And if you can do the same thing as the previous slide does with the length: I'd suggest that we use a length of zero and a type of zero for this new frame type thing. It basically, I think...
E
O
T
A
O
Right now... all right, one more issue that's been brought up, something that we have somewhat lost in the transition from H2. In H2 you have the priority information embedded in the HEADERS frame, and then you can use PRIORITY frames subsequently to change it. We can't do that with QUIC, because you don't have a defined ordering between things, and so priorities have to be on a stream that gives you a consistent ordering of all the changes to the priority tree. Now, the issue that that creates is: until that PRIORITY frame arrives...
O
...you don't know what priority the request should have, and it's possible that a request comes in, gets processed, and starts streaming back before you ever discover where it is in the priority tree. So I don't really have a proposal here, because everything that I have been able to come up with is still subject to ordering and can screw something up. So I think here the question is: how upset are we about that, and if we are upset about it, does anyone have any bright ideas that they'd like to...
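The race being described can be sketched as a scheduler that has to pick some placeholder priority for a request whose PRIORITY frame has not arrived yet. The flat weight model and class names are illustrative assumptions; only the default weight of 16 is taken from HTTP/2 (RFC 7540).

```python
# Illustrative sketch of the ordering gap: a request may be scheduled, and its
# response may even start streaming, before its PRIORITY frame arrives on the
# control stream.
DEFAULT_WEIGHT = 16  # HTTP/2's default stream weight, used as the placeholder

class Scheduler:
    def __init__(self):
        self.weights = {}

    def on_request(self, stream_id):
        # Priority unknown at this point: fall back to the default.
        self.weights.setdefault(stream_id, DEFAULT_WEIGHT)

    def on_priority(self, stream_id, weight):
        # May arrive after bytes of the response are already in flight;
        # the scheduler can only correct the priority going forward.
        self.weights[stream_id] = weight
```

Nothing here removes the window where the server serves at the wrong priority; it only bounds the damage, which is why the presenter frames the question as "how upset are we about that."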
B
So, Martin Thomson. I wanted to add to the pile, or rather ask the question: what's the default priority? In h2 the default priority is that you're attached to stream 0, because stream 0 is the control stream. Yeah, rule of three, whoops. And so there is that problem: is the root of the tree the answer?
B
O
O
That's definitely one thing that we need to mention. Right now we point to 7540 for the priority scheme, including the default priority of requests when they are first issued. We should probably at least clarify that you're not actually a child of stream 0, you're a child of the root of the tree, but also talk about the gap, maybe. Right, yeah, okay. So, next slide. The last question around the HTTP draft is about naming, and I'm kind of...
O
So I'm going to pull from a children's story that my cousin introduced me to last week, where Mrs. McCave has 23 kids, all of them named Dave, and she's really regretting that choice and coming up with all sorts of names that she wishes she had given them, but they're already named, and it's too late.
E
O
So, in terms of what we call HTTP over QUIC, which does not exactly roll off the tongue and doesn't have an obvious relationship to HTTP/1.1: please stop calling it HTTP/2 over QUIC, it has changed dramatically. And it's not really QUIC, because QUIC is the transport. And when we discussed in Montreal several of the changes that we've made in HQ, and whether we want to bring them back as extensions to h2, the decision was no, we don't, because QUIC is the future.
O
K
K
You know, it gives the impression that HTTP/2 was deprecated, which it is not. It gives the impression that if we ever want to extend HTTP/2, we're not going to. And I mean, fundamentally, what this is, is a fork, with one fork getting longer and one fork not getting very long very fast. That's what operationally is happening, and maybe we've decided not to do a lot of maintenance on h2.
K
K
I don't think... I think that, like, am I telling people you ought not to use h2, you ought to use QUIC, like, universally? No. So again, you know, I'm not gonna lie down in the road over this, but I don't think it's a great idea, and I'd like to see...
O
E
E
It's gotten my creative juices flowing; this is lovely. And, ironically, you said QUIC is the future, and this is ironic because it depends on which QUIC you are talking about, right? There is a QUIC which is the present, and that points us to a problem here: we already have a Google QUIC and an IETF QUIC, and we have trouble keeping them apart. So I think it's very useful to have a new name. Second, I'll agree with Ekr that there is a little bit of a signalling issue here.
E
B
V
A
O
And I will emphasize one other thing from the email, which is that to call it HTTP/3: I think we wanted the HTTP working group's input anyway, but to continue their naming sequence we probably want their okay on the name change as well. So we'll be, not "probably", yeah, so we'll be discussing that in the HTTP working group tomorrow. Yes, yes.
S
This is Ted, and I was actually getting up to ask a scoping question. So there are places where this string will appear in protocols, and there are places where you intend for this string to be in the signaling you're giving to the larger community. Do you think those two actually have to match? Because in the strings I can certainly see a reason for doing "httpq-v1", right, "hq-1", as an indication in the protocol string of where we are.
S
S
It's not good marketing, but it may be good protocol, and I think you should be very clear here which scope it is you're trying to solve, because otherwise the rest of these rats in the line, and I'm counting myself as the first rat in it, are gonna be trying to brush different colors on the shed without knowing which it is they're trying to paint. So I will bring up an email.
A
W
Pat McManus. I'd like to emphasize this more: we will talk about this in HTTPbis on Thursday, yeah. As the previous slide had talked about, this really is sort of an outcome of HTTPbis's previous discussion in Montreal, and it sort of underlines some questions people ask, like: is h3 a successor to h2 that we would recommend? I mean, the answer to that would be yes; I mean, we really do.
W
But we are saying that we believe this is superior, and that's why we're pursuing this work. And similarly, you know, it's going to maintain the backwards-compatible semantics promises of HTTP, which are being better stated with HTTP core right now but have always existed, and that it is compatible, which is much more obvious to phrase with h3 than...
C
Spencer Dawkins, probably for this moment a special honorary director. Just as a thought for the chairs: the ops area director who became the YANG area director had a proposal for semantic versioning and things like that for YANG models, which would be moving very quickly. It might be worth taking a look at that and seeing if that was something you wanted to come even close to touching, and I don't mean QUIC close to using it, I mean close to touching.
C
But you know, there are places in the IETF that are wrestling with versioning for things that are moving pretty darn fast, so that's just a personal observation. Speaking as an individual: I've spent enough time hanging around transport, when transport changes usually meant operating-system upgrades on a good day, that, you know, the idea that a version of something could change as quickly as QUIC versions could change, if the working group agreed to move quickly... the underlying...
J
C
A
I'm gonna insert myself in the queue virtually, as me, so imagine I'm standing over there. From my personal standpoint, the naming of HTTP needs to stay within the control of the HTTP community, and so personally I think that this needs to be driven by them. Logically, it's our deliverable here, so we need to have the conversation here.
A
HTTP is defined by a core set of semantics that is not version-dependent, and in my opinion, calling the deliverable anything but HTTP/3 sends a confusing message about the lineage of HTTP and how it's forked; you know, it looks like a fork if we don't call it HTTP/3. So I'm actually 180 degrees from Ekr on that, and I'd ask folks to keep in mind that we're not naming QUIC here, we're naming an HTTP version, and we should follow the conventions of HTTP versioning and let that community make the decision.
A
B
A
H
H
A
A
G
A
A
A
O
A
G
A
G
We've got five minutes. I did a quick agenda refresh to make it clear that the "if time permits" stuff we might not get to, all of it or any of it, today will extend into the overflow time tomorrow, where hopefully, since the spin bit discussion apparently takes 15 minutes, we're gonna have lots of time. But for any of those presentations: can anybody make useful use of those five minutes? You can, Martin? Okay.
A
D
So I was talking about Initial packets with Christian at the hackathon. Next slide. We always talk about the injection attack: that an on-path attacker can disrupt the connection during the handshake, but not afterwards. And it turns out that, as things currently stand, this is not quite correct. What an attacker can do is send a spoofed Initial packet, for example containing a CONNECTION_CLOSE frame, or a malformed packet containing a malformed frame.
D
There are endless possibilities for how an attacker can cause a protocol error that would close the connection. And what we currently do is say that every endpoint accepts and processes Initial packets for at least three RTOs after all Initial data has been received and acknowledged. And since at that point we don't have a good RTT estimate, three RTOs might be as much as 600 milliseconds, even on fast connections, so this would be long after the handshake actually completes.
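The arithmetic behind the 600-millisecond figure quoted above is simple enough to write down. The 200 ms default RTO is an assumption chosen to match the speaker's numbers, not a value quoted from the recovery draft.

```python
# Back-of-the-envelope for the window being described: with no RTT sample,
# the retransmission timeout falls back to a conservative default, and the
# "accept Initials for at least 3 RTOs" rule keeps the spoofing window open
# well past handshake completion.
DEFAULT_RTO_MS = 200  # assumed default RTO when no RTT estimate exists

def initial_accept_window_ms(rtos: int = 3, rto_ms: int = DEFAULT_RTO_MS) -> int:
    return rtos * rto_ms
```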
D
If you look at the diagram, we only need Initial packets in the first flight, or more precisely, until the client has received the ServerHello. When the client has received the ServerHello, it switches to handshake keys and won't need the Initial keys anymore. And for the server: as soon as it receives a Handshake packet, it knows that the ServerHello was received and doesn't need to be retransmitted, so it doesn't need to accept any Initial packets after receiving the first Handshake packet.
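The asymmetric cutoff points just described can be sketched as a tiny state machine: the client stops accepting Initials once it has processed the ServerHello, the server once it has received its first Handshake packet. This is an illustrative sketch of the proposal being discussed, not draft text; all names are invented for the example.

```python
# Illustrative state machine for the proposed rule: after its cutoff event,
# an endpoint ignores Initial packets, so a late spoofed Initial (e.g. one
# carrying CONNECTION_CLOSE) can no longer kill the connection.
class Endpoint:
    def __init__(self, is_client: bool):
        self.is_client = is_client
        self.accepts_initial = True

    def on_server_hello(self):        # client-side cutoff event
        if self.is_client:
            self.accepts_initial = False

    def on_handshake_packet(self):    # server-side cutoff event
        if not self.is_client:
            self.accepts_initial = False

    def on_initial_packet(self, frame):
        if not self.accepts_initial:
            return False  # dropped: Initials are no longer processed
        # ... process the Initial packet normally ...
        return True
```

The loss-recovery wrinkle mentioned next follows directly: once the peer stops processing Initials, ACKs for our own Initial packets may simply never arrive.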
D
D
We should stop accepting and retransmitting Initials at the points that I just described, which are different for client and server. This has a few implications for how we do loss recovery, because stopping processing Initial packets also means that we might not receive an ACK for a packet that we sent; for example, the server might never receive an ACK for the ServerHello that it sent, so we need to...
D
N
Yes, thank you for raising this issue. PR 1819 was my attempt to deal with this, by basically allowing receivers to drop Initial packets if they would disrupt the connection, in exactly the circumstances you described. That PR has died for a lack of enthusiasm. I'm not sure what the process is to resurrect it, if people want, or if the people who killed it want to explain why, that's fine too, but I think the issue...