From YouTube: IETF106-HTTPBIS-20191118-1550
Description
HTTPBIS meeting session at IETF106
2019/11/18 1550
https://datatracker.ietf.org/meeting/106/proceedings/
A: So this is the HTTP working group. For those who didn't catch it, we now have a logo for the work that we do here. We also have some badges and stickers to hand out. If folks are interested, there are a couple on the blue sheet, and we have more in the back; I will be handing those out afterwards. If you're having a side conversation, it'd probably be better to do that out in the hallway.
A: Thank you. And, by the way, these logos are open: the source is freely available on the GitHub repo.
So this is the Note Well. Hopefully you're familiar with this; if you're not, these are the terms under which we participate in the IETF. This is important not only for things like intellectual property and copyright, but also for things like the anti-harassment procedures and the code of conduct, how we treat each other, which we do take seriously. If you have any issues or questions around this, you're more than welcome to talk to Tommy or myself; the IETF has also designated other folks to help out with these issues, so please do inquire if you have any questions. You can always find this by going to your favorite web search engine and searching for "IETF Note Well".
A: Do we have any volunteers to scribe the session? Thank you very much. Anyone else willing to help out? If you could do that either on the Etherpad that's linked in the minutes or in a Google Doc shared with folks in the room, that'd be great.
A: Agenda bashing: we have two sessions this week. Today we're going to take a brief couple of minutes on administrative items. Then we're going to have, hopefully, a pretty significant discussion of a proposal for priorities in HTTP/2, with the idea that it would also be useful for the HTTP/3 effort, and hopefully come to some sort of sense of the room on that. Then we're going to spend a good amount of time on the core specs, as we've done for a number of meetings.

We're going to go over some of the issues that the editors would like feedback on, and then finally we have ten minutes reserved for discussion of a proposal for rate-limiting headers; that's a remote presentation.
A: So, as you might know, the QUIC working group is working on HTTP/3. Once they finish it, they're going to hand it off to us for maintenance and further development; that's on Tuesday and Thursday. In SECDISPATCH there is a proposal being discussed, something that's actually been on our radar for quite some time, around HTTP request signing. That was mentioned in DISPATCH this morning, but the immediate discussion is going to happen in SECDISPATCH on Tuesday. There's also discussion of securing proxy-to-backend communications.
B: [Question, off-mic, about whether request signing should be discussed in this working group.]

A: That's a great question, and I brought that up. Certainly, like I said, it's been on our radar; it's been on the list of drafts we're tracking. The proponents, for whatever reason, took it to DISPATCH, and then they had a discussion with SECDISPATCH. That doesn't mean it's not happening here; it just means that the initial discussion happens to be happening over there. We equally could have had the initial discussion here, I think, yeah.
A: I know, and that's one of the reasons we have this slide: to make sure folks here understand that going to SECDISPATCH this time around is probably a good idea. To my mind, there are three communities involved in that discussion: there are the HTTP implementers and HTTP folks in this room, there are the people who actually want to use it for their applications, and then there's the security community at the IETF as well. We need input from all three.
A: WPACK is also on Wednesday; that's web packaging, which has been discussed a little bit here from time to time. And finally, MOPS is a new working group meeting Thursday morning; that's media operations, which means video, and these days video often means HTTP, so I encourage people to pop into that and see what it looks like. Any other related meetings that people want to note for HTTP folks? Okay, so one more administrative item: some of you may have noticed that there are only two of us up here at the table.
A: Gifts can be donations too. Okay, let's just get that off the screen. That leaves us with our discussion of the priorities proposals. Just to catch folks up: there was a discussion, starting in the QUIC working group, about the design of the priorities mechanism. They then came to us, the HTTP working group (some of us wearing different hats), and asked: the QUIC charter says that HTTP/3 will have everything that HTTP/2 has...
A: ...did we really mean that for priorities? We came to an agreement between the working groups that no, we're not going to force your hand on that: you can ship HTTP/3 without compatible HTTP/2 priorities. The immediate discussion afterwards was: well, what would a replacement mechanism for signaling priorities from the client to the server look like? We formed a design team, and these folks are now reporting back to us with their recommendation.
E: Obviously, we need a mechanism to indicate what type of priority hinting is being used, so, for example, a negotiation mechanism. We need something that is non-minimal: it has to have the feature set that we think we need. And, of course, we want to be able to backport it to HTTP/2. Ideally, we don't want to ship something that we're not sure of; we don't want something that's unproven and introduces too much risk. I don't want to do what we did with h2 all over again. Next slide.
F: If you have a flexible system, nobody is going to agree on how to actually use it. You can see that very nicely in the implementations: they do it quite differently, with only Firefox initially using the full complexity of the system, at least among the browsers. Another point is that all browsers just use one scheme for all web pages, so it's one size fits all. This is going to work really well for some pages, but really badly for others. Next slide.
F: So, given this flexibility and these different approaches, we were interested in which actually works best in practice. Next slide. These are results from a paper from two years ago on HTTP/2, and we found priorities mainly impact larger pages; in our use case, that was over one megabyte. We found that Chrome's approach is actually quite good. This is the black line on the slide. Chrome does everything sequentially: it downloads a resource in full before going to the next one in most cases, and that's quite good for the web browsing use case.
F: The opposite end of the spectrum, round-robin, where you interleave bandwidth between everything fairly, is the worst case. This is kind of ironic, because that's also the default in HTTP/2. So, for example, the old Edge browser did not specify priorities and always fell back to the default, always getting the worst case. We also found some other implementation bugs related to the complexity of the HTTP/2 system. Next slide, please. So now for QUIC and HTTP/3, there were a lot of people...
F: ...who said maybe we can simplify this, and several proposals were put forward on how to do that. There was a question from the QUIC working group: how well are these proposals going to function in practice, before we decide to adopt them? So we decided to revisit that. These are results from earlier this year. We implemented all of this again, but this time in HTTP/3 and QUIC. We did all the browser schemes and then also the new proposals, down towards the bottom, and you can see in this visualization...
F: We again confirmed that round-robin is the absolute worst, so we took that as the baseline here. All the numbers here are the multiplicative improvement over round-robin that you can get. You can see on the left that Chrome is the best performing if you look at the whole web page, so all the resources.
F: Another important result was that server-side reprioritization is very powerful. Again, if you have the same scheme for all pages, some pages are going to have misprioritized resources. It's very useful if you, on the server side, can say: this resource is actually more important than the browser thinks it is.
F: The problem with h2 is that if you have all these different trees, it's difficult to know how to adjust a priority in that tree for each browser. You could do some kind of user-agent sniffing, but that's hacky at best. In practice, what companies such as Cloudflare have been doing is simply ignoring what the browser tells them and overriding everything on the server side, which kind of defeats the whole purpose of the prioritization system. So the new thing we come up with really needs to support server-side reprioritization.
F: The final thing is that, again, we could go to a very simple scheme. Some people have been floating simple FIFO and calling it quits, but I still think we need a lot of flexibility there. Again, you will have some websites that function quite badly on a very simple scheme, and then there's also the issue of the head-of-line blocking removal in QUIC: if you do everything fully sequentially, you will always have just one stream on the wire, and then the head-of-line blocking removal isn't going to give any benefit.
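[Editor's note: the scheduling tradeoff described here can be sketched with a toy bandwidth model. This is purely illustrative, not any implementation from the session: the resource names, sizes, and the 10 KB quantum are invented, real servers schedule frames rather than whole files, and the model is loss-free, so it cannot show the lossy-network reversal mentioned next. It just shows why sequential (FIFO) delivery tends to finish resources earlier on average than fair round-robin interleaving.]

```python
# Toy model of the two scheduling extremes: FIFO (drain one stream
# fully before starting the next) versus fair round-robin (one fixed
# 10 KB quantum to every active stream per round). Sizes are in KB;
# the "clock" counts KB sent, i.e. time on a fixed-rate link.

def finish_times(sizes, round_robin):
    remaining = dict(sizes)
    order = list(sizes)            # request order doubles as FIFO order
    finished, clock = {}, 0
    while remaining:
        if round_robin:
            for rid in list(remaining):   # one quantum per active stream
                clock += 10
                remaining[rid] -= 10
                if remaining[rid] <= 0:
                    finished[rid] = clock
                    del remaining[rid]
        else:
            rid = next(r for r in order if r in remaining)
            clock += remaining.pop(rid)   # send the whole resource
            finished[rid] = clock
    return finished

sizes = {"html": 30, "css": 20, "hero.jpg": 100}
print(finish_times(sizes, round_robin=False))  # {'html': 30, 'css': 50, 'hero.jpg': 150}
print(finish_times(sizes, round_robin=True))   # {'css': 50, 'html': 70, 'hero.jpg': 150}
```

Both schedules take the same total time, but round-robin delays almost every individual resource, which is the "worst case" behaviour measured in the study above.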
F: So you get a really weird situation where, for lossy networks, round-robin actually becomes better than FIFO, which is the opposite of what you get on normal, better networks. So you need some flexibility in the system to adapt to those use cases. Those are the main results. Ian and his team did some actual tests in the wild recently to confirm or deny this, and he's going to present that now.
E: Thanks, Robin. All right, next slide. Yes, so ever since this design team started, one of the first things I did was ask other people to write code, because really I don't write that much code anymore; it's sad but true. I convinced a co-worker to fix our existing h2 scheme on our server side. It turns out it had a variety of bugs which caused it to not actually perform as intended for Chrome. We also implemented FIFO and LIFO. Actually, it turns out LIFO was already implemented; don't ask why, it just was.
E: Yeah, it's just a pre-existing condition, that sort of thing. We also added round-robin; we did not have support for that. But since people want to understand exactly what the h2 default looked like on real-world pages, we wanted to get some data on that. And to go back: Google QUIC currently uses SPDY priorities. It always has, and never bothered to move over to h2 priorities, because it never really was worth the hassle and SPDY seemed to perform fine.
E: So that's kind of the default in most of these tests, although in a few of them you'll see we actually compared the baseline versus h2 and the baseline versus SPDY separately, just to give you better statistics on the metrics. Oh yes, and FIFO here is lowest stream ID first, not first request received. So it's request order, not receipt order, in case there's a redirect on the request.
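[Editor's note: the distinction Ian draws, request order (lowest stream ID first) rather than receipt order, can be made concrete with a tiny selector. The stream records below are hypothetical illustrations, not data from the talk.]

```python
# "FIFO" as used in these experiments: serve the lowest stream ID
# first. Stream IDs encode request order, so a response that became
# ready earlier (e.g. because another request detoured through a
# redirect) does not jump the queue.

def next_stream(ready_streams):
    """Pick the next ready stream to serve: smallest stream ID wins."""
    return min(ready_streams, key=lambda s: s["stream_id"])

ready = [
    {"stream_id": 7, "ready_at_ms": 12},   # became ready first...
    {"stream_id": 5, "ready_at_ms": 30},   # ...but was requested first
]
print(next_stream(ready)["stream_id"])  # 5
```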
E: The degradation is quite significant. A 0.3% to 2.6% regression is huge: we wouldn't launch an experiment that made something 0.3% worse. So this is not quite catastrophic, but very bad; this is completely-unacceptable bad. Just to give you an idea: you can really mess this up if you get things really sideways, and obviously those schemes are fairly sideways, but nonetheless it's worth calling out. Next slide.
E: Actually, some of the most interesting data comes from the Flywheel data compression proxy in Chrome. One of the reasons it's interesting is that there's actually a high degree of request multiplexing, and there are commonly a large number of requests simultaneously active on the same connection, which is not always true for all of our use cases.
E: So if you compare relative to SPDY as the baseline, which is kind of how I originally set up these experiments, the results are less statistically significant, but as you can see there's still an indication that h2 is better. These are the same metrics; the statistical analysis was just done two different ways, to make it a little bit more interesting and give you some understanding. Next slide: AMP, or Accelerated Mobile Pages.
E: Everyone loves this, I know. This actually has slightly different performance properties, as you'll see. In this case SPDY, which is round-robin within a bucket, is better than Chrome h2, FIFO, LIFO, and round-robin. The reason for that, as I understand it, is that AMP actually has a lot less dependency of the form "this resource depends on this resource depends on this resource". It's much more designed to be non-head-of-line-blocking, kind of inherently, so there's a much simpler resource dependency tree. Next slide.
E: Even so, the improvement is quite large, at least by our standards: you're getting close to a one percent performance improvement relative to the other schemes. One thing also worth noting here: many people have suggested that just using FIFO is fine. I think this data at least suggests that you can do a lot better than FIFO without something overly complex. And so, you know, my intuition is...
E: ...that the use cases should allow for server-side reprioritization. As Robin mentioned, the existing h2 tree system makes that quite challenging, and it should not use round-robin as the default. So there's a current draft under Kazuho's name, also written by Lucas, and it includes a scheme like this, but I'm also going to go over some updated design details that we have. The design team met on Saturday or Sunday. Mike, Mike...
D: So that's essentially equivalent to SPDY priorities, except not round-robin within a bucket but instead in request order, by stream ID?

E: That's exactly right, thanks. Next slide.
E: So this is an update to the draft, an updated version that came out a few weeks ago. A lot of the details here are actually extraordinarily similar, just copy-pasted from the draft, because a lot of the concepts are the same, but one major design detail has changed, which is a move away from an end-to-end header.
E: So one of the goals here is to have a somewhat extensible scheme. We want core functionality that's actually useful and that we can prove is useful, but if we want to add another feature to this scheme, we don't want to have to ship an entirely new scheme; we want a way of expressing the new thing. So the idea here is to use key-value pairs. Currently that's specified using Structured Headers. There might be other ways to do it, but it seems like a perfectly plausible approach.
E: There are two fields, urgency and progressive. Urgency is a number between minus one and six right now; it's basically eight urgency levels, the equivalent of the SPDY buckets. Progressive is a zero or one, basically a boolean, to say either "I want this approximately round-robined" or "I want this sequentially, in order". That helps you indicate whether a resource can be rendered incrementally or is only useful whole-or-nothing. There are a lot of resources that just can't be rendered progressively, and that's why it's called progressive.
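[Editor's note: as a sketch of how a receiver might read these two fields, the snippet below assumes a flat `key=value` serialization such as `urgency=3, progressive=1`. The default urgency and exact syntax here are illustrative assumptions; the draft specifies this with Structured Headers, which this hand-rolled parser only approximates.]

```python
# Hand-rolled parser for a hypothetical "urgency=3, progressive=1"
# priority value. Urgency is clamped to the eight levels (-1..6)
# described above; progressive defaults to off. Defaults and syntax
# are placeholders, not the draft text.

def parse_priority(value, default_urgency=1):
    urgency, progressive = default_urgency, False
    for item in value.split(","):
        key, _, raw = item.strip().partition("=")
        if key == "urgency" and raw:
            urgency = max(-1, min(6, int(raw)))   # clamp to -1..6
        elif key == "progressive":
            progressive = raw == "1"
    return urgency, progressive

print(parse_priority("urgency=3, progressive=1"))  # (3, True)
print(parse_priority("urgency=-4"))                # (-1, False): clamped
print(parse_priority(""))                          # defaults: (1, False)
```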
E: The numbering is just completely arbitrary. This ideally will allow servers to act on these signals effectively, because if everything is just relative, then there are a lot of different ways of using the tree. It also gives people some advice on what these things mean and how to use them, so it's a little bit easier for application developers. Next slide.
E: So we talked a little bit about two key use cases before, but I want to outline them in a little more detail. One is client to server over a multiplexed HTTP connection, so that's HTTP/2 or HTTP/3. It's pretty clear that we understand that use case: this is exactly what h2 priorities did, and this is something we have a lot of data for. I think we know how to ship it and what its performance properties are.
E: The other one, which I think Roberto described as "within the server" (where the server is kind of the entire serving infrastructure), is the situation where an origin or an application front end wants to change the priority as it arrives at the proxy, or maybe the proxy itself wants to change the priority. Somewhere inside the serving infrastructure you've decided: the client said this is the priority, but I think this resource should be slightly higher, or the other images should be slightly lower, and...
E: So the proposal here is to actually use headers as an API, because they are the universal API for HTTP applications. You could also have a specific API; I know a lot of native stacks, Cronet for example, have a way to expose priorities, but that's a little bit out of scope for the design team, since that's kind of per-application.
E: However, there are a lot of challenges with headers, and it's not really clear that the working group wants to deal with those right now, at this moment, and we also need a frame for reprioritization anyway. So the proposed solution is to have the client basically consume the header, convert it into a frame on the wire, and then on the other end, if it needs to, it can convert it back into whatever representation it wants. So this is fairly flexible, but it also allows existing APIs to... sorry.
E: So there are some open questions here. One is: what type of header should this be? Should this be a pseudo-header? And can and should this be exposed to the Web API? This is actually a question I was going to look at Mark Nottingham for: there's some question as to whether this is more in the purview of the W3C or of the IETF, but it's something worth thinking about. Next slide.
E: So, the wire encoding goals: the initial priority frame needs to be delivered prior to the HEADERS frame, so it's key that there actually is an initial priority and we know what it is. The client should send the first request with the initial priorities, so we shouldn't have to wait for the settings.
E: So the new proposed frame, and this is actually taken out of the existing draft, is essentially a stream ID and a priority field, which is a string. It's only sent on the control stream because of HTTP/2 extension constraints; we have some results from greasing that kind of confirmed that probably only on stream zero can we really send this new frame and have it work effectively.
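[Editor's note: a rough byte-level picture of such a frame, an element ID plus a priority string, follows. The frame type value and the fixed-width length and ID fields are placeholders for illustration only; the actual drafts use HTTP/2- and HTTP/3-style encodings (HTTP/3 encodes types, lengths, and IDs as variable-length integers).]

```python
import struct

FRAME_TYPE = 0xF0  # placeholder type code, not the draft's registered value

def encode_priority_frame(element_id, priority):
    # payload = 4-byte element (stream or push) ID + ASCII priority string
    payload = struct.pack("!I", element_id) + priority.encode("ascii")
    # header = 1-byte type + 2-byte payload length (both simplified)
    return struct.pack("!BH", FRAME_TYPE, len(payload)) + payload

def decode_priority_frame(frame):
    ftype, length = struct.unpack_from("!BH", frame)
    payload = frame[3:3 + length]
    (element_id,) = struct.unpack_from("!I", payload)
    return ftype, element_id, payload[4:].decode("ascii")

frame = encode_priority_frame(4, "urgency=2, progressive=1")
print(decode_priority_frame(frame))  # (240, 4, 'urgency=2, progressive=1')
```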
E: It must be sent immediately preceding the corresponding HEADERS, to make the parsing machinery a little bit easier, and reprioritization is also on the control stream, so that's pretty straightforward. There's one awkwardness here, which is the situation where I'm making a request on the request stream and right before it I have to serialize this new frame. It's unfortunate, but due to how HTTP/2 is specified, we don't think we can get away with doing it any other way. If other people have deployment data, especially, that shows otherwise, I'm sure we could.
E: This is fairly straightforward: there's an ID to indicate whether it's the push ID or the stream ID, at least when it's on the control stream. When it's on the request stream, obviously that indicates which stream ID it is. In this case we actually send the priority frame on the request stream first and then on the control stream later, and I put little brackets around that just to indicate that it's optional. Next slide.
E: So let's talk a bit about the proxy-to-origin case, or the kind of within-server case. The current proposal is that a priority header can be sent to the proxy to indicate the priority on the previous hop, whatever the client indicated the priority was in the frame, and it can also be sent in a response to say: actually, I would like to override the priority that the client originally specified.
E: Negotiation with settings; this is also in the draft currently. The key use cases we definitely want to capture are the client saying "I do not support h2 priorities", so don't use the default ordering, don't use round-robin, and so on and so forth. That's a critical use case, and it was in a previous draft that Lucas and Brad put out, I think at the last IETF. The other thing we'd like is for the server to be able to express what information it wants from the client.
E: It turned out that negotiating anything with settings is a little bit awkward, because you're not really sure whose settings are going to be received first and you can't rely on ordering. So we ended up with a settings value which indicates the priority scheme: the server expresses the order of priority schemes it prefers, and the client basically chooses the first one that it supports. There are a few other ways of making this work, but I think something like that...
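[Editor's note: the selection rule just described, the server lists schemes in preference order and the client takes the first one it supports, sidesteps the settings-ordering problem because the outcome does not depend on whose settings arrive first. A minimal sketch; the scheme names are invented for illustration.]

```python
# Scheme negotiation sketch: the server advertises an ordered
# preference list; the client picks the first scheme it supports.
# The result is deterministic regardless of which side's settings
# happen to be received first.

def choose_scheme(server_preference, client_supported):
    for scheme in server_preference:
        if scheme in client_supported:
            return scheme
    return None  # no common scheme: fall back to no priority signaling

print(choose_scheme(["urgency", "h2-tree"], {"h2-tree"}))             # h2-tree
print(choose_scheme(["urgency", "h2-tree"], {"urgency", "h2-tree"}))  # urgency
print(choose_scheme(["urgency"], set()))                              # None
```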
E: Some small issues are still to be decided. Should urgency start at one? This might be kind of confusing to developers. There's also the question of whether the lowest value should actually be the most urgent, or the highest value should be the most urgent. Again, this is one of those developer-API sorts of things: if we actually think we're going to expose this to people, we probably should expose the easiest surface possible rather than optimize the encoding. Next slide.
E: All right, awesome. I like sheds as much as anyone. All right, so if we review the core goals, at least the bullet points: when I went back through, and this was sort of unintentional, it seems like we hit the core goals that were outlined at the beginning and covered the core use cases. I think we are quite confident, at least based on the experimental and simulation data, that the scheme we're outlining here actually will work and will perform well.
B: Waiting for directions... okay, go ahead. If you go back a slide (I'm going to be jumping all over the place here)... I'm good, yep. I think there are only three goals here, because backporting and indicating that you're not using HTTP/2 priorities kind of imply the same thing, right? Yeah, I thought I'd like this a whole lot more than I did in the end. Unfortunately, I'm having real trouble reconciling this...
B: ...going back to the requirements that led to those conclusions, more than anything else. You had a slide there that basically said you have to have the priorities before the first bits of the HEADERS frame land on the wire. I'm not convinced that's true, particularly if you accept the fact that an urgency header field, whatever you want to call it (I forget what it was), is the API in which you express that piece.
B: Think about those cases where people are streaming header fields into the stack: in stream the important stuff, the pseudo-header fields, and then a block of the other ones, and those might go out on the wire before you actually have access to the information that allows you to prioritize these things. So do we have any information that supports...?
E: I can tell you exactly why that stipulation is in there, and you can decide whether that's a good reason. The reason is that if you want to allow several different priorities and you're sending it back to an origin, application front end, or whatever, you need to be able to put that somewhere in the original indication. Or you don't need to, but it makes it a lot easier if you're able to put it in a header in the original request back to the origin. And if you can't, then you need another piece of metadata...
E: ...that says: this is the priority information that was on that request. Having it before you're forwarding it back just makes the process a lot simpler. So it's less a local scheduling decision and more: if you want to inform the backend of what the client said, it makes life a lot easier, and that's why that was in there.
A: So, Martin, before you go on, and for you and everyone else in line: my understanding of where we're at is that the design team is trying to make a recommendation. The next step is that the working group decides whether or not it wants to do a call for adoption on the draft, knowing that we're not going to rubber-stamp it; we need to talk about it. Do you think that it's ready for that, or do you think the design team needs more time?
B: Once we've chased a few of these things to ground, I don't see us doing anything other than what these fine folks have produced, because I think it is approximately the right thing to do. It's just that I want to make sure that what we're taking on is reasoned properly. We've got 60 minutes, so...
D: Can priorities change hop by hop? Given that h2, and currently h3, no longer have the concept of hop-by-hop headers, expressing what you want on this hop versus what the client asked for on the last hop gets kind of dicey if you have an end-to-end header and you're trying to mix the semantics of different connections in there. So what we wound up with in our discussion yesterday was: we use headers anywhere we are talking about a different hop, and we use the frame to talk about this hop.
G: ...for the whole request, but not just... so it's basically the client sending a frame to indicate how it should be prioritized, while using a header to communicate that information from the proxy to the origin, and from the origin back to the proxy, to signal how it should be prioritized based on the information the server has.
H: I really enjoyed the data that you collected. It seems like you've got seven different priority levels, if I'm understanding it correctly, and you've run it against a bunch of different websites, and there's a couple of percentage points' difference between the different approaches. I guess, as a website developer, I would want to be able to say: for websites that have these particular characteristics we use this priority scheme, and for these other websites we use this other scheme. I'm curious whether you've got any recommendation.
E: Actually, I think some of the text in the draft is not a bad recommendation, because it talks about some of the issues, like whether resources can be loaded progressively and whether they depend on each other, and so on and so forth. I don't think that's the end-all, be-all, and actually I think some of the posts published by Cloudflare are good on that as well, but I think the answer is, either as a browser...
C: Roy Fielding. In your presentation I didn't see any reference to those messages that I sent earlier about using just a header field, and I feel at this point that, while I appreciate a lot of the work that's gone into it, we're really on the cusp of a changing document and not something that's ready for any sort of notion of consensus, even amongst the design group. So I think the work should continue and find the right path.
C: I'm not really interested in a lot of the complexity that's inherent in the scheme right now, in the sense that you've got a lot of talk about frames and prioritizations and using different things which, from my perspective, I don't need any of. I just need a header field. So I would like to...
C: ...get us back to the point where we have discussions in the design group or whatever, or we shift out of the design group. We should actually be thinking about all of the complexity of HTTP, not just Chrome's interests, not just one browser's interest or a different browser's interests.
B: That to me sounds like a far more complex situation to be in, and I don't know that that's really all that helpful. Do you have any information to suggest that this is absolutely necessary? Because it's complexity, and it's a sort of complexity that I thought we had kind of decided we didn't need. I can...
I: The server that is receiving the requests is very often an HTTP/1.1 server, regardless of how the client connection is actually terminated, and so we need a mechanism for specifying, to the thing that is terminating the multiplexing, how it should do something. And if you have an h2-to-HTTP/1.1 server, it's pretty obvious that the only thing you can do is put a header in there, unless you're going to do an HTTP/1.2.
I: So it seems like that's a pretty foregone conclusion. Then the next question you have is: do you strip it or don't you strip it? And if you don't strip it, then you have all kinds of additional complexity around caching that gets really interesting and fun. Clearly we could invent a new category of headers, but then we'd have to deal with the backwards compatibility of current deployments.
I: The observation was that many implementations are composed of different libraries put together, and it was not obvious that there was any one API you could specify that's going to go all the way to the part of the implementation that actually has to do the multiplexing. That was the reason for saying: ultimately, the one thing that is HTTP, gosh darn it, is that there's headers and then there's other stuff. So headers are the way to talk to the thing that's actually doing the serialization and the multiplexing. I see Roy laughing. Anyway...
I: It's sadly true; that's HTTP, man. So if we decide to make that end-to-end, okay, fine, but then you have to do a disambiguation, because you are trying to target a specific hop when you, as a client, are trying to signal prioritization. So either you have to make rules for stripping or you have to make rules for mutation. All right, anyway.
A: Okay, so it's pretty clear that we don't have consensus on the solution, and there's still a lot to discuss. In my mind the question is where the discussion should happen, and how. Just my perspective: I am a little concerned that I see things like "the design team comes to consensus" and "the design team is closing issues", but it's not incorporating input from our community, and so I'd rather have these things done in public spaces, so we can get input from the entire community. So I'm thinking...
J: Right, yeah. And I think one question: if anyone on the design team thinks that having more time as just a design team would be very important or would change the output of this, that would be good to know. But my impression is that the design team has what they have, they think that's a good spot, and it's probably most useful to have that come into the larger group now. I see nodding, so that seems to be...
B: I mean, this issue of whether this is a hop-by-hop or an end-to-end signal is kind of fundamental, and we need to resolve it. Now, sure, we can say that we're going to adopt something and then decide to do something completely different from what's being proposed in that document, but I don't see why we have to. I think we should probably just resolve the issue before we adopt this thing, since it proposes...
G
J
It's also, as has been already pointed out by Kazuho, that there are two parts to this: there's what we want to communicate — essentially the seven levels plus the progressive bit — and then there's the communication scheme. We could adopt that first part and say we're not sure about the communication scheme for it, but we believe that that is the right thing to communicate, however it is done. Yeah, I think that's less contentious.
K
B
A
And I don't think we know where we're going yet. But if you look at the history of documents we've adopted and what comes out the other end, there's often a remarkable difference, and, you know, the higher-order bit here is the way the HTTP working group is working on this now, and I think that's the question at hand.
E
B
A
We're talking about next steps, yeah. So I think we'll do a call for adoption on the list before we go to the next item. You know, if we adopt something, we'll have list discussion, we'll have issues; the next opportunity to discuss this face-to-face for this working group is going to be in Vancouver.
B
G
B
M
As a potential host of the QUIC interim meeting — the person who reserved the rooms — what we have reserved now is currently split up between, essentially, the Interop and then a few days of the QUIC standards meeting. If you are planning to add two days at the beginning or the end, I am not at all confident at this point that we have those rooms available. If you want to replace the Interop with a different standards meeting, or run them in parallel, that seems very different, and I'm a little concerned.
A
I was thinking about something else, and I was not assuming any availability on your part, but we can take this up later. So thank you very much — you're off the hook. Yes, all right. Well, we'll have more discussions about this. Thank you very much, and thank you to Ian for the presentation; that was very helpful. So.
C
C
If you want to see how we've reorganized everything, then you want to look at the whole GitHub history. The vast majority of our work so far has been trying to fit the right paragraphs into the right locations and then making minor changes in the paragraphs after that. So you'll see it's much easier, if you're just interested in what the protocol changes are, to look at the draft-to-RFC diffs.
C
Julian has kept a detailed tracker of where the drafts are at, posted against the 723x RFCs, and we only have two left to fix in the drafts — issues 163 and 53 — and I'm not sure if we can talk about them later or not. Next slide. So, what are the issues to discuss now? Okay, so.
A
A
I guess we tried to do this last time, yes, and at the time one of the questions that came up was: well, DELETE is in pretty much the same place — why don't we say the same thing for it? And in reality there are two different approaches we could take here. One is to let DELETE follow the path of GET, which was never defined to have a request body; therefore, you—
A
—here be dragons, don't put one there. Or it would take the path of OPTIONS, which is: there's a request body that could occur on this, but we don't really know what it means; that's up to the resource or something else to define the semantics. And for OPTIONS, of course, the request body is used more often than in lots of other places. Yeah, maybe scroll down a little bit, yeah.
G
A
So Julian opened this; he wanted to consider them separately. There's some discussion here, and Everett points out: yes, the intent of DELETE is just the ability to target a resource. And so the question is what a request body would mean, especially for generic software that didn't understand a particular request body format — what would that do, and would it break any existing uses of DELETE? And so I think he is proposing we take the GET path for DELETE. Julian — not in this one, but later — points out that there are people who are using DELETE bodies.
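To make the two approaches concrete, here is a minimal, hypothetical sketch of a server following the "GET path" for DELETE (request content has no defined semantics, so a server may reject it, while still reading the body off the wire). The `Request` type and `handle_request` function are illustrative, not from any spec or implementation discussed here.

```python
from dataclasses import dataclass, field


@dataclass
class Request:
    """Illustrative request model: method, target, headers, body."""
    method: str
    target: str
    headers: dict = field(default_factory=dict)
    body: bytes = b""


def handle_request(req: Request) -> int:
    """Return a status code, rejecting content on methods where it has
    no defined semantics (the proposed GET-like rule for DELETE).

    Note: a real server MUST still consume the body from the connection
    even when rejecting it -- refusing to read it creates the
    request-smuggling/desync hazard mentioned above.
    """
    no_body_semantics = {"GET", "HEAD", "DELETE"}
    if req.method in no_body_semantics and req.body:
        return 400  # reject: body present where semantics are undefined
    if req.method == "DELETE":
        return 204  # deleted, no content to return
    return 200
```

Under the "OPTIONS path" instead, the `400` branch would be removed and the body handed to the resource to interpret.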
A
C
I think it's always been forbidden in the sense of the semantics. I mean, I realize that you can read it in a way that, by not actually demanding that it not be sent, we are allowing it, but exactly the same text was used as for the method GET, for the same reason, and we just didn't want parsers to change to not read the body — because if you don't try to read the body, then you're creating a security hole.
C
C
So I'm absolutely certain what the intent is, because I lived through that nightmare several times, and I agree that some people have taken the liberty of stretching what's there into whatever their latest application is, but they've always done it in a particularly stupid way. So I don't care if we break those — they obviously look like they're looking to be broken. There are already servers they do not interoperate with, and the point of the protocol is to define what is interoperable, not what everyone does.
K
A
K
But I think we had an interesting discussion about whether an intermediary is allowed to drop the payload on a DELETE request, and I think it would be good to get to the bottom of this first, because I don't believe it. That would imply that intermediaries can essentially drop any request body except for maybe POST, and that is certainly not true. So I think that's something.
K
K
A
Julian, the difference is that software that either drops or errors on a POST request payload, or that doesn't make it available to the application, would be considered broken, I think, by anyone. But software that either rejects a DELETE request payload or doesn't make it available to the application is, I suspect, quite common — and I'm thinking about web application firewalls, various APIs, various servers, various libraries that implement HTTP on the server side, as well as intermediaries and CDNs and proxies. So, does anyone here have an—
C
A
K
C
K
B
In the room, Martin Thomson: are we talking about preserving the use cases that these people use these bodies for, or are we talking about simply maintaining backward compatibility through some sort of adherence to the spirit of previous specifications? Because I'm inclined to say that, you know, every server I've seen drops DELETE bodies or doesn't pass them through, and so in terms of interoperability, the request body on a DELETE is pretty damn close to useless.
K
B
K
I think if we make a normative change — it's not something that, in my mind, is clearly allowed right now — we need a very good reason, and I haven't heard that yet. And if the reason is that intermediaries actually drop these, I think you need a separate issue to clarify that; I don't think they are allowed to, and that also affects the whole extensibility story of HTTP. So if there's any doubt about whether intermediaries are allowed to drop the request bodies on methods they don't know, we need to clarify that, and that's much more important than this issue. Julian.
H
Just, yeah, I feel like it's a dangerous thing if we start disallowing bodies everywhere. I mean, I've written web apps — and, you know, I'm sure I'm going to get in trouble for this — where I've put bodies in GET requests, and I love doing so. One of the reasons is that semantically, if what I'm doing is requesting something, I don't want to just push all of my bodies.
H
Sorry — I don't want to make every request that contains a body end up saying POST on it. I think that would mean that all of the other HTTP methods become pointless. So yeah, I think that it's still worth having bodies on GET requests, on DELETE requests and so on, insofar as those requests match the semantics that are meant by GET and DELETE, but—
A
C
Right, the problem that you run into is that there are TCP or TLS connections, and then on top of that is HTTP, and the only thing that differentiates the two is the set of agreements we've made to restrict what we send. So I understand—
G
—"what I really want is a SEARCH method, but I can't define a new method, so I'll just send GET instead and I'll send stuff in the body." There's a reason.
C
We didn't allow that in the first place: it's to make the URLs valuable and linkable across the web. That's why it was there. So, as much as I appreciate that use case, I deliberately killed it. Okay, yeah — and feel free to blame me for that, or Tim Berners-Lee, or anyone else. Tim actually had separate methods, but—
C
J
One point I'd like to bring up on that: if you did want to specify that, yes, you can have the body in there, I think we do need a good explanation of how we understand the semantics, to make it interoperable. I think that's the part of the bar that can't really be satisfied, because these are all nonsense use cases, yeah.
H
G
H
F
J
It seems like we should kind of wrap this up. We have some disagreement among the editors, but I think it'd be good to get a sense of the room of what the working group wants to do on this. We've had some comments, but not a ton, so I think it'd be perfect to take a hum on this. Yes, okay. So essentially the two options are: do we allow it or not — you know how to hum, yeah.
H
J
J
J
K
J
E
J
A
A
All right, next up: our old friend, updating stored headers. So we've had a fairly long-running discussion here of what to do about updating stored headers in a cache. When a 304 comes in and has new headers, the current specification says that you update the stored copy with the new headers, and it turns out that that is not always done consistently by implementations.
A
A few implementations don't do it at all, which is a bug, but especially the browsers omit some headers from the update, and we've been going through a discussion of why that is and what the right design is — you can see it's been a fairly long discussion. I wrote some tests to figure out what people actually did, we gathered some data on that, and I think—
A
This is the most recent proposal — oh, and there was a comment an hour ago, which is to replace the third paragraph, and of course that's contextual; thanks, Mark. I think we're talking about this proposal here, where we say something like: due to their semantics, updating some header fields can result in the cached state becoming inconsistent or invalid, depending on how it is implemented. For example, updating Content-Location might make a cached response incorrect, while updating Content-Range might be unrealistic after partial responses have been combined.
A
Likewise, changing the value of a header field might have external effects that the cache cannot account for. For example, a user agent might cache a pre-render artifact instead of the raw bytes of an HTTP response; changing the content after the fact is not possible, and possibly dangerous. And so — thank you — in these limited situations, a cache may omit the headers listed below from updates, and servers should not send updated values for these headers in a 304 response.
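The proposed freshening rule above can be sketched as follows. This is a minimal, hypothetical illustration, assuming one possible exemption list; the actual list of exempted headers is exactly what is under discussion here, so the set below is not authoritative.

```python
# Fields a cache MAY omit when freshening a stored response from a 304,
# because updating them could make the cached state inconsistent.
# ASSUMPTION: this particular list is illustrative only.
EXEMPT_ON_UPDATE = {
    "content-encoding",
    "content-length",
    "content-range",
    "content-type",
}


def update_stored_headers(stored: dict, from_304: dict) -> dict:
    """Merge headers from a 304 into the stored response's headers,
    skipping fields on the exemption list (case-insensitively)."""
    updated = dict(stored)
    for name, value in from_304.items():
        if name.lower() in EXEMPT_ON_UPDATE:
            continue  # cache MAY omit these from the update
        updated[name] = value
    return updated
```

For example, a new `Cache-Control` from the 304 replaces the stored one, while a (nonsensical) new `Content-Type` would be ignored.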
A
I did as much digging as I could through the browser revision histories and discussions, and I think there was a lot of cargo-culting around old 2616, and even 2068, language around cache updates, where people misunderstood it and then applied it unevenly, and as a result a lot of headers are exempted from updates that don't need to be. In the cases I talked about before — with Content-Range, Content-Type and things like that — I completely understand why the browsers want to exempt those headers.
A
But there are other headers — like everything starting with "x-" or "content-" is exempted — which doesn't really make sense, and I think that's because they thought those indicated what the entity headers were. Of course, we removed entity headers in bis; that's not a concept in HTTP anymore. And so it's really just a question of what the right list of headers is to include in this exemption, and so I wanted to flag this for discussion to see.
B
No, I wanted to talk to Andy first. Martin Thomson: I might be able to help with Set-Cookie. What's happening is that the cookies are being taken off as the response comes in and acted upon at that point, and then there's no expectation that cookies will be available on cached responses, right?
B
It's an artifact of having a deeply integrated stack rather than one that is strictly layered, right? So I don't know that that's strictly a problem; it's just an interesting side effect of the way this works: the browser is effectively acting as an intermediary that's stripping off cookies, because it's already dealt with them, and when things request things from the cache, they don't care about cookies, because we know that none of those things care about cookies, of course.
B
B
A
A
B
A
N
A
This is issue 163; I'll open that up. Oh dear, oh.
A
C
Yeah, I'm not exactly sure you're looking at the correct text. It's not correct, in the sense that if you have a validator that's based on Content-MD5, for example, or a hash of the content, it is going to be the same validator no matter how many different media types you have for that representation.
C
So you may have multiple — yeah, you may have the same image, for example, in different media types, based upon which particular media types a particular browser accepts, which actually represent the same bits. It's just that one browser called it x-experimental-image or whatever, and a more recent one calls it application/image or whatever, purely because of versioning. So the exact same content, the exact same bits, have the exact same validator, and that's perfectly normal. Okay.
B
G
C
C
B
A
C
C
H
K
K
K
A
We failed to communicate — we and our predecessors, although that includes Roy — we failed to communicate well enough to implementers for them to implement this correctly, and the implementations don't behave well with this kind of input. And, as I understand it, especially the client implementations don't want to change the way they behave regarding this, because to them it's all risk and no reward: they are compatible with the web as it's deployed.
K
K
C
A
A
A
A
C
C
A
A
C
A
The concern, as I understand it — he wants to have one way to do it so that he gets consistency between implementations. I think the concern is driven by pathological cases: when you have header combination and the whitespace is semantically significant — for example, when you're doing things like signatures — then it would be good to have one way to do this. Especially if your signature algorithm is "combine all instances of this header and then sign that", or do integrity on that, you need one way to do it. Yes.
C
My issue would be: any additional requirement here would imply that everyone who doesn't do that is somehow broken, and I don't think any implementations out there actually care whether they say comma-then-one-space, or comma with no space, or comma and five spaces. The reason for that is that frequently, when you're combining header fields and they're right next to each other, you might want to just whitespace everything between the two, so that you don't have to recopy memory — but that's a really—
A
I mean, my impression is that the vast majority of implementations do comma-space, but, you know — and I think it was discussed before — maybe one thing we could do is not put it as part of a requirement. It's a MAY, but that's still requirement language; we could instead say "the canonical way to do this is X", you know, and divorce it from that requirement language a little bit, right?
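The combining behavior being debated can be sketched in a few lines. This is a hedged illustration of the "comma-space as canonical, liberal on receipt" idea, not spec text; note the naive split below does not handle commas inside quoted strings, which a production parser must.

```python
def combine_field_lines(values: list[str]) -> str:
    """Combine repeated field lines into one value using the
    canonical join under discussion: comma + single space."""
    return ", ".join(v.strip() for v in values)


def split_field_value(value: str) -> list[str]:
    """Tolerant receiver-side parse: comma with any amount of
    optional whitespace around it (ignores quoted-string commas)."""
    return [part.strip() for part in value.split(",") if part.strip()]
```

So `combine_field_lines(["gzip", " br "])` yields `"gzip, br"`, while a receiver accepts `"gzip,br ,  deflate"` just as readily.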
A
I've marked this as needing data, because I think we left this at a previous meeting saying we wanted to get a little data about what's out there. But just to remind folks: we had this issue of, you know, header field names being defined as tokens, and that's actually an extremely permissive syntax, and so the suggestion was that maybe we can cut it down a little bit. And I think folks latched onto that initially because they thought, okay—
A
—if we can constrain the syntax of field names, then that has potentially some security benefits: when you have strange characters in field names and they're put into things like the environment, or other places, they can have surprising and sometimes dangerous effects. And then I think we got cold feet on that—
A
—a little bit, in that folks felt that if we constrain the syntax in any significant way, we could be breaking deployed applications.
A
And then, yeah, Willy said what HAProxy accepts. We've discussed it twice now, and at IETF 104 we were still seeking data on the characters used in the wild, looking at the HTTP Archive — I took an action, I think, to do that. Do we have any further thoughts about that? I just want to keep it on people's minds.
A
If we do discover that in the wild these metacharacters are used — not the delimiters, because those are already excluded from token, but the non-ASCII, non-letter, non-digit characters that are uncommon in tokens — are we comfortable getting rid of those, or at least cautioning against them? Perhaps. Any thoughts?
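To make the contrast concrete, here is a small sketch of checking a field name against the full token grammar versus a narrower letter/digit/hyphen subset like the one being floated. The narrow character set here is an assumption for illustration, not an agreed-upon proposal.

```python
import string

# Full "token" character set from the HTTP grammar:
# letters, digits, and these specials.
TCHAR = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")

# ASSUMPTION: a hypothetical narrowed set -- lowercase letters,
# digits, hyphen -- roughly the "common" shape of deployed names.
NARROW = set(string.ascii_lowercase + string.digits + "-")


def is_token(name: str) -> bool:
    """True if the name is a valid token per the current grammar."""
    return bool(name) and all(c in TCHAR for c in name)


def is_narrow_name(name: str) -> bool:
    """True if the name fits the narrowed, case-insensitive subset."""
    return bool(name) and all(c in NARROW for c in name.lower())
```

A name like `X-Custom_Header!` is a legal token today but would fail the narrowed check, which is exactly the kind of deployed usage the data gathering is meant to surface.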
A
And if we could get that pushed all the way up to the registry, so that you can't register them at all, or you have to have an exception or whatever, that'd be good. Okay, I'll do that work then — I actually have to do that work for something else right now, so I can put the two things together. No, that's fine, that's fine. I think we're done.
A
So, just stepping back: I think Roy and Julian and I are planning to get together around the QUIC interim in early February or late January and work on the drafts for a few days, and try to churn through some of these issues, especially the editorial stuff. We did a similar thing during httpbis and that was quite productive, so we're hoping that that'll get us to the point where this issues list is much, much smaller and we'll have some drafts to review in the working group, maybe around Vancouver. So start thinking about—
C
A
C
A
We can't really retroactively define a scope for the header in a meaningful way. If you wanted to define one that's tightly scoped, it's probably best as a new header. It is defined as something that's vague, and so it's going to be used in a lot of different ways where it is used, and narrowing it down is probably problematic, unless it's for a specific use case that says: use the Retry-After header to do X. So.
B
B
Heuristics may be your best bet without the presence of that other information. For instance, if it's attached to a 503, often that means the server's overloaded, or something along those lines, and you might want to treat that as applying to all requests that you might make to that server. But it doesn't explicitly say anything like that; it just says use your heuristics, or whatever information you might get. I don't know if you want to consider incorporating that text or something along those lines, but—
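The heuristic just described could be sketched like this on the client side. This is a speculative illustration of the speaker's suggestion, not spec text: Retry-After itself carries either delta-seconds or an HTTP-date, and the "scope" notion below is purely the heuristic being discussed.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime


def retry_after_seconds(value: str, now: datetime) -> float:
    """Parse a Retry-After value: either delta-seconds ("120")
    or an HTTP-date, returning seconds to wait from `now`."""
    if value.strip().isdigit():
        return float(value.strip())
    return (parsedate_to_datetime(value) - now).total_seconds()


def retry_scope(status: int) -> str:
    """Heuristic scope (ASSUMPTION, per the discussion): a 503 often
    means the whole origin is overloaded; otherwise treat the delay
    as applying only to the one resource."""
    return "origin" if status == 503 else "resource"
```

So a `503` with `Retry-After: 120` might pause all traffic to that origin for two minutes, while a `429` on one resource would only back off that resource.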
A
G
A
B
A
A
Okay, thanks. So, the issue you were referring to, Roy — oh, sorry.
C
A
C
So, what we're working on right now — literally what I was drafting on the flight over; I read it last night and it's not as good as I wanted, so I didn't submit it. Anyway: the notion of authority for HTTP, and the notion of what an origin server is in HTTP, is sort of enmeshed in the old way we think about the Internet in terms of using TCP and HTTP — you contact an origin server on a certain port, and that defines the authority for HTTP and for HTTPS.
C
C
There is a specific way of defining what the authority is, in the sense that we define the host of the origin server, the port, and the scheme as defining the authority for HTTP. But then, for TLS-based services, what occurs is a certificate handshake, which is applicable for the host regardless of what port it comes in on, because the port is not a trusted interface — and, of course, Martin can explain this much better than I can. Anyway.
C
So what we're trying to do is rewrite the definition of what the HTTP authority is, so that it is applicable not only to defining which server you're talking to, but also to allowing things like alternative services, and HTTP/3's use of QUIC, to take over that authority — to be able to represent itself as the authority even though it's not talking TCP over the specific port. So it's basically redefining it as: here's—
C
—what authority means for HTTP, but here are these other ways in which that is also accomplished, without deprecating any of the existing ones. So that's the goal. Hopefully I'll get something finished tonight or tomorrow, and of course it won't be ready for review until later in the quarter, but that's what I'm working on right now. If you have anything that you want to be absolutely sure I include in that, feel free to add it to the issue — whatever issue this is. Which one is it? 37? 37, yeah. Okay.
A
L
L
L
L
We might use structured headers to define the specification; we may want to use an upper bound for RateLimit-Reset, which is delta-seconds; and there has been some discussion about header names, which we can postpone until once we've adopted the specification. Next. Thanks — I have to thank a lot of people for the initial contribution, that is, Mark—
L
L
Our goal, actually, is to put everything on common ground, because this kind of header can work only if the semantics are standardized, while currently we have a high proliferation of headers, and this means that clients just ignore them, because clients don't know which rate-limit headers they might find. In some environments there are twelve, and a client can't just iterate looking for twelve possible rate-limit headers. So.
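The standardization argument above can be illustrated with a minimal client-side sketch of reading the three fields the draft proposes (RateLimit-Limit, RateLimit-Remaining, and RateLimit-Reset as delta-seconds). The parsing details here are an assumption for illustration; the draft's exact syntax (and whether it ends up on structured headers) was still open at this point.

```python
def parse_ratelimit(headers):
    """Read the draft RateLimit-* fields from a response header map,
    returning a dict or None when they are absent/unparseable.
    A client that finds nothing simply ignores rate limiting --
    the behavior the speaker says proliferation currently forces."""
    h = {k.lower(): v for k, v in headers.items()}
    try:
        return {
            # Limit may carry optional parameters after a comma;
            # keep only the leading quota value (an assumption).
            "limit": int(h["ratelimit-limit"].split(",")[0].strip()),
            "remaining": int(h["ratelimit-remaining"].strip()),
            "reset": int(h["ratelimit-reset"].strip()),  # delta-seconds
        }
    except (KeyError, ValueError):
        return None
```

With one standardized name set, this single lookup replaces iterating over a dozen vendor-specific `X-RateLimit-*` variants.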
I
L
B
B
D
B
I think that's highly applicable here. This is one of those cases where these things will apply across different scopes, and it may be that you need a scope parameter. I'm thinking about the case where you have a forward proxy that applies one rate limit, and then you have back-end servers that are responsible for parts of the space, each with independent rate limits — and when you get a response back, you're going to get multiple rate limits back, and some of them will apply across the entire server.
B
N
Relaying for Chris Lemons: it's going to be important to think about per-hop considerations, because in some cases a proxy needs to communicate to a client that a given request is out of limit, and in some cases a proxy might wish to retry a request after the limit has expired, transparently to the client. And then, relaying for Thomas Peterson: is there a reason why this spec isn't making use of structured headers? This could apply to both the optional fields and replace three headers with one.
A
Okay, thank you. So I had two quick questions: one for Roberto, one for the room. Roberto, as you mentioned, there are a lot of folks who are doing this in the wild, especially for things like HTTP-based APIs. Have you engaged with those communities? Have you had discussions with them about your proposal at all?
L
L
A
C
A
All right, I don't think we're ready to do a hum or anything quite yet, but this is something I think is definitely on our radar. Please continue to work on the draft, please continue to engage with the working group, and we'll have more discussion and see where we're at next time. Okay, thank you, Roberto. Yes, thank—