From YouTube: IETF105-HTTPBIS-20190722-1550
Description
HTTPBIS meeting session at IETF105
2019/07/22 1550
https://datatracker.ietf.org/meeting/105/proceedings/
D: Hello, welcome to Monday session one of the HTTP working group. We have a pretty full agenda today, mostly made up of talking about our active and adopted drafts, including a couple that are new on our plate since the last time we met, and we're just about ready to go. We're also going to talk about prioritization a little later on, so we'll have a chance to get to that on the agenda. Mark has put the agenda up, and the blue sheets have started to be passed around already, which is great.
D: We've also established the scribes and minute takers, so thank you for your volunteerism to the IETF. Up on the screen now is the Note Well; the Note Well governs your contributions to the IETF and the intellectual property thereon. If you have questions about the Note Well, come chat with us, or even better, our AD. Next slide: check, check and check. Thank you again, everyone. And here is our agenda; you can see the list of active and extension drafts on the official agenda. Would anyone like to make changes thereupon? Going once, going twice, sold. All right, so we'll kick it off. The first draft we're going to talk about has actually been adopted since the last time we met together, and I believe Lucas is going to do the update and presentation on it. Thanks, Lucas.
E: As you see, this is from 2002, done by Mogul, and it tried to fix a problem at the time; the draft goes into a lot of discursive text about what that problem was. It was even updated in 2010 by another RFC that just added algorithms. On the next slide or two I'll explain this, but I'm not going there yet. It had this terminology around instances and entities; I don't want to get into that.
E: Let's get down to the core of what this thing is, because in practice it's actually quite simple, and there are a few edge cases that jump out, so we try to improve upon them. In simple terms, the digest is just a hash over a representation's payload body. So, going to the next slide: as an example, you'd make a request for a thing and get a response back, and in this case it's HelloWorld.
E: We have a sha-256 hash of that body, so the Digest header field is composed of the algorithm that you're using to calculate the hash, and then the hash itself. The format of that hash can vary; it could be base64 or some other thing. That definition is basically tied to the algorithm name, and the list of algorithms held at IANA is what's listed on the right-hand side there. And I've gone to the next slide.
E: I'll come on to that in a second. But what is the new digest thing? It's, as I said, minimal: trying to keep the same semantics while using the newer RFC terminology, so representation rather than entity, and representation data rather than instance. I don't really want to get into what those things mean, but maybe if people need clarifying at the end we can do that. But there are those edge cases I mentioned, around when you're doing different range requests or content encodings.
E: Some of the things that have happened in the meantime, in the 20 years, are security considerations for things around signatures. Digest is quite often used to protect the payload of a response, and a signature header is used to protect the metadata, with the digest among the metadata included in it, so you can create a better value proposition by doing those things together; but there are those considerations. So, on to the next slide:
E: Algorithms. The security landscape has changed in the last 20 years, so things that were defined in that header, like SHA-1 and MD5, are now not recommended, and, as was said, we added these things called identity, or content-encoding-independent, algorithms. It can be a bit confusing sometimes, if you get a digest back, to understand exactly which content encoding was used to create it, and the draft goes into more detail about those. Again, it's quite simple.
E: Yes, next slide, please. There's a bunch of open issues that need input, if anyone cares to add anything. They are presented here in order of, kind of, divisiveness; they're not urgent, but some of them are harder to debate than others. You can see, if you look at the bottom: do we need a threat model? Maybe. It's difficult to say, but it did get mentioned that digests often end up being used together with signatures.
E: Do we want to get into how to use this thing? Personally, I'd like to focus on making the message clearer about what this header is, and say that any guidance on how to use it should probably live elsewhere; but I'd really like opinions on that. How to use this thing with PATCH requests, for instance, filling weird gaps that exist in the current document in the right way. So if anyone cares, please go on to GitHub; I don't think we need to go through them one by one now, just for time's sake.
E: More or less, I think the question might be: is the new identity digest algorithm value of use to people? I suspect it is, yeah. Anyway, one comment has been: how much does this add that's new? My response would be that it doesn't add anything new; the new algorithms help clarify the usage of this header. Yes.
F: So the other thing to think about here is the relationship with SRI, and getting that clarified would be kind of interesting. I don't know whether it's our responsibility or someone else's, but it's worth thinking about. I don't know whether you want me to open the issue on that one or not. Yeah.
F: The question that I would ask there is: if I have an SRI attribute on a link, and I follow that link, and I also have a Digest header field on that response, would I expect the sha-256 value to be the same in the SRI and in the Digest header field? If you can say yes, that would be really nice, and if you could say as much in the document, that would be even nicer. Okay, recognized: Roberto.
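The equivalence being asked for can be checked mechanically; a sketch under the assumption that no content coding is applied, so SRI and the Digest field hash the same bytes and only the spelling of the prefixes differs:

```python
import base64
import hashlib

def sri_value(body: bytes) -> str:
    # Subresource Integrity spells it "sha256-" + base64 digest.
    return "sha256-" + base64.b64encode(hashlib.sha256(body).digest()).decode()

def digest_field(body: bytes) -> str:
    # The Digest field spells it "sha-256=" + base64 digest.
    return "sha-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()

body = b"Hello, world!"
# The octets after the differing prefixes are identical:
assert sri_value(body).split("-", 1)[1] == digest_field(body).split("=", 1)[1]
```

Range requests or content codings would break this symmetry, which is exactly the kind of edge case the draft calls out.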
G: Roberto Peon. Good to have another Roberto around, by the way, hello. Question: we talked about binary or structured headers in the past, and this intersects here, because the header name and how it is encoded are the same. If we are actually talking about having binary representations, where you don't have to worry about an encoding of the hash itself, it would make a lot of sense to make sure that the hash name there is well known; I'm sure you can call it binary or something like that.
F: Someone has already done that; the media-type gods have been summoned. One of the questions that I had, and this relates to a similar sort of discussion we had about content coding a little while ago, is: what are the principles that we use to drive the process of defining new digest schemes? Obviously there's a couple in here, some of them quite simple.
F: We have some schemes that are increasingly complex, and some of them are parameterised in interesting ways, and it's not clear from the current system how we're supposed to encode that sort of information. So, for instance, look at the proposal that Jeffrey and I have been working on, for sort of a progressive digest system.
A: What you were saying makes me think that we need to think about that more carefully, because it's not only content encoding. You know, you have identity ones, for example, but we also have things like partial content versus full content, and other dimensions; how is that going to stack up?
F: That's slightly different from what I was talking about, and yes, I agree we need to talk about that one as well. If you're talking about the progressive one: to provide context there, there's a block size at the start of the stream that then determines where the progressive hashes appear, interleaved through to the end of the stream, and you need to know that block size in order to render the content. And I think it was Roberto,
F: not this Roberto, the other Roberto, who mentioned that when you remove that content coding and just save the file to a disk, you might still want to be able to go back, take that hash and prove that it still applies; but you've just lost the metadata that was encoded into that string, in an alternative representation.
M: Okay, so apparently I missed the memo that we were supposed to talk about this, but yeah, there's a draft. There's like one ambiguity between TLS 1.3 and HTTP, around key update versus renegotiation versus post-handshake certs. There's like one important sentence in this draft plus a bunch of filler text; I think it's hopefully fairly straightforward.
D: So that has actually been the feedback during adoption, and I think what we wanted to use this meeting time for was to see if there were any other issues that need to be opened here, because I think this is great; and if there aren't, maybe we should talk about working group last call. Martin, are you gonna ruin that dream? Martin Thomson.
P: Hey, my name is Piotr Sikora. I'm going to talk about the Proxy-Status header that Mark Nottingham and I put together and presented at the last IETF. As a quick reminder: Proxy-Status is a header that contains detailed information about why particular requests failed or succeeded in their journey through various intermediaries: CDNs, reverse proxies and whatnot. This is not a new concept; it's been done many times. We're just trying to standardize it.
P: The first of the issues is about adding detailed status types for HTTP request errors. Right now we have about a dozen types for response errors, but we have only one for request errors; it's called http_request_error, and we piggyback on the status codes to convey the information about why the particular request failed. This is kind of unfortunate, because the header is not self-contained: recipients need to look at both the proxy-status type and the HTTP response code to figure out why the request failed.
P: It also means that we are kind of constrained by existing HTTP status codes in the errors we can represent, and it's much harder to extend this in the future. The internal feedback I got because of those issues is that you would basically put the details in the freeform details parameter; but since it's freeform, we wouldn't really be standardizing anything. So that's kind of unfortunate; that's the downside of this issue.
A: In the current design, a new status code doesn't require any changes to this. It just means that, if it's a 411 Length Required, say, all that the presence of the proxy-status field means is that this was generated by a proxy, not by the origin server behind it. And as a reminder, in this draft we use "proxy" somewhat generously, to mean forward proxy, reverse proxy, gateway: any kind of intermediary node, right.
P: The only exception, where that's not true, is the http response status type, which basically means that the response was from the next hop, as-is, without any modifications. Mark thinks it's kind of confusing, and the suggestion here was to add a new informational Proxy-Info header that would carry the information about each node. So we would still retain the ability to do HTTP tracing and stuff like that, and keep Proxy-Status only for the responses generated by the intermediaries.
A
And
so
the
idea
here
is
is
that
when
you
see
a
proxy
status
header
field
on
a
response,
you
know
that
the
status
code,
that
the
actual
response
was
generated
by
an
intermediary.
If
we
have
this
split
where
as
proxy
info,
would
be
information
to
added
incrementally
to
the
response,
you
know,
whatever
information,
you
know,
opportunistic
information
about
the
connection
that
we
might
use
for
debugging
or
whatever,
without
having
that
response
generated
on
the
actual
proxy
node.
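To make the proposed split concrete, the two fields might look roughly like the following; the node name and parameters below are illustrative inventions for this sketch, not values taken from the draft:

```python
# Proxy-Status: present only when the intermediary itself generated
# the (error) response.
proxy_status = "Proxy-Status: proxy34.example.net; error=connection_timeout"

# Proxy-Info: opportunistic per-hop context attached to any response,
# e.g. for debugging, without implying the proxy generated the response.
proxy_info = "Proxy-Info: proxy34.example.net; protocol=h2"

print(proxy_status)
print(proxy_info)
```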
I: Chris Cummins from Comcast. So under the previous proposal it was possible, if I understood correctly, for multiple proxies in the chain to have generated responses, right? And there could theoretically be, you know, one of the proxies returning a cached error, and then another proxy saying: well, I got an error from upstream, and so I generated this response code.
A: Yes. And so, yeah, this is a larger discussion of how we're decomposing these different mechanisms and making sure that they're well aligned. You know, this is really just about intermediary nodes and their behaviors, whereas that draft is about caches, which can be intermediaries, but can also be in other places.
Q: Okay, can you hear me now? Yep, okay. Just in general: I know there are people who are very fond of these very long names in header fields; I hate them, particularly when they're repeating something that's a three-digit code in the HTTP response already. So how about, in the cases where the proxy-status is just indicating that the proxy is sending this code, just using the three-digit code instead of the long http_response label?
A: So, if we do this split (and that's a different issue, really), then the Proxy-Status header field will only occur in error responses, which are relatively rare. So while I generally agree with your sense there, it's not like it's gonna be in every message.
P: Good example. The second thing: basically the main target of this header is responses from the origins, right? And right now, basically the only solution we have in HTTP is 502 Bad Gateway or 504 Gateway Timeout. We here have defined at least twenty or more status types that describe in more detail why this happened, and this is something that other companies, not all, but a lot of companies, especially CDNs within the path of the request, really need.
R: Bryan Call, Verizon Media. So we do this in Apache Traffic Server; we have something like this, but we do it via the Via header, and in there we encode every host, and then we basically have all the status codes of what happened: if it's RAM cache, if it was disk cache. We encode all that stuff, like error codes and everything, and one good thing about that is having them for each hop.
R: You know, you have this hostname, and you have all of the status codes for each one of them. Instead of this, you're gonna have to parse it out across multiple headers and figure out exactly what the status codes for each one of those are. So it seems like you would want... couldn't you add this to, say, Forwarded or something like that? Then you have all of the protocol information, and you have all of the... Forwarded is a request header, though.
A: So that's a much larger discussion. We've got the cache header: would you have intended that to carry the cache state? Yeah, Proxy-Info is basically the intermediary state, yes; and this, Proxy-Status, if we accept this proposal, is just for a proxy-generated error page, effectively. Yep. So that's the current breakdown.
R: ...a special program that takes the 20, and you can have an extended version of it; it was just like 24 characters, and then, yes, you translate it. I agree that's kind of difficult for end users; for experts and stuff like that it's not, but for end users it may be very difficult. But I think something that shows you your hops and gives you status in each one might be easier to do than having it broken up into multiple headers, right. Yes, that's my opinion.

D: We're gonna have to cut the line after...
A: ...in a lot of people's minds, unusable for anything useful, and so CDN-Loop effectively abandoned that for some purposes and pared down to a very specific purpose. CDN-Loop is a request header, and it's targeted just at certain kinds of nodes, so it's very separable from this. Cache is for caches, and so a cache would generate it on a response. Sorry, Cache... what are we gonna call it? Cache-Status? Yes, yes, Cache-Status, yeah. Take that to a PR.
A
This
is
for
intermediaries
and,
as
I
explained
before,
the
breakdown
that's
being
proposed
is
to
break
that
between
just
general
intermediary
context
and
intermediary
generated
errors
which
are
different
things
because
from
a
CDN
standpoint,
for
example,
you
often
have
to
add
a
little
bit
of
extra
context
to
responses
for
debugging
and
that's
a
very
typical
thing
to
do.
But
an
error
page
often
people
will
write
scripts
depending
upon
that.
F: That makes me think that the second direction that you're proposing is, for at least a Proxy-Status header field, a reasonable course to take; but I'm a little concerned about sort of submarining in a whole new proxy information field that's somewhat like Via, but not entirely. It's effectively a sort of "well, that one didn't work out, so we're just gonna make another one".
A: I'm willing to split it off, and I don't think it's going to be a problem; but one of the reasons for a separate header is that the semantics are a little bit different, yet very rich on both sides, and smashing it all together is gonna make it a really complex header.

D: Thank you. Okay, next up is the cache header. Hello.
A: The other issue was a little more interesting. Alex Rousskov, who, if folks don't know him, is from the old Squid days, said on-list a while back, when this first came out: if we're gonna standardize this, then it's reasonable to ask whether we want to just effectively pave the cowpaths. You know, the X-Cache header (which is why we call it Cache originally; you just take the X off) started off in Squid, and maybe even back in Harvest; I forget.
A: That might be something you can answer. And it was created over time in what arguably is a really non-optimal way. And so the question is: do we want to replicate that, because everybody's very comfortable with it and understands what those things mean, including not only implementers but also people who are consuming these things; or do we want to refactor it, start from scratch effectively, and make it a little cleaner and a little more obvious?
A: I took a strawman approach to how we would break this thing up into different facets, and I put that into the issue, so people can take a look at that. I want to hear people's impressions of which way they want to go on this, especially from implementers, because this thing has to get implemented, and it matters whether it's "oh yeah, it's just like X-Cache, therefore I can straight-up reuse that code" or "I can't". Yeah.
T: Colin from Comcast. So the one comment I was gonna add, to the previous discussion as well as to this one: we currently emit these as part of the Server-Timing headers that we emit, because that's picked up and available in JavaScript land; so the semantics are now duplicating, in this context of having normalized structured data but also having it represented there. So one consideration would be to see how we can merge some of these, or have a bridge into the HTML spec,
T: so we can have access to some of these headers for beaconing purposes. That's the purpose we use them for: to collect statistical data from the client side on performance, etc. (Sure, so we can probably have that conversation, maybe with the Fetch folks, for sure.) The second thing is: do you have a notion of opting in to these additional headers, for a debug-mode flag?
A: I think, for the scope of this, we haven't discussed it explicitly, but my assumption has been that currently it's a case-by-case basis. Different CDNs have different approaches to turning on debug mode, and they have different threat models for exposing that information. Likewise, different proxies are going to be configured in different ways, and so if we try to come up with some sort of negotiation or triggering mechanism for this, I think we'd probably raise the chances that this is gonna fail.
A
So
right
now
we're
just
gonna
focus
on
defining
the
semantics
and
the
syntax,
and
then,
if
we
can
later
down
the
road
gets
agreement
about.
How
does
one
turn
on
debugging
for
CD
and
Celeste?
Reverse
proxies?
Hey,
that's
great,
but
but
we
don't
have
to
couple
it
to
defining
these
Semitic
since
in
taxes.
My
thinking.
U: Matt Stock, Limelight Networks. Yeah, in regards to implementation, I think that unless somebody was really driving us to actually go and refactor this, it would be a hard sell to actually go and do it. So as much as I like the idea of refactoring and making it clean, I think in practice it would be tricky to do.
A: That actually brings to mind, Mike, one of my concerns about keeping the current approach: from what I've observed, different reverse proxies and forward proxies and CDNs all use the Squid X-Cache tags, but they all mean slightly different things, and there's gonna be a great temptation for everybody just to say: okay, let's just take that code, put a new header name on it, diddle the syntax, and we're done. And they won't be done, because now we won't have good interop. So I'm a little worried about that.
I
Yeah
I
was
gonna,
bring
up
exactly
the
point
that
you
you
just
made.
The
the
different
caches
and
intermediaries
have
slightly
different
interpretations
of
some
of
these.
These
words
and
to
me
some
of
the
value
is
defining
the
words
very
specifically
and
concretely
with
all
capital
letters
and
then
interrupts.
A: My current inclination, after listening to this, is that if we're able to write down a new thing accurately (which is maybe a big if), or precisely enough that it can truly be held to, we might get something out there that gets more interop, but gets there more slowly; it'll take more effort for implementers, but be better. Yeah.
J: Alessandro Ghedini, Cloudflare. So the more complicated one can, in the end, be implemented by just taking the previous status and then sort of converting it,
J
Splitting
it
up
in
different
values,
so
I
think
it's
it's
not.
It
wouldn't
be
that
hard
to
implement,
maybe
not
directly
in
say
the
the
web
server,
but
they
may
be
in
something
more
high-level
that
customizes
the
responses
so
I
think
that
would
be
fine,
they're
only
kind
of
problem
I
would
that
is
at
least
for
us.
This
information
is
mostly
intended
to
be
consumed
by
humans.
So,
like
you
do
the
response,
you
do
the
request
and
then
you
get
the
header
and
then
you
immediately
see
it's
a
hit
or
a
mess.
G: Roberto Peon. So the browser also has a cache, and probing that cache, or requesting things from that cache, has been either non-existent or problematic from the point of view of the application. For instance, there's this push thing, and we have no way of knowing that we got something because of push. Let's say that's a separate issue, potentially, but maybe not; and I think there's a separate question we should be asking about this, which is how we should be using HTTP for probing the local cache, as opposed to a remote cache.
R
Brian
call
patchy
traffic
server,
so
we
looking
at
all
the
different
status
codes
you
have
for
cache.
The
only
one
that
we
went
ahead
and
expanded
on
from
the
squid
codes
was
the
refresh
Ram,
so
we
actually
specify
if
it's
actually
from
Ram
cache,
which
is
helpful
for
us,
and
that
would
be
the
only
thing
that
I
would
add
one
there.
Okay,
thanks
well.
A: ...we need a good extensibility story, and then we want to drive people towards common values, because that's the whole point; but yeah, there needs to be an escape valve, and that's the discussion we need to have, definitely. But I wanted to get the general high-level shape for it together first. Thanks.
A: Variants is a little more venerable. Definitely we haven't made much progress on this, because we put a pin in it to wait for implementation experience, and I have assurances from my one generous and unnamed potential implementer that he or she might be able to get to something soon. It is also being used by the exchanges proposal, for somewhat different purposes, but there's still an interesting use case there, and what's nice is that that also kind of validates it.
A: So my thinking at this particular point is that I might want to put a new draft out, to see if that beats the bushes for people who might want to go and play with it a bit more. Beyond that, I don't really have anything to report. I think a month or so ago I went through and addressed some of the open issues, especially those that Jeffrey had opened against it; I think there's still a couple, but it's just iterative stuff. It's not...
A: So I haven't talked to the other chairs about this at all, as to whether this is in scope or not, but there's the issue of using the term "header field"; that's a core issue, though, not an issue for this. I think it was Ben Kaduk, when he was doing a last call review of 5785bis: one of the things that came up was, do we need more advice about when and how to use well-known URIs? That's something we could potentially address here or in another context.
A: I frankly forget where that was, but some people are talking about when you use URI components versus minting new headers; some advice about that might be good to put in there, or at least examples, to help understand the trade-offs when you make that choice while designing a protocol.
A: Yeah, some advice about when you... and this has come up recently in a couple of different working groups using HTTP: the idea that a response has this inherent property of being either fresh or stale, and what that means in relation to your application when it consumes that response, is something that applications probably need to at least be aware of, if not talk about, when you define a specification.
D: So my inclination is that if people want to provide text, we can essentially treat this, while it's in this sort of interim state, as a living document, if you will, if we have the energy to make these kinds of updates before it's ready to technically go through last call; but that would be driven by the presence, or not, of text coming along.
V: So, basically, last time around we added the required-domain extension, to try and limit the possible attack from a compromised cert being used by an attacker; that helps with some of the situations. The lingering angst that has been relayed to me is around the DNS check, and I think the path forward there may just be to try and make that not be a separate issue from this document, and just say when you would do a DNS check.
V: So we don't currently have an issue for that, because we thought we had resolved it with the extension, but we can certainly open an issue to clarify that text; and if anybody wants to come up to the mic and suggest text, I'd prefer a PR, but if we want discussion around it, that's good too. Other than that, the two open issues coming into this week were editorial; each had a PR, and I've merged the PRs. We now have no open issues, so I would say the main thing between us and last call is some actual implementations.
A: So we've had a bit of a burst of activity on structured headers recently. If I go down to the changes... can you see that in the back, can you read that? How's that? Great, thanks. So the important ones since draft ten are towards the end. 781 we closed: we allow empty dictionaries and lists now. So in the data model those structures are allowed to be empty, whereas they weren't before; they're serialized as the header not appearing on the wire, is how it's currently spelled.
A: 797 is not important for discussion. 816: allow inner lists in both dictionaries and lists, which effectively obviates lists-of-lists as a separate top-level data structure. So now, in most places where you had a member in those structures, you can either have a single thing or a list of things; and after a long and enjoyably fruitful discussion of syntax, we ended up using parentheses and whitespace to delimit those inner lists, and it looks okay. And finally, 839 was even more interesting.
A: We subsumed parameterised lists into lists, which means that basically list items can now have optional parameters. The winning argument for that seemed to be that when you define a structured header, it may be that in the future you want to add parameters to it, and if it's defined as a plain list, well, that's awkward; it's not backwards compatible. Whereas if all list members can potentially have parameters, then you can add them retroactively.
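The shapes being described (a list whose members are items or parenthesized inner lists, any member optionally carrying parameters) can be sketched with a toy serializer; the syntax below follows the discussion in the room, but the draft's serialization algorithms are the authority:

```python
# Toy serializer for the discussed shapes; parameter handling here is
# simplified (bare key=value) for illustration only.

def serialize_params(params: dict) -> str:
    return "".join(f";{k}={v}" for k, v in params.items())

def serialize_member(member, params=None) -> str:
    if isinstance(member, (list, tuple)):
        # Inner list: parentheses with whitespace-separated items.
        s = "(" + " ".join(str(i) for i in member) + ")"
    else:
        s = str(member)
    return s + serialize_params(params or {})

# A list of: a token, an inner list, and a parameterised token.
value = ", ".join([
    serialize_member("a"),
    serialize_member(["b", "c"]),
    serialize_member("d", {"x": 1}),
])
print(value)  # a, (b c), d;x=1
```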
A: I still have to do a lot of work on the test suite and on the sample implementation that I have, to validate all these changes and make sure the algorithms are absolutely correct; but we've had multiple eyeballs on it. We've got some fairly detailed feedback from PHK on the algorithms, and I feel like they're in pretty good shape.
A: We might have a couple of bugs in there, but hopefully, when I get a chance to sit down, we'll be able to really make sure that the test suite is still representative and that it passes, and then the other implementations can update as well. So those are the recent changes; now the open issues. It is freaking cold in here.
A: You know, the people who need that kind of precision can put something into binary or put something in a string; those use cases in HTTP headers are relatively uncommon. And if somebody does want to use that kind of precision, and it's not interoperable or easy to implement, it's much more likely that we'll be adding to their problems, not helping them. But that's my personal feeling; I'm very happy to be convinced otherwise.
F: Martin Thomson. Can we confirm that this is not just a particular C library implementation that we're hitting, and that it's in the C language specification, and that the similar functions in other languages are no different? (So you're asking for research and data?) Yeah, if you're gonna make this sort of change; sure. Otherwise I'd be tempted to say: well, why not just roll your own stringifier for the float? It's not a huge amount of code.
A: Right, we do need to do a little more digging here. My assumption, which wasn't terribly well thought out, was that if you look at Python and Ruby and so forth, they're all going to be based on the C libraries. (What about Rust? What about all these other ones?) Yeah, sure. But the point is interop; we have to have these implementations talk with each other.
A
Right
and
then
actually
I
think
in
our
private
discussion
when
Patrick
can
we're
talking
about
this
one
of
the
things
that
I
flooded
was
maybe
somebody
uses
in
sand
like
if
you're
mapping
from
Q
values,
then
you
say
well.
This
is
the
mapping
for
Q
values
into
this
different
structure.
Well,
the
only
thing
you
really
lose
is
the
ability
to
essentially
take
an
existing
header
that
uses
floats
and
pars
it
as
a
structured
header
without
changing
its
identity,
but
that
may
be
a
reasonable
trade-off
as
well.
G: One of the big reasons to use floats is so you can get a lot of digits past that decimal point, and while this trade-off may be a perfectly fine one, let's make sure that the name actually reflects the fact that it carries a really huge trade-off in precision; so that in the future, maybe, if something comes along that can serialize a float in a format that's not so expensive, it would be nice to actually be able to represent it.
A
I think, well, this came about because we were relying on a shared concept of float that wasn't well spelled out in the spec, and now it sounds like we're spelling it out to the degree of precision that we spelled it out with. So we'd have to do that process for this thing in general, I think, speaking for Roy.
F
Martin Thomson: Now it's a naming race; no, a naming exercise. We have about fifteen digits on integers, right? Yeah, so just take six of those, put them on the right-hand side, and leave the rest on the left-hand side, and there's your limits right there, I think. Concretely, that's a serious proposal, and six seems like a convenient number. I don't know, someone might have picked five or seven; then we can argue about that one. But that seems fine.
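The split being proposed can be sketched as fixed-point serialization. This is a hypothetical illustration of the idea as discussed (nine integer digits, six fractional), with made-up helper names, not text from any draft:

```python
# Sketch of the fixed-point split discussed above: of the ~15 reliable
# decimal digits in an IEEE 754 double, reserve six for the fraction and
# leave the rest for the integer part. Hypothetical helper names.
def serialize_fixed(value: float) -> str:
    if abs(value) >= 10 ** 9:
        raise ValueError("integer part limited to nine digits")
    # Round to six fractional digits; no exponent notation on the wire.
    return f"{value:.6f}"

def parse_fixed(text: str) -> float:
    integer, _, fraction = text.partition(".")
    if len(integer.lstrip("-")) > 9 or len(fraction) > 6:
        raise ValueError("out of range for this fixed-point profile")
    return float(text)
```

The point of fixing the digit budget in advance is that implementations no longer depend on whatever their platform's general float printer does.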
A
Martin, could you write that down in the issue, so I don't forget? It's 848. Thank you. Okay, that's editorial. Star in parameter names: it was noticed, I think, when Patrick and I were talking; he asked if there was any use for star in parameter names, and I realized that star is used by RFC 5987, or, as Julian corrected me, 8187. This is the internationalization encoding for parameters in normal HTTP headers, and so the question is, right:
A
now, that is disallowed in parameter names, in what we call keys, I think, in the current ABNF. So if you wanted to map one of these headers into structured headers, you'd need another way to denote that this is the internationalized version of that parameter. Because, for those that may not have immediately paged it in as soon as I said 8187: the model there is that there are a pair of parameters, one that is plain ASCII and one that is internationalized, so that you can fall back to the ASCII
A
if you need to. That's the convention for those things, in things like Content-Disposition, and the internationalized version is denoted by, I believe, a trailing star, if I remember correctly. So it would be a non-trivial mapping. One thing we could do is allow stars in these parameter names; one further thing we could do, if we so felt inclined, would be to reserve that star for that very particular use case and not for anything else. Does anyone have any thoughts about that?
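For reference, the RFC 8187 pairing convention being described looks like this in a conventional (non-structured) header; the example is adapted from the Content-Disposition illustrations in the RFCs:

```
Content-Disposition: attachment;
  filename="EURO rates.txt";
  filename*=UTF-8''%e2%82%ac%20rates.txt
```

A recipient that understands RFC 8187 uses the starred parameter (percent-encoded UTF-8, here a Euro sign); one that doesn't falls back to the plain-ASCII one.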
G
If you allow it, the size of every key will be larger, because you will not be able to encode it as efficiently on the wire when we encode the keys, right? You can imagine, in other protocol revisions, that we encode the keys via a specific table, a Huffman table, for instance, right? Adding another character that is not used as often will slightly expand that. It's not a huge deal, but I'm just pointing out that adding a character to the acceptable input set
G
Those
actually
have
an
impact
on
the
size
of
the
things
that
we
said
in
the
case,
where
we're
not
using
it
right.
So
if
this
is
very
rarely
happening,
it
would
be
nice
to
disallow
in
structured,
headers
and
say
if
you
want
to
use
structured
headers.
Well,
you
could
ask
you
sorry,
but,
and
if
you
want
to
do
some
mapping,
you'd
do
it
in
another
header.
It's.
A
Roy asked that we add some text about limitations. I looked at this, and then I looked at the spec, and I feel like it's pretty specific about what the spec is trying to do and what it's not trying to do. So I don't know what you're asking for here, Roy; if you could give me some more information, or give me some proposed text, that would be great, because I don't know how to actually address this request.
Q
Okay, so, this is going back a little bit in time, so I don't mind if you close this issue, because really there are a lot of things that I found out only by reading the ABNF, with no mention of them in the text, particularly having to do with whether you can have empty lists or empty header values. But if you've already corrected that in the text, then this no longer applies. Okay.
A
If you find any of those, please do open issues about those specific things. We should definitely make sure that it's unambiguous.
A
My pushback, or my concern, has been that, you know, structured headers is all about presenting typed data to applications in ways that they can easily consume, in a highly interoperable fashion, and that interoperability story for URIs, unfortunately, isn't that great. If this thing is going to be implemented by browsers, it's highly, highly likely that they're going to use the Fetch, the WHATWG, specification of a URL, and that has a different API surface than what we would explain: different parsing and processing.
A
If that's actually likely to get implemented by browsers, I'm all for it, but I don't see that as achievable in the timeframe that we have in this working group for this document, and I really want to ship this document, because it's starting to get a lot of things depending upon it. That's where I'm at with this.
F
Martin Thomson: I'm inclined to agree, simply because it's going to be very difficult to specify this correctly, even if you wave your hands rapidly about all the various interoperability issues that we have with URIs. Do you allow relative URIs? All these sorts of questions start coming up; and what would they be relative to? It's tricky. And that's not even getting into character-encoding problems.
U
To back that up: yeah, I think this kind of runs counter to the whole "there are already existing implementations, and we're just going to do a cut and paste of what's there", and, unlike what we were talking about with floating point, we can't apply any realistic scoping reductions to make the problem simpler.
O
Ryan Sleevi, Google; works on Chrome and, sadly, one of the folks maintaining the URL side. I support what Mark was proposing there, which is punting this issue. And the unfortunate part of it is that one of the challenges with implementing this, and what benefits structured headers, is that you get that error-processing model, right? You understand what is a valid or invalid model; and the whole reason why we have that joint W3C (just to be clear, there is an MOU behind it)
O
and WHATWG Fetch spec is that challenge, which is that a lot of the URI processing model wasn't specified correctly in terms of error handling, and so there are issues that exist out there on the web that don't conform, and we deal with that mess, yeah. So the challenge of trying to do this now for structured headers is trying to define that error-processing model, and that is a yak shave that's been going on for nearly a decade. It's slightly more hairless now, which is great, but it's not there yet.
G
I might be alone here, but it seems like there are two advantages to structured headers, one of which we are talking about right now, and for the lack of that one we're excluding the other. To be specific: there is how we encode things on the wire, which could potentially be more efficient, and then there is how we represent it to the user, which, in this particular case, we're suggesting is probably going to end up being text.
G
If we can get some advantages by having a shorter encoding, because we know it's a URI and we know who we're talking to, that might be worthwhile, even if we just serialize it to text. So it may be that it's useful on the wire. So I think we should be talking about these things separately: how we present versus how we serialize or store, I think.
A
Part of my concern is that it's almost an attractive nuisance: if you define it as a URI, some implementations are going to present it with an API on top of that, and then that's going to cause interoperability problems. I mean, really, this specification is a big game of chicken, where we're defining this as precisely as we can to try and encourage a high level of interop, but it's always up to the implementers to actually toe that line, and if we have one major implementation that decides it's going to go off and go cowboy...
D
Okay, so we've got three options. The first option is we don't define anything for the URI and we leave the text as is. The second option is we decide it's critical to define something for the URI, and that needs to be proposed. And the third option is we don't know enough yet, and we want to leave this document in limbo and make the authors sad.
X
The first is the Sec- prefix. We had various people that are interested in adding a namespace for client hints, similar to, for example, the Sec-Fetch- namespace for various fetch-related request headers. And otherwise, the Sec- prefix is something that is pretty important from a CORS perspective, and from the perspective of making sure that these headers are something that only the browser can set, and that cannot be set by user JavaScript.
X
So, for example, if we have multiple of those namespaces, like a request header that falls into two namespaces, how do we represent that? So I'm wondering if anyone has objections to the Sec- prefix, because this one, I think, is critical for the fetch processing model. And then I'm wondering if anyone here has strong opinions regarding the CH addition to that, as the namespace for general request headers.
A
It's a convention, so it seems a little unnecessary, then, to add that. Like, you could say it's a convention: if you want to flag this, you know, just to humans, as "hey, this is a client hint", fine. But, you know, by saying it's a prefix, that kind of implies some sort of automated handling, and that's maybe causing the confusion here.
A
But if I define a client hint and I fail to call it CH-something, it's still a client hint for all intents and purposes; okay, nothing special would happen. Yeah, because when you start inferring things based upon prefixes, you get into a bit of a mess if there are multiple facets to the prefix; because, you know, if we define five more prefixes, then you have an ordering problem, yeah.
A
I would just say, you know, for security reasons, I think there's a whole other discussion to be had about whether Sec- is always required on the request header, or whether it's a case-by-case thing. I think there could be an argument that some client hints are okay to expose to JavaScript, but that's, you know, case by case. And so I personally would be like: okay, every client hint needs to evaluate whether it needs this prefix, and if you want to be friendly, put CH in the front, just so that people know.
F
This, maybe, was what I was just bringing up, right? And so Mark suggested that maybe that's not a strong requirement, and that may be the case. I thought about this in the past; whatever the conclusion was, I think we should stick with it. I like that framing of the thing that Mark had, so, cool.
F
We haven't fully investigated the propagation of client hints into third-party browsing contexts, which requires a little bit more thought on our part, and we'll find someone to look into that in a bit more detail, someone who's more familiar with what our policies are regarding third-party browsing contexts. On the question about whether this constitutes a new surface area for passive fingerprinting, and various other passive uses of information, I think we're getting closer on this.
F
On this point, the key concern seems to be right around the properties of the individual client hints that we're talking about, and some of them you can imagine being quite easy. So if we imagine that we had Accept-Language turned into a client hint, which I think we want to do, that's essentially static and very rarely changes, and DPR and some of these things are very static things. And so when a site says "I would like access to that", we can look at that.
F
Viewport is a little more interesting, because it does change over time, and there are a couple of things that do change over time. And then, at the extreme end, we have something like geolocation, which has constantly changing properties; but it may be the case that what we're concerned about there is that this is also a property that is behind a permission gate, or some other thing like that, and has additional policies around its use. And so we need to understand how that interacts with that as well.
F
So what we've suggested is that we start being very crisp about what it is that we're using to decide whether something's okay to use in this context, and be very clear about what principles drive that, and allow for different browser implementations to make different decisions about what they may or may not want to use in this context. So it might be that if we've got a property that's available to script, passively, that doesn't change very often, everyone's happy with that one.
F
So, with that, this may be the most fully formed and well-thought-out privacy considerations section in any RFC ever, so thank you for doing that. I know this has been hard, but I think we're getting pretty close to this being a good thing, and I do want to start using this for things like user agents. So, people, this is fair warning: User-Agent is now on death row.
X
Other parties on the wire, which (because this is restricted to HTTPS) are typically either CDNs or MITM proxies; and the claim is that this will enable those parties to log that sensitive information. And I believe the question sums up to: are CDNs part of the threat, like the privacy threat model? Because if you're MITM-ing TLS, or if you're terminating TLS, you can already inject JavaScript and do all kinds of bad things.
F
The
CDN
is
the
origin
from
the
perspective
of
the
of
the
other
browser.
So
what
the
CDN
wants
to
do
with
this
information
is
the
business
of
the
origin,
server
and
the
CDN.
They
consult
that
adults
themselves,
I,
don't
think
that
represents
any
special
privacy
problem
and
I.
Don't
think
it's
worth
docume
that
and
if
we're
gonna
start
talking
about
interception
props
and
proxies
we're
into
all
sorts
of
problems
and
I'm
not
signing
on
for
that
and
I.
Don't
expect
anyone
else
to
have
to.
O
So
Reynes
levy,
Google
and
as
much
like
it
could
rehash
something
that
will
mark
thought.
We
close
to
happy
there's
a
lot
of
discussion
when
we
were
developing
the
browser
web
crypto
API
in
terms
of
why
are
some
folks
gonna
use
this
well
in
the
common
cases
for
sort
of
client-side
encryption,
even
though
sir
Brian,
could
you
just
get
a
little
closer
I'm?
Sorry.
O
One of the discussions of why you would do browser client-side encryption for things was, as an example, to prevent information from being accidentally logged, right? There have been multiple security breaches of, say, vendors accidentally logging passwords and then causing issues. So, in that argument, this is not a threat from an adversarial threat model; it is a threat from an incidental or accidental operational failure. And this is the part that makes Mark unhappy: the reintroduction of a prefix to indicate to intermediaries that you perhaps should not log Sec-CH,
O
or anything in that prefix, on the basis that, if we assume that client hints, as Martin was talking about, may contain some identifying information or may themselves be fingerprinting, the ability to have that structure allows for filtration on the server side to prevent the incidental logging of that information, the same way that one might say you should filter out Cookie or other headers. In, you know, a structured header this might be a single field, so you might not need the CH prefix.
F
Martin Thomson: on that point, though, it is a good one, and I think Ryan's onto something that we need to think about more generally when we start talking about adding header fields. I tend to think that these are very much designed for consumption by CDNs; and so, accidental logging aside, this is something that CDNs probably want to log, because it's going to be changing their behavior. And so, if we think about it in that context, maybe we'll find that going to those extra efforts...
F
And I think that the right balance to strike here is recognizing that this could be identifiable information, particularly once we start to get to the point where there's enough entropy in here to narrow things down quite a lot; and so recognizing that, and just putting it in there, saying that when someone handles this information they treat it with respect. Motherhood and apple pie, but do it anyway.
X
Okay, and then new and exciting developments: HTTPSSVC enables us to solve one of the long-standing issues related to adoption. People wanted to use client hints as a way to perform adaptation on the navigation response, so typically on the HTML itself, and that has always been a thorny point. We added Accept-CH-Lifetime in order to address it in future HTML-related negotiations, but it didn't address the very first request, and pushing that Accept-CH signal to DNS will enable us to solve that problem.
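For context, the opt-in flow being described works roughly like this; header names follow the client hints drafts of the time, and the values are illustrative:

```
GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Accept-CH: DPR, Viewport-Width
Accept-CH-Lifetime: 86400

(a subsequent request, now carrying the opted-in hints)
GET /style.css HTTP/1.1
Host: example.com
DPR: 2.0
Viewport-Width: 320
```

The very first navigation request carries no hints, because the server has not yet had a chance to opt in; that is the gap a DNS-based signal would close.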
X
I
wrote
a
PR
that
adds
an
alt
service
extension
to
client
hints.
It's
not
clear
that
this
David
Benjamin
have
commented
on
that,
basically
saying
that
it
has
a
few
different
characteristics
from
other
things
that
are
currently
in
service.
So
maybe
all
service
is
not
the
right
answer
here,
but
I
would
love
to
find
a
path
forward
to
push
the
opt-in
to
DNS
as
well.
On
top
of
just
being
a
header,
I
think.
Y
This should be challenging. My name is Ian Swett, I'm from Google, and I'm talking about HTTP/3 priorities and, to some extent, HTTP/2 priorities. I talked about this briefly at the QUIC interim in London, and here's kind of an overview of where we're at today, and what some of the ideas of the working group members are. Excellent.
Y
So it all began with the coin flip. I was not there; I know some of you were, and I know there were two competing proposals, and the tree-based proposal was the one chosen. Next slide. So the HTTP/2 priority tree is essentially, you know, what you have here: there are weights, and, you know, nodes can have parents; streams can depend on streams, or they can depend on this implicit root.
Y
So there's been a fair amount of discussion, historically, on the list about the challenges of streams depending on streams in certain circumstances. There's also this concept of placeholders, which I'll talk about briefly later, but which is basically like a stream that doesn't really exist; it exists just so you can have something that you can reference for a very long period of time, and Firefox actually uses that across a session. More on that later; RFC 7540 has a much nicer description.
Y
So, given my time, I'm going to keep moving. Yeah, so one way of thinking about them, and the way I usually think about them, is that strict prioritization is implicit and encoded in the tree structure, and the weights allow you to share bandwidth between nodes, where the nodes may either be streams themselves or trees of streams. So, next slide.
Y
So how do browsers use HTTP/2 priorities? Next slide. So Chrome uses a linked list, essentially; it just puts everything in one big long list, and that gives it strict ordering, so it knows exactly which thing is higher priority than the other thing. This is very straightforward and maps relatively well to the five priority levels that it has internally in the browser. Next slide.
Y
Firefox creates this placeholder model, where it uses six, or sorry, five placeholders, and separates things into buckets based on whether or not they're render-blocking resources, or background, so on and so forth, and uses weights and dependencies to kind of trade off between them. But it uses weights a little bit more than Chrome does. Next slide. Safari does use weights, and, you know, kind of just puts render-blocking resources at higher weights than other things. I didn't actually put Edge in here.
Y
So let me give you a quick overview of a few of the others: both where we're at for H3 right now, as well as some alternative proposals, proposed on the list first by Patrick Meenan and then subsequently discussed pretty widely by working group members, including Lucas and Kazuho. Next, next, next; okay, sorry, I'm going to need to save my time. So, conceptually, the H2 priority tree that we discussed previously is really clean: you have two concepts.
Y
You have, you know, basically who your parent is, and a weight, and it provides you a huge amount of power and a lot of flexibility. But it has some challenges, and it provides a lot of functionality that, in reality, browsers really are not using and do not need, and some of that complexity embodies itself in the implementations as well. So, next slide. H3 priorities, actually: you can argue this, but most people think they're slightly more complex, as they are currently specified, than H2 priorities.
Y
They add explicit placeholders instead of having implicit ones; whether this is more or less complex than implicit ones may be a point of contention. But, in order to ensure consistency, all priority frames are now sent on the control stream. So you can't send priority as part of the request itself, because you may have issues with, you know: does this priority apply, or
Y
is this one to apply first? And you just basically get tree inconsistency, because you're trying to maintain distributed state at a distance, and you have no head-of-line blocking. So the only way to really fix that is to put them on the control stream; so you've actually reintroduced head-of-line blocking into a protocol where we would like not to have head-of-line blocking. However, we solve this by adding this orphan placeholder concept, so the idea is really:
Y
we probably want the default to be FIFO, not round-robin, and so, in order to achieve that, we've created something that's basically like: if you're not really sure how to prioritize this thing, you put it at the root and you service it after everything else. That's one way of thinking about it. It's also been proposed as a zero-weight option.
Y
I think functionally they're quite similar, but the goal here is to achieve FIFO by default, especially when the priority information is lost, because now it's on a different stream and it's not embedded in the request header. Next slide. So Patrick Meenan, on the list around January or February of this year, proposed something that's largely a SPDY-style numerical priority. It originally had two bits for concurrency; now we've simplified it to one. So this is basically saying, you know, everything has kind of a strict prioritization.
Y
You know, higher priorities are serviced before lower priorities, and then either you want a request serviced sequentially, so you want the entire response, all of it, or you'd like it round-robined with other requests. There are a few more details in the write-up, but this is based on his experience on the Chrome loading team, as well as some experience at Cloudflare. Next slide. And actually, I should go back.
Y
So what do we actually need here? So, based on what I can observe of separate research efforts, both Patrick Meenan and Robin Marx and others have kind of come around to the same conclusion of what we want, at least for standard web page loading. So the optimal ordering, according to Patrick, is to serialize the CSS and blocking JavaScript; I will just let you read it, it's probably...
Y
It depends on your view of adoption of HTTP/2 priorities, but full adoption is something in the range of 25 percent, according to one study. You know, I think partial adoption is certainly a bit better, but it's certainly not ubiquitous on either the client or the server side, and most of the clients that you saw at the beginning:
Y
they started using priorities on the day that some web developer decided he had some good ideas, and then, like, he did a test, and maybe it was better in some circumstance, and then it hasn't been touched in, like, four years. Great. I mean, this is not being actively worked on and improved, from what I can observe. So, I mean, I don't think we're seeing an increase in H2 priority adoption at this stage, or dramatically changing usage patterns, from what I can establish. So, I have two minutes? That's awesome. Next slide.
Y
As Patrick Meenan pointed out in his blog post very nicely, it would be awesome to allow server input; sometimes the server simply knows more than the browser, and with the existing tree model it is extremely difficult to achieve this. As we already discussed, you have to put everything in this certain order, and you have to put everything on the header stream; otherwise you have the possibility of losing synchronization if you try to achieve that with a back-end and a client.
Y
My original suggestion was that we try to move forward with that, but for both HTTP/2 and for HTTP/3. Subsequently, others have suggested that they really don't want to go that way at this stage in the game, and that the optimal option is to remove priorities from the draft entirely. I am happy with that option; Martin Thomson has prepared a PR for that option.
Y
From a procedural perspective, I think that's a more expedient way forward, because it does not block the standardization of HTTP/3 on figuring out optimal priorities, which are really, I would say, orthogonal issues. I mean, I don't think there's a requirement that one be blocked on the other; and, as we've already shown through the issues that I've posted here, there are substantial issues with just kind of fixing H2 priorities and moving them onto H3 that are already creating substantial challenges.
A
All right, so I'm going to miraculously transform: you now see before you a QUIC working group chair. From that perspective, what we're looking for is input from this community about how HTTP/3 should address priorities. One of the big concerns in that work is that the more delta we have from HTTP/2 semantics in HTTP/3, the more friction it could create for adoption of the new protocol; and, you know, this working group owns HTTP semantics, and so that's why we're here: to have this discussion. So, right, here we are.
G
Trying to go as fast as possible, so: I want to say that it doesn't really matter what the spec says; what matters is what implementations do, and, from that perspective, H2 priorities, as cool as they seemed at the time, seem like a failure, right? So let's figure out what the next thing is and move on. I would also warn people, however: a lack of priorities is really bad. We know that from what we did in SPDY; in the very initial things, we had to have some priorities.