From YouTube: HTTP WG Interim Meeting, 2020-05-19
A: I'll start with what's on the screen, which is the Note Well. If you're not familiar with this, it is the set of policies under which we meet at the IETF, even virtually, so if you're not familiar with it, please do take the time to get acquainted with it. It covers things like intellectual property rights, appeals procedures, the code of conduct, copyrights, and so forth, and you can find it at the URL at the top of the page or by searching for "IETF Note Well".
A: Today, in this week's meeting, we're going to have a fairly substantial discussion of prioritization led by Lucas, then we're going to have about 10 minutes for the client cert header proposal from Brian, and finally about 15 minutes on the use of the resource. Do we have any agenda bash?
B: So hello, everyone. This is possibly my first HTTP interim meeting, so it's really exciting. I'm going to talk about the extensible priorities draft, which was adopted following a call for adoption that we ran just after Singapore. This document has been active in the working group for about six months now; we brought it in with some known issues that we've been trying to talk through. So if you go on to the next slide, please, I just want to give a brief refresher for everyone.
B: So the priority signal looks something like "u=3, i". Ultimately, this is a dictionary of parameters, and the idea is that in future people could define some extension parameters, and we have some use cases already, but the core extensible priorities document is focused on these two things which, based on the design team's work in the past, we felt gives us enough capability to provide prioritization of resources and responses that can suit the web browsing use case and some others.
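As a concrete illustration of the two core parameters mentioned above, here is a minimal sketch of parsing a Priority field value such as "u=3, i". This is a simplification for illustration only, not a full Structured Fields dictionary parser, and the default urgency of 3 reflects the change discussed later in this meeting.

```python
def parse_priority(value, default_urgency=3):
    """Extract the urgency (u) and incremental (i) members from a
    Priority field value. Unknown members are ignored, and out-of-range
    urgency values fall back to the default."""
    urgency, incremental = default_urgency, False
    for member in value.split(","):
        member = member.strip()
        if not member:
            continue
        if member == "i" or member.startswith("i="):
            # A bare "i" means incremental=true; "i=?1" is the explicit form.
            incremental = member == "i" or member.endswith("?1")
        elif member.startswith("u="):
            u = int(member[2:])
            if 0 <= u <= 7:  # urgency is limited to the eight levels 0..7
                urgency = u
    return urgency, incremental
```

A real implementation would lean on an RFC 8941 Structured Fields parser rather than string splitting.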
B: The rough shape of the draft has been around for a while, so people have been thinking about how they might want to use it even before the adoption. What I can say, based on my understanding, which may be slightly wrong or out of date (I did try to do a very loose survey of things), is that people have been interested and have actually started some work to implement the extensible priorities scheme in Chrome, in quiche, in H2O, and in nghttp2/nghttp3.
B: The implementation I'm responsible for, quiche, now has work in progress for this, and I believe others similarly have some work-in-progress code, so this is good. I've got some commentary on how we might want to look at interop, but it's good that people are at least thinking about, or indeed have done, implementations following the strong support that we had in the adoption. Next slide, please.
B: I just want to give a flavor of the impact of this scheme for the quiche implementation that I mentioned we're responsible for. Obviously, a lot of this comes down to how a server would schedule resources anyway, regardless of what the priority signal is, which is what extensible priorities focuses on. It also gives some guidance on how to take those signals and what to do with them, but very little, because ultimately the priority signal is a hint.
B: What I wanted to illustrate here is what, by thinking through how to incorporate this scheme and rewriting our scheduler, we would see without any extra work. What we can do today: for five concurrent transfers of five megabytes, all with the same equivalent urgency, a server that doesn't understand the signals just ignores them and sends all of those resources in a round-robin fashion, and so on.
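The contrast described above can be sketched as a toy model: a server that ignores the signal interleaves chunks of every response round-robin, while an urgency-aware server drains the most urgent responses first. Stream names and sizes here are illustrative, not from the draft.

```python
from collections import deque

def round_robin(streams, chunk=1):
    """Emit one chunk per response in turn, ignoring any priority signal."""
    order, q = [], deque(streams.items())
    while q:
        name, remaining = q.popleft()
        order.append(name)
        if remaining - chunk > 0:
            q.append((name, remaining - chunk))
    return order

def urgency_first(streams, urgencies):
    """Drain responses in urgency order (lower urgency value = more urgent)."""
    order = []
    for name, remaining in sorted(streams.items(),
                                  key=lambda kv: urgencies[kv[0]]):
        order.extend([name] * remaining)
    return order
```

With two 2-chunk responses where "b" is urgent (u=0) and "a" is background (u=7), the first scheduler yields a, b, a, b while the second sends all of b before a.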
B: We have some server implementations that always did some kind of scheduling. Robin Marx and others have done some nice surveys using different clients and probing different implementations to see what they do and what schemes they use. They've reported some of this back to people who are doing, say, last-in-first-out, telling them that that's not ideal. These two things are kind of independent: you can fix your scheduler without necessarily having to implement the extensible priorities scheme wholesale, especially things like reprioritization.
B: You might be able to come up with something a bit more performant that acts similarly with some tweaks, which I think is a good thing. So the question I have is: are they consuming the extensible priorities yet? Some interop activity could help test and measure this stuff. Some of us have talked about this here and there, but nothing so far has been written down anywhere. I did wonder if defining some test cases to exercise some basic core functionality would let us show that.
B: The example I just gave, first-in-first-out, is pretty obvious. But what about cases where we have weighting of resources that were requested later in the connection? Can you prove that those can preempt earlier requests at a lower priority? Those kinds of things.
B: Although there are not that many levels, and there's not a lot of granularity there, some implementers actually felt restricted by assigning meaning to those levels. And so, through the course of the discussion, we decided that it would actually be okay if we just had the same range, eight levels of urgency, and said that a client can use those.
B: A client can use them however it would like, with the expectation that the server transmits in order from the lowest urgency value to the highest. During the course of that discussion we kind of agreed that the default should change from one to three. Unfortunately, that change didn't make it into draft 00; that was just a clerical error, so it has since been fixed in the editors' copy and I anticipate it will make it into the next draft.
B: But we still have a special call-out for the largest urgency level, seven, which is background. We just say that if you're using something that's kind of interactive, like browsing a webpage, you probably don't want to request things at the background level, because it's not going to end up performing in a way that you would like.
B: Okay, next slide. Thanks for the prompt. The remaining issue, one rather large issue that we would like some time from the group on, is the discussion about headers versus frames, and this comes down to how to signal the initial priority. There's an issue open for this; it doesn't have a title about headers versus frames, but ultimately that seems to be where most of the discussion is now being concentrated.
B: Given we have the ability to send a frame already, and that's in the spec, we could possibly just send that before a request, rather than having to wait for a request to have been sent, with or without a priority header first, and then seek to update its priority. In practice, what we've seen is that Chrome is already doing this, in spite of the text that says it shouldn't.
B: In practice, Chrome's behavior had to be accommodated by HTTP/3 servers, because there are no ordering guarantees between the control stream and the request streams. So if there's some reordering, say, or a server application doesn't read the data out of the streams in the order that it was necessarily strictly received, there's a possibility that the reprioritization signal is received before the thing it applies to, and there was already some guidance for this about buffering and how you would accommodate that.
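The buffering guidance mentioned above can be sketched as follows: a PRIORITY_UPDATE may arrive on the control stream before the request stream it refers to exists, so the implementation parks the signal and applies it once the stream opens. The class and method names are invented here, and the rule that a buffered frame beats the header field is one plausible choice; which signal wins is exactly the kind of guidance under discussion.

```python
class PriorityState:
    """Toy model of priority bookkeeping on the server side."""

    def __init__(self):
        self.buffered = {}   # stream_id -> priority seen before the stream
        self.streams = {}    # stream_id -> current priority

    def on_priority_update(self, stream_id, value):
        if stream_id in self.streams:
            self.streams[stream_id] = value   # normal (re)prioritization
        else:
            self.buffered[stream_id] = value  # arrived before the request

    def on_stream_open(self, stream_id, header_value=None):
        # Assumed rule: a buffered frame takes precedence over the header
        # field, treating it as the more recent client signal.
        self.streams[stream_id] = self.buffered.pop(stream_id, header_value)
```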
B: That is a good clarification point, and I'm sorry, I did gloss over that. I haven't got slides that speak to that point. It is a good question, and we can discuss it during this meeting. It doesn't necessarily help us overcome the headers versus frames issue, but it is related: if we decide we don't need reprioritization, that possibly tips things in the balance of not having frames at all, which I think is the point you're getting at.
E: Yeah, so I was going to add a clarification point that, to my knowledge, the thing that Chrome is doing today does not violate any normative text in the existing documents, due to, as Lucas alluded to, HTTP/3 having no ordering guarantees; these things can arrive at any time.
E: So you sort of have to support the frame arriving before the request anyway, and the choice to do that was partially just that it matches the existing semantics of what Chrome was already doing, which was using a frame. To speak to Martin's comment, the design team came back with the decision that reprioritization was important because it was being used in existing applications, including web browsers, and also because there seemed to be cases where it was compelling.
H: Yeah, I think this is a little complicated; there's a number of things that would be helpful here, and I don't think any one of them would be uniquely definitive on this point. But I would say we'd want evidence that this has some material improvement for some class of application, and that servers would be willing to implement it, in the sense that it's not just Google servers implementing it but servers more generally, and that it works when a random person implements it.
B: I see the point here, and I think the comment I can make is that the outcome of that decision will impact this headers versus frames debate, which is why it is important. I don't know if I have the answer, or if we can come up with an answer today, or if we need further input from the list.
I: From our experience, we have implemented the priority update as a frame. It was a bit of a pain to do the buffering, I mean buffering the priority update frame that arrives before the stream. But that's something we've done in HTTP/2 priorities as well, because we have idle streams being prioritized, so it's nothing more complicated than HTTP/2, and other aspects of the new prioritization scheme are much simpler. So I'd say that it's still much simpler than HTTP/2.
B: Okay, so I don't know if we're going to be able to answer that question. Maybe if we just continue progressing through the slides, bearing in mind that some of the outcome of the proposal is predicated on some other discussion that might need to happen, if people are okay with that. Okay, next slide.
B: A kind of headline summary of what it does: it gives more formality that PRIORITY_UPDATE is allowed for an initial priority. It adds some more clarification on which endpoint might be able to send this thing and which might not, these kinds of things. The name is a little bit odd if you're going to use it as an initial priority, but I don't think we need to bikeshed that on this call, especially if we aren't convinced we need it anyway, so I'll move on from that one.
B: What it says is clients can send a PRIORITY_UPDATE; they now have two ways of sending an initial priority. They may emit a header field and then send a PRIORITY_UPDATE frame, or they can send the PRIORITY_UPDATE frame first and then a header field. This isn't really much different from a reprioritization event, and we already have some text that describes, as Kazuho just mentioned, the buffering and which of those signals you probably want to give precedence, in the last bullet point on the slide.
H: Yeah, just a quick one: how do you account for the resource allocation here? Obviously, you can get one of these things that refers to something that the other side couldn't create, and there are some layering issues there in terms of knowing whether a particular stream is allowed or not, or looking at the push IDs and what's allowed there. Have you considered that here?
B: That was my intention; there may well be gaps, but compared to the current text in draft 00 there are a few to-dos to state what to do if you got a reprioritization or PRIORITY_UPDATE frame for a push ID that was beyond the MAX_PUSH_ID limit. Some of that is directly visible to the application, like you've just said; other things, like if you've got a priority update for a request stream beyond the MAX_STREAMS or maximum bidirectional stream limits that had been advertised.
B: Those were also addressed, as far as I'm aware. If there are errors in that, I'd like to get it tidied up, but based on my implementation experience there isn't a layering violation here, because the application is in control of the stream limits that it effectively asks the library to manage on its behalf, and we have a way to query things, I believe.
H: No, my concern here was about the case where you effectively have a stream budget that you've told the transport you've got, and so you don't necessarily know where the transport is with that, and you're expected to deal with one of these frames and know whether it identifies a stream that you've allowed the other side to have. That now requires crossing the boundary into the transport to know whether the transport has permitted that particular stream ID when that frame comes in. That's all.
B: It's an acute observation. The PRIORITY_UPDATE frame was based on the old HTTP/3 PRIORITY frame, and even the language of the prioritized element ID referred to a stream ID for requests, so I would say that the layering violation was always there. I didn't implement the old priority scheme, so I can't say how hard it was. I believe some people on the call did implement the old priority scheme; maybe they can speak to that.
D: This is Tommy, just inserting myself as an individual. I'd echo what Martin was saying: we shouldn't assume that the relationship between the application and the transport is so directly managed as far as what the max stream ID currently is. While we may allow an application to be aware of that, there can definitely be modes in which it's more just "here's a window threshold of what I'm roughly able to have", and the transport may be managing and moving this without direct interaction from the application.
B: Next slide, please. I'm aware of time, so I'll try and pick up my pace a little bit. The desired outcome that I had from this meeting was to say: we put this proposal out there on April the 30th, and we've had some general feedback that's been supportive. I'm hearing maybe slightly different things in this meeting, which is OK. It felt like, when I wrote these slides, that we could carry on incorporating some improvements and we'd be ready to merge and release draft 01.
B: At some point, some remaining things to highlight and discuss that I would have liked to get through in this meeting are: there have been some frame format changes for HTTP/3, how we do some versioning across the priorities draft, and also a question to the working group about diagramming of HTTP/2 and HTTP/3 frames when they appear in the same document. Those two discussion points are what's coming up next, so if we go on to the next slide, please.
B: There was a separate issue from the one around headers versus frames, which was to consider using HTTP/3 frame types rather than a bit field. At the bottom here we see the old layout of the frame, which dedicated eight bits, of which only one was used, to distinguish between prioritizing a request or a push ID. As mentioned earlier, this emulates the old PRIORITY frame and how it would perform this disambiguation.
B: Although it's a waste of a byte, that isn't necessarily much overhead. But the work that's happened in stuff like the DATAGRAM frame has given a precedent for using frame types to distinguish between frames that semantically do a similar thing but require slightly different formats when they're serialized to the wire. So this is what the old frame looked like: type 0xF, with the bit field. Next slide, please, for the new frame.
B: The proposal is to remove that bit field, keep the ID and the value fields within it, but change the frame type. The first frame type here, which I won't read out, applies to requests, and the second frame type applies to pushes. I'll explain why those frame types are as they are in a moment. Since draft 00, the QUIC documents have landed changes to modify the language of their diagrams, so with this change we adopt a similar language here: rather than ASCII diagrams, we've got more of a textual notation.
B: That includes the full frame layout, not just the frame payload. It includes a definition of the type, the allowed ranges (which are the two values shown above), the length of the frame, and then the two payload fields: the prioritized element ID and the priority field value. Next slide, please.
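The layout just described (a variable-length-integer frame type, a length, then the prioritized element ID and the priority field value) can be sketched as follows. The varint encoding is QUIC's standard one; the frame type passed in is a placeholder, not a real codepoint from the draft.

```python
def encode_varint(v):
    """QUIC variable-length integer encoding (RFC 9000, section 16):
    the two high bits of the first byte select a 1/2/4/8 byte length."""
    for bits, prefix in ((6, 0x00), (14, 0x40), (30, 0x80), (62, 0xC0)):
        if v < (1 << bits):
            length = 2 ** (prefix >> 6)
            out = v.to_bytes(length, "big")
            return bytes([out[0] | prefix]) + out[1:]
    raise ValueError("value too large for a varint")

def encode_priority_update(frame_type, element_id, field_value):
    """Serialize a PRIORITY_UPDATE-style frame: type, length, then the
    prioritized element ID followed by the priority field value."""
    payload = encode_varint(element_id) + field_value.encode("ascii")
    return encode_varint(frame_type) + encode_varint(len(payload)) + payload
```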
B: So the question some of you might be asking is why change the frame type from 0xF to that horrible thing. The justification for this, in my mind, is that this is a breaking change in the frame format. With the priorities draft we don't have any way to signal what version of the priorities draft we're using at any point in time, and therefore there's a risk that the endpoints generate and parse the frame with different expectations.
B: Without that bit field in there, you're going to start parsing things differently, and there's a danger here, as I see it, of a parsing error in the frame, which must be treated as a connection error. Given that these frames are anticipated to be sent often, that could end up with some bad stuff. We might be able to get over this with some early interop and fix things, but longer term I was trying to mitigate things.
B: The downside to that approach, as I see it, is that we may need to iterate the priorities draft faster than we can iterate HTTP/3, especially now it's late in its process, and if we need to make any further breaking changes, or changes that affect stuff in some weird way, we'd end up painting ourselves into a corner straight away. So a different option, which is the text in this proposal, is to pick different types for each priorities draft.
B: That way it's very clear, when you see that type on the wire, which priorities draft is being communicated, and then once we're ready to actually finalize this document we can revert back to the 0xF and 0x1 types, because they're nicer. I'll credit Kazuho with coming up with that proposal as well; I just codified it into the text. I don't know if anyone's got any opinions on which of those they'd like to mention here, or to take it to the issue or the list.
B: I see a comment in the Jabber about an extension that indicates the priorities version. To me, at least, that fundamentally comes back to needing to wait for SETTINGS before you can send stuff, and this whole problem of avoiding delaying requests. And yes, okay, the comment that we might want to use a transport parameter: that's a broader-scoped issue, I think, of codifying HTTP/3 or application-layer specifics into the QUIC transport, which I think is a long-term issue, and I don't think we're going to resolve it in time.
E: I was going to say that I think option two is probably the right option, which is what you wrote up, and I believe it's also what the draft that John and I are working on does, so I would suggest the same approach for that. We'll have a few tries at this approach and see how it works, but I think it's certainly workable and pretty straightforward. So thank you.
H: Martin here. I see an upside in that I think implementations will be far more tolerant of just random junk that we send them if we send them junk with a purpose, even if they don't understand it. So I think this is a good general strategy for working on experiments. I think the odds of collision here are astronomically small, so carry on.
B: Those resolve themselves, and I've got a slide for that, so let's move on to it, just so I can give some more context; ultimately, if there's nothing else, that sounds great. For the HTTP/2 frame in this draft there is no functional change. All that's happened is that, for consistency, I've updated the diagram, and so what we have is the full h2 frame layout: the length of it, the type. You'll notice this type is 0xF.
B: This is partly because the type is only 8 bits, so we don't have as much space, and I'm concerned about wasting precious space on experiments. I also don't necessarily see as much experimental work happening in the h2 frame department, but again, if people disagree, I'm happy to consider that. This is just an editorial decision, a strawman proposal put out there for some feedback.
B: This is the full frame definition, and the only things we really care about for the PRIORITY_UPDATE frame are the fields after the common header: the prioritized stream ID and the priority field value. There's already been some discussion in the community, on the ticket, about this. Next slide, please.
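The HTTP/2 layout just described can be sketched by composing the standard 9-byte frame header (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID) with the two payload fields. The 0xF type mirrors the slide and is provisional; treat the exact codepoint as an assumption.

```python
import struct

def h2_priority_update(prioritized_stream_id, field_value, frame_type=0x0F):
    """Build an HTTP/2 PRIORITY_UPDATE-style frame: the common 9-byte
    header, then the prioritized stream ID and the priority field value."""
    payload = struct.pack(">I", prioritized_stream_id & 0x7FFFFFFF)
    payload += field_value.encode("ascii")
    header = struct.pack(">I", len(payload))[1:]   # 24-bit length
    header += bytes([frame_type, 0x00])            # type, no flags
    header += struct.pack(">I", 0)                 # carried on stream 0
    return header + payload
```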
B: h2 continues to use those ASCII diagrams, but h3 dropped that in favor of the QUIC-style formatting, and so there might just be some element of surprise for people coming at this trying to implement one version of extensible priorities. As Mike says, this is an editorial thing, but having had some discussion between Kazuho, myself, and Mike, we wondered if this is more a question for the HTTP working group community as a whole, rather than specific to this issue.
B: So yeah, the overall shape of the draft is: we've closed out one of the remaining major issues, and we've got this headers versus frames issue, which may be dependent on whether we think reprioritization is needed, and I'll probably need to go away and try to answer that question at the same time.
B: But if we park those things momentarily and look at the other issues that are open on the draft, most of them are just kind of minor clarifications and improvements, which is great. I think the only one that is kind of substantial is around server push and what the default priority of a push is. I put this as "if time permits", and I think we're probably out of time, so yeah.
B: There is an issue for this, and people can put some commentary on it or on the mailing list. I think there's some discussion to be had; ultimately it's "does anyone care about server push and prioritization?" There's some really good discussion between Tom Bergan and Mike Bishop about the different merits of how to default-prioritize something, and whether it matters; if you can reprioritize a push quickly, then you get into issues around round-trip times and all sorts of stuff, so I think there's some good background reading.
K: I had to drop off and rejoin, so apologies. Okay, the Client-Cert HTTP header draft. This is an individual draft discussing conveying client certificate information from mutually authenticated TLS connections, from TLS-terminating reverse proxies back to origin server applications. There's been a little bit of discussion on the list, and I'm here to talk about where it's at and where this working group may or may not want to go with it. So thanks for your time, and Mark, if you give me the next slide, please.
K: So, just a little bit of context on the motivation behind this. Basically, HTTP application deployments are very often deployed in such a way that the TLS connection from the client is terminated by a reverse proxy sitting somewhere in front of the actual HTTP application backend. That doesn't necessarily mean that there's no HTTPS between the backend components, but the initial connection that the client sees to the server is terminated by this front-end component.
K: You see this in all kinds of things: old-fashioned reverse proxy and origin server deployments, more and more now as CDN-as-a-service type offerings or other load-balancing type services, and even ingress controllers sometimes do this with microservice-type architectures. In that world, TLS client certificate authentication is sometimes used; it's not super prevalent, but it is used occasionally, and in these cases the actual back-end application often needs or wants to know something about the client certificate.
K: I just wanted to give some context: I'm here in this working group by way of a conversation that started off in the OAuth working group around a related draft using mutual TLS authentication, with people bemoaning the difficulties of getting it to work with different components and different types of software. That in turn led to a draft that was ultimately moved into dispatch to be discussed, and that was more or less dispatched to work here.
K: The draft currently is a simple proposal that ideally could enable turnkey, interoperable integration between independently developed and deployed components. It's pretty straightforward. The client makes a normal, mutually authenticated TLS connection, and an HTTP request is sent over that. The reverse proxy component verifies the certificate in the presentation, and then it sanitizes a specific HTTP header on each request; once that's done, it passes the leaf client certificate as a new header, with a defined name and encoding, to the origin server on the backend. The origin server can then do what it needs to do with information and context from the client certificate itself. The idea is that by passing the client certificate you're relying on the reverse proxy to do certificate validation and authentication, but providing contextual information about the client, including the whole certificate, to the origin server, which can do what it needs with it, whether that's customizing content based on the content of the certificate, making more granular application-level authorization policy decisions, or whatever it might be.
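The sanitize-then-forward flow just described can be sketched as follows. The "Client-Cert" header name follows the general shape of the individual draft, but the exact encoding (base64 of the DER certificate here) is an assumption for illustration; the draft's actual encoding details may differ.

```python
import base64

def forward_headers(inbound_headers, client_cert_der):
    """Prepare headers for the backend hop: strip any inbound Client-Cert
    value (sanitization), then attach the certificate the proxy itself
    validated during the TLS handshake, if there was one."""
    headers = {k: v for k, v in inbound_headers.items()
               if k.lower() != "client-cert"}
    if client_cert_der is not None:
        headers["Client-Cert"] = base64.b64encode(client_cert_der).decode("ascii")
    return headers
```

Stripping unconditionally, even when no client certificate was presented, is what prevents a client from spoofing the header straight through the proxy.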
K: The basic idea is that we take that client certificate (I feel like I'm repeating myself, sorry) and pass it as an encoded header to the backend application in a standardized way, which would allow for more ease of interop between reverse proxies and origin servers that are independently developed. So with that, next slide, please, Mark. Some things to consider: the reason this is being brought to the working group is the question of whether there's interest in working group adoption of this document.
K: I know a number of individuals, on behalf of themselves and sometimes on behalf of their employers, have expressed interest in the concept as a whole, and this working group seems like it's likely the best forum to proceed with work on a document like this, if there's sufficient interest. But there's been a number of, I guess, more substantial issues raised on the individual draft, and I don't know if these are necessarily prerequisites to considering adoption, or whether we consider adoption first and then sort of dive into the particular issues, but I listed them here for the sake of conversation. The main issue that's come up, I think, is getting to an appropriate mechanism to prevent header injection.
K: The current draft requires that the reverse proxy always sanitize the header, and by that I mean it would overwrite or remove the Client-Cert header from all inbound connections. This presumes that the reverse proxy actually does that effectively, and that there's not some other way to send requests directly to the backend such that a client could spoof the Client-Cert header, sending something under its own control directly to the back end and fooling it into believing that client certificate authentication happened at the front end. Currently, as I said, the draft is working off of sanitization.
K: There's been a number of folks who have expressed concern that that's either an insufficient security mechanism or one that's easy to get wrong, as well as something that sort of fails unsafe: there's no obvious failure mode when it's not being done correctly, so it looks like it's working fine, but the vulnerability may exist and isn't easily identified. There's a lot of different ways this could be approached, so I guess there's a larger question of whether this is sufficient.
K: Or whether we need to do something more; and if something more is desired, there's a whole litany of different ways it could be approached, so I'm not going to go into that now. But amongst that there's a question about applicability. This idea of passing meta-information from a reverse proxy to a back-end is not unique to this draft, so the sanitization or protection of these kinds of headers, and ensuring their integrity, is not unique to this draft either. I'd be a bit loath to define a one-off solution beyond sanitization in the scope of this draft alone, and doing something larger is certainly beyond the scope of the draft. So there's a bit of a question about how to address that, if in fact something more is desired; that's probably the biggest open issue being discussed right now. There are also some other questions regarding the sufficiency of just passing the whole end-entity certificate.
K: What the draft currently does is pass the client end-entity certificate in its entirety, because there are different use cases with different needs for the content: various bits of content from the certificate, the subject DN, various SAN entries; some approaches use the entire certificate itself or various other parts of it. Just passing the whole thing seemed to be a nice way to accommodate everything, but it's potentially large in some cases.
L: Okay, Ekr here. Yeah, so I mean, I think there's been a bunch of back and forth about how fancy this mechanism should be. I don't want to discuss that now, but I think it'd be useful to understand how much appetite there is, among the people who want this mechanism, for more fanciness. Because if the answer is no, but there are people who think they need more fanciness, then probably the answer is "why don't you go do that some other way"; and if the answer is yes, then we should think about adopting it.
K: There's a question in my mind of how much appetite there is for doing something more, and whether or not that means it's necessary in order to move this forward. That's the largest question I have. I don't want to over-engineer something that will then become un-useful, but I'm sensitive to desires for having something more there, and would want to try to get to a broader understanding of what the consensus, or rough consensus, really is.
K: What I've been struggling with, as I tried to say earlier, is that it's a little bit unclear to me which needs to come first. The discussion has been ongoing a lot, and I certainly can have some more of it, but sometimes it feels a little fruitless to continue sort of thrashing on the same issue on an individual draft.
A
A
F
F
The main reason that I submitted the draft a couple of months back (this is an informational draft) is that I didn't really feel there would be a broad use case or a broad need for adoption of it, but I wanted to write something down, because I felt other people would probably face similar issues and might want to tackle them in a standardized way, rather than us ending up down the road with lots of different approaches to this type of problem.
F
We have errors that arise due to functional issues in the product itself, in the engine and the hosting platform. Hopefully those are rare, and if they occur we can respond to them quickly, triage them, and get them fixed. On the other hand, customers are going to be writing SQL themselves; they're going to be doing it iteratively; they're going to make mistakes.
F
F
F
To many users, an error means that the system that they're using is broken. They don't really do too much thinking about why that is. Is it due to something I did myself, or is it due to the person that's operating the platform? And so they file a support case. For people dealing with lots of different products, they don't necessarily know the ins and outs of any particular product. They look at it, they see a 500 Internal Server Error, and in their experience that usually means that there's something wrong with the platform.
F
Rather than the developer seeing that it's something they should look at. And you can see how everybody ends up wasting time and money trying to resolve the issue. So that's what we're thinking: something that sends a strong signal to the developer of the resource that maybe there's something wrong with their script that they need to take another look at. It might save everybody a bit of time. So, next slide.
F
First of all, what we did was change the reason phrase on the HTTP status to say "User Defined Resource Error" with a 500 status, and that helped to some degree. We also put a message in the response body saying, you know, this request failed because there was an error evaluating the script: something to try and clue the developer in that there's something wrong with their script. And we put a header on the response, an error-reason header, that has an encoded error message, the error message from the database.
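As a concrete illustration of the three changes just described, here is a minimal sketch in Python. The 500 reason phrase matches the talk; the exact header name, the percent-encoding choice, and the body wording are assumptions on my part, not from any spec:

```python
from urllib.parse import quote

def build_error_response(db_error):
    """Sketch of the described response shape: a 500 status with a
    custom reason phrase, a hint in the body, and the proprietary
    error-reason header carrying the database's message."""
    status_line = "HTTP/1.1 500 User Defined Resource Error"
    headers = {
        # Encode the message so delimiter characters from the database
        # cannot break the header field syntax (encoding is assumed).
        "error-reason": quote(db_error, safe=""),
        "content-type": "text/plain",
    }
    body = ("This request failed because there was an error "
            "evaluating the script.")
    return status_line, headers, body
```

A caller would pass the raw engine message, e.g. `build_error_response('near "SELEC": syntax error')`.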
F
We have to escape it, because it might have its own delimiter characters in it that conflict with the header syntax. And there are some problems with this approach. Access logs and automated monitoring tools are generally just looking at the status code; they're not really looking at the reason phrase. It's not shown in the access log, and uptime monitoring tools and things like that will only be looking at the status code. They don't care what the reason phrase is, so that signal that we're trying to communicate gets lost. And you have things like: this could be a multi-tiered application with a custom error page in front of this 500 status, so the error page is changed to show just some generic text about something going wrong on the server, and our reason phrase is completely lost. And the error-reason header is proprietary to us; nobody knows about it, and without special handling, clients just ignore it. So, okay.
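The escaping step mentioned above can be as simple as percent-encoding; a small sketch (the sample database message is invented for illustration):

```python
from urllib.parse import quote, unquote

raw = 'ERROR: unterminated quoted string at or near "\'"\nLINE 1: ...'
# Raw newlines and quote characters would break the header field-value
# grammar, so percent-encode everything outside the unreserved set.
encoded = quote(raw, safe="")
assert "\n" not in encoded and '"' not in encoded
# The receiver can reverse the encoding losslessly.
assert unquote(encoded) == raw
```

The trade-off the speaker notes still applies: the encoded header survives transport, but nothing that only watches status codes will ever see it.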
F
Could we try and standardize a reason header? Well, there's already the HTTP problem details syntax RFC, and maybe we can try and extend something there, but we still feel that losing that strong signal in the access log and in the monitoring tools is going to limit its effectiveness. So, next slide.
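For reference, the problem details format alluded to here (RFC 7807) could carry the same information in the response body. In this sketch the type URI and the extension member name are made up, not registered anywhere:

```python
import json

problem = {
    "type": "https://example.com/probs/user-defined-resource-error",
    "title": "User Defined Resource Error",
    "status": 500,
    "detail": "There was an error evaluating the user-defined script.",
    # RFC 7807 permits extension members; this name is hypothetical.
    "db-error": 'near "SELEC": syntax error',
}
# Such a body would be served with Content-Type: application/problem+json.
body = json.dumps(problem)
```

This addresses the "proprietary header" objection, but, as the speaker says, not the loss of signal in status-code-only tooling.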
F
F
They'd find out pretty quickly that it means there's probably something wrong in their script, and they can look at that as the first cause, rather than assuming that the platform has a problem and that they need to contact support. So hopefully all of that would lead to saving time and money. And that's the proposal. I submitted a draft, an informational draft; it's linked on the agenda, and I guess that's what I would like to see discussed.
G
H
Knowing that the server is operated by multiple entities is not something that clients typically care about when it comes to getting an error. The spec sort of says, well, 500 is reserved for the operator of the server versus the thing that runs on the server, but that's all internal to the server. And it seems like this case is sort of externalizing some of that internal structuring of the server so that the client sees it, and I don't see a whole lot of value to a client in having this signal at all.
F
F
I've wondered about this myself, and this is the only way that I can see that you can avoid having to do it. It's partly because there's kind of a new model starting to evolve where the operator of a service isn't necessarily the person who writes the service. Traditionally, up until now, in one shape or form the person who writes the APIs, or whatever the resources are in your web service, is the person that's operating the server as well, to a large degree. Now we're starting to move into a world with things like these edge computing facilities and platforms as a service, where there's a fairly clear line, obvious from the server side, between the operator of the service and the author of these user-defined resources. It makes absolutely no difference to the client, right?
F
They shouldn't care, right? And the spec says that if you don't know what a 5xx status code is, you treat it as a 500, exactly, not something else. So I would say that clients should treat this code, and any error code in the 500 range, exactly the same.
H
So a question here, though, that I think might help illuminate my point a little bit more clearly: say you have someone who's operating in a cloud service, so there's an infrastructure provider, and they're running something in a VM somewhere, or in some sort of container, on that cloud service, and they have contracted with a CDN. There are now two infrastructure providers involved.
D
M
Hello, yes, okay. So I think, if I understand correctly, this would for instance happen if the user-defined resource were defined with invalid SQL statements? So did you consider whether it would be possible to detect the brokenness of the user-defined resource at the time when the user-defined resource is defined? I mean, these are modified using HTTP, right? So it's potentially possible to actually reject creation of these user-defined resources if they are broken in some way, so that end users would not actually see them. Yeah.
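The validate-at-definition-time idea can be sketched like this, using SQLite's planner as a stand-in for whatever engine the platform actually runs (the function name and the suggestion of a 400 response are mine, not from the discussion):

```python
import sqlite3

def sql_error(statement):
    """Return the engine's error message if the statement fails to
    compile, else None.  EXPLAIN compiles the statement without
    executing it, so syntax errors (and missing tables) surface at
    definition time rather than on an end user's request."""
    try:
        sqlite3.connect(":memory:").execute("EXPLAIN " + statement)
        return None
    except sqlite3.Error as exc:
        return str(exc)
```

A definition endpoint could then answer the PUT with a 400 carrying this message instead of storing the broken resource, though data-dependent runtime errors would still slip through.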
F
A
I agree with Martin. You know, we have very similar problems with intermediaries, where it's not clear who generates the message. And that brings up another issue for me, which is that you could say this is not only about a 500 error: maybe the infrastructure generated a redirect, or maybe the user code did, or any other status code, and so you'll have this multiplication of status codes.
A
If you follow this to its conclusion, and that's what concerns me, it seems orthogonal to the core semantics of what a status code is. And so, to me, I think it'd be much more successful if we defined a header to convey this information. I think it could be complementary to the problem details stuff, maybe.
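Purely as an illustration of the header idea, here is a hypothetical field naming which party produced the status; the field name and its values are invented here and do not come from any draft:

```python
# Hypothetical "error-origin" field; values name the producing party.
ALLOWED_ORIGINS = {"platform", "user-code", "gateway"}

def with_error_origin(headers, origin):
    """Return a copy of the response header map with the hypothetical
    error-origin field attached, leaving the status code untouched."""
    if origin not in ALLOWED_ORIGINS:
        raise ValueError("unknown origin: " + origin)
    out = dict(headers)
    out["error-origin"] = origin
    return out
```

For example, `with_error_origin({"content-type": "text/plain"}, "user-code")` keeps the generic 500 semantics while letting log-aware tooling attribute the failure.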
K
A
F
And what about the fact that headers typically don't get captured in access logs and by intermediaries to the same degree as the status code is? I know they can be, but...
G
C
Real quick, I have two points. The first is that I do think there is some prior art. Noting Martin and Marc's objections, there is prior art in the HTTP 500 range with 502: in this multi-CDN case you can absolutely have multiple gateways, any of which could be generating your 502, and likely cascading a 502, so I think the same problem applies. But more broadly, I agree with Marc's more general assessment, which is that this doesn't necessarily strike me as a problem that is best solved in HTTP status code space.
C
This strikes me as a problem for the infrastructure provider to solve through their own monitoring infrastructure. So the question is: why should the entire web bear the cost for any one infrastructure provider to communicate with their users? This is a problem that exists in a lot of other places, AWS as well, for example, and by and large the solution seems to be to provide appropriate monitoring and logging such that users are capable of determining the difference between an infrastructure 500 and one that they generated themselves.
F
And so, to try and sum up what I'm hearing: there's a strong preference, if you were to go down this route, to use a header. Is there any kind of consensus that people would like to have such a header, or is the overall feeling still that, actually, there are probably other ways to address this issue and we don't even need the header in the first place?