From YouTube: HTTPBIS WG Interim Meeting, 2021-02-09
A
Great, press record. Okay, let's get started; hopefully that won't tax my computer too much. So this is the Note Well. If you're not familiar with this, you should be. These are the terms and conditions under which you participate in the IETF, regarding things like intellectual property, harassment, conduct, and so forth. We do take it seriously.
A
If
you
have
any
issues,
please
contact
tommy,
now
your
chairs
or
any
questions,
and
and
if
we
can't
resolve
it
for
you,
we
can
point
you
to
the
right
person
for
that,
whether
it's
the
home,
splits
team
or
the
area
director
or
whoever
else.
A
Oh, it's very agenda-like. Okay, I probably have to unshare the web browser — maybe — there we go. So today we're going to talk just about the active extension drafts: starting with prioritization, then Signatures, then Digest, then the cookie bis, then h2bis, and then finally BCP56bis.
A
That's okay with me. Okay, we have time, thank you. And before I forget, we need folks to fill out the blue sheets, which are also linked here. If you could just go to that page — which does have the correct date in it, and now I'm wondering why I didn't update that one as well — you can add your name and your affiliation, so we have a record of who's attended.
A
I will also take a snapshot of the Zoom list, but it would be really helpful if you can put your name in there as well. Okay.
B
Well, I think we can go ahead and move along. Just first, maybe a quick note before we jump into that: I saw that we now have official RFCs for both Structured Headers and Client Hints. So congratulations to everyone who worked on those, and glad to get those finally stamped.
A
Yes,
it
took
a
little
while,
okay,
so
first
up
lucas,
you
want
to
take
it
away
with
prioritization.
C
No slides for this one; it's a tale of two spec presentations for me today. On the priorities draft, we're at zero open issues. Kazuho and I put draft -03 out in early January and sent an email to the group to advertise that we were closing out those issues, and we haven't really had anything back from that. So I think we're in a good place.
C
I
haven't
heard
much
from
people
who
have
actually
like
tried
priorities
out
and
and
give
any
feedback,
but
given
how
we
simplified
the
scheme
and
that
we've
addressed
the
kind
of
the
tension
with
the
signaling.
I
I
think
we're
at
a
good
place.
You
know
speaking
personally
for
http
3.
We
have
the
ability
to
prioritize
according
to
that
scheme,
but
we're
not
yet
in
a
place
where
that's
rolled
out
for
any
logical
experimentation.
I'm
aware
of
chrome
say
sending
that
frame
but
yeah.
C
I
I
think
we
welcome
input
as
long
as
it's
not
on
any
anything
that
we've
already
covered
and
closed
out.
There
was
quite
a
lot
of
discussion
around
the
summer
autumn
time
trying
to
get
through
some
of
maybe
the
philosophical
side
of
the
prioritization,
and
we've
tried
our
best
to
reflect
that
discussion
into
considerations
for
implementers
without
getting
bogged
down
with
giving
specific
advice
to
you
know,
this
is
how
you
should
do
it
where
we
know
that
probably
won't
work
for
the
certain
kinds
of
implementations
who
might
want
to
consider
this
scheme
so
yeah.
A
So what would you suggest the path forward be? Do you think we're ready for working group last call? Do we want to let implementers play with it a bit more? Where do you see this going?
C
I don't know. I'd like to say: let's go for last call and then see if anything comes back during that process, if we get some more review, and then let's see how it goes. But I'll take the chairs' steer here, because we had priorities go wrong before and I don't want to rush something into the RFC process that we might find isn't right.
C
Okay,
my
my
hope
and
my
belief
is
that
won't
be
the
case
because
it's
simplified,
but
you
know
the
some
of
this
back
porting
to
h2
might
be
a
bit
trickier,
but
h3
is
now
you
know
almost
close
to
done
as
well
and
we're
going
to
see
more
stable,
mature
rollouts
of
of
quick
and
h3.
C
I think it would work for me. I haven't seen anyone saying, in the last year, that we really have to get this done.
B
Okay, yeah, seeing implementation results would be very helpful. It does look like we have Alan in the queue, so he wants to speak up.
C
Yeah, I mean, my experience is effectively channeled through the QUIC interop work that we do, so looking at some of the multiplexing angles of stuff, maybe doing hackathons. It's not, you know, a conclusive demonstration. I think part of the difficulty there is that even the interop tests for h3 are slim compared to the QUIC transport layer, I would say, and so trying to design tests for this is tricky. It's quite a subjective and manual process; I'd happily work with anyone.
C
Thanks — looks like Bence is also in the queue.
F
Hi, my name is Bence Béky. I work for Google. I just wanted to give a quick heads-up about the implementation status in terms of h2 and h3, both for Google servers and for Chrome.
F
So
as
far
as
h3
goes,
we
never
had
any
priority
scheme
implemented
other
than
the
priority
update
and
it's
implemented
and
it
seems
to
be
working.
We
haven't
done
a
lot
of
investigations
about
you,
know,
performance
or
what
is
actually
happening,
but
you
know
at
least
things
don't
crash
and
we
don't
error
out,
which
is
it's
great?
I
don't
have
a
lot
of
feedback
in
terms
of
implementation.
F
It
was
relatively
straightforward,
except
for
the
part,
of
course,
when
you
have
to
store
priorities
for
streams
that
have
not
been
created
yet,
but
it's
it's
not
it's
not.
I
didn't
find
it
to
be
a
big
deal
as
far
as
h2
goes
note
that
our
current
implementation,
with
the
h2
dependency
and
weight
scheme
is
that
we
effectively
at
the
beginning.
F
You
know
when
we
had
speedy
at
the
beginning.
We
had
buckets
just
like
the
priority
update
specification
says
so
it's
it's
nicely.
F
We
are
going
back
to
the
original
scheme
in
a
way
and
when
the
hdb2
dependencies
and
weights
were
rolled
up,
then
what
we
did
first
was
to
encode
the
priority
bucket
number
into
the
weight
and
then
only
consider
the
weight,
which
is
not
the
way
it
was
intended
to
do,
of
course,
and
then,
finally,
we
go
around
to
doing
a
dependency
based
approach,
but
our
dependency
was
a
chain
and
we
essentially
had
things
you
know
order
by
bucket,
and
that
is
what
is.
We
are
what
we
are
doing
right
now
in
chrome.
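To make the bucket-into-weight idea above concrete, here is a minimal sketch of that kind of mapping. The bucket count, the exact encoding, and the function names are illustrative assumptions, not Chrome's actual code; the point is only that a small set of SPDY-style buckets can round-trip through the HTTP/2 weight field and a receiver that looks only at weights can recover the bucket ordering.

```python
# Hypothetical sketch: map priority "buckets" onto the HTTP/2 weight
# field (1..256, higher = more important), then schedule by bucket.
NUM_BUCKETS = 5  # illustrative; 0 = highest priority

def bucket_to_weight(bucket: int) -> int:
    """Encode a bucket index into an HTTP/2 weight."""
    assert 0 <= bucket < NUM_BUCKETS
    # Spread buckets across the weight range so they stay distinguishable.
    return 256 - bucket * (256 // NUM_BUCKETS)

def weight_to_bucket(weight: int) -> int:
    """Recover the bucket from a weight produced above."""
    return (256 - weight) // (256 // NUM_BUCKETS)

def schedule(streams):
    """Order (stream_id, bucket) pairs by bucket, then stream id."""
    return sorted(streams, key=lambda s: (s[1], s[0]))
```

A scheduler built this way behaves like strict bucket ordering even though it only ever reads weights, which matches the "encode the bucket into the weight and then only consider the weight" behavior described above.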
F
Google servers right now are still considering only the weight, so they're still not doing exactly what the h2 spec tells us to do. But this means that when we actually get around to implementing the priority update — the new frame type — then from a functionality standpoint it's not going to be a huge difference, so I do not expect a lot of difference in terms of performance. It's on my plate to implement it in Chrome and also in the server.
F
But
it's
not
it's
not
going
to
happen
very
very
soon,
so
I'm
not
I'm
not
making
a
request
to
hold
on
with
the
last
call
until
we
have
implementation
experience,
but
it's
it's
coming.
We
do
certainly
intend
to
implement
it.
C
Thanks
for
that,
just
just
one
clarification
point.
Last
time
I
checked
it
was
in
an
older
version
of
the
priority
update
frame
that
you
were
using.
I
think,
maybe
one,
and
there
was
a
breaking
change
in
that
frame
format.
So
I
just
wondered
if
you're
up
to
date
with
draft003
or
yes,.
F
For
http
3,
we
are
up
to
date
with
the
new
frame,
type
and
format.
A
All right, any other comments or input on prioritization?
A
I guess Tommy and I will have a chat about the precise process we want to use for that, but we would encourage folks to take another look at it, and if you have any questions or any issues, make sure you raise those now. We might use a working group last call as a mechanism to make sure that that gets done, but we'll have a chat, and then we'll talk to Lucas and Kazuho and figure out the path forward for this draft.
H
Yep, so I should be on here. All right, so yeah, I also don't have any slides, so if you want to pull up the draft, that'd be fine. Not a whole lot of updates.
H
Just
like
well
so
so
to
to
explain
why
my
surprise
may
have
actually
been
warranted.
It's
been
really
difficult
to
get
a
hold
of
our
lead,
editor
annabelle
over
the
last
six
months,
or
so
I
think,
just
her
day.
Job
has
taken
her
in
a
direction
where
she
hasn't
been
able
to
really
dig
into
this
particular
piece
of
work.
I've
also
not
been
really
looking
at
it
directly
because
I've
been
focused
on
you
know
the
gnap
working
group
and
some
other
other
standard
stuff.
H
All
that
said
she
did
do
a
lot
of
really
important
sort
of
major
surgery
changes,
sort
of
the
first
round
of
changes
that
was
back
in
december.
I
think
that
was
that
that
got
put
in
and
those
changes
are
going
to
pave
the
way
for
the
next
steps
that
we
need
to
take
with
the
draft.
H
So
the
biggest
change
is
actually
a
dependency
on
structured
headers,
so
the
signatures
now
explicitly
uses
structured
headers
and
not
only
to
present
the
signature,
but
also
as
a
mechanism
of
signaling
which
components
are
signed.
So,
as
most
of
people
know
in
the
group,
the
hard
part
is
not
signing
a
con.
H
The
content,
the
hard
part,
is
getting
the
content
in
a
state
that
you
can
actually
sign
it,
which
is
why
it's
here
in
the
http
working
group
and
not
in
one
of
the
security
area
working
groups
right
so
with
with
that
said
now
that
we
have
this
based
on
http
signatures.
H
Our
next
steps
are
to
sort
of
further
that
canonicalization
algorithm,
that's
in
here,
so
how
you
pick
apart
the
different
bits
of
the
http
message
and
put
them
into
the
signature
base
for
both
the
http
client
and
http
server,
so
the
sender
and
receiver
side
of
of
the
signature
bit.
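The "signature base" idea described above can be sketched roughly as follows. This is a simplification of the draft, not its exact syntax: the component names, the line format, and the `signature_base` helper are illustrative assumptions. The key point is that the list of covered components is itself serialized into the string being signed, so a verifier knows exactly what was covered.

```python
def signature_base(components, message):
    """Build a canonical string to be signed (simplified sketch).

    components: ordered list of component names, e.g. ["@method", "host"]
    message: dict with "method", "path", and a dict of lowercase headers
    """
    lines = []
    for name in components:
        if name == "@method":            # derived component: request method
            value = message["method"]
        elif name == "@path":            # derived component: request path
            value = message["path"]
        else:                            # ordinary header field
            value = message["headers"][name]
        lines.append(f'"{name}": {value}')
    # The covered-component list is part of the signed string, so the
    # declaration of what was signed is itself protected by the signature.
    covered = " ".join(f'"{c}"' for c in components)
    lines.append(f'"@signature-params": ({covered})')
    return "\n".join(lines)
```

A real implementation would then feed this string, not the raw message bytes, into the signing primitive on the sender side and rebuild the identical string on the receiver side before verification.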
H
I
cannot
speak
to
annabelle's
availability
in
the
immediate
future,
but
this
is
work.
That
is
that
other
things
that
I'm
working
on
depends
on.
So
I
am
actually
going
to
plan
to
grab
the
pen
and
try
to
do
try
to
further
sort
of
the
steps
that
she
had
started
back
late
late
fall
early
winter
and
push
those
forward
to
the
next
step.
So
this
is
going
to
be
there's
some
open
questions.
We've
got
tons
of
open
issues,
but
there's
open
questions
on
you
know.
How
do
you
indicate
what
was
signed?
H
How
do
you
choose
the
algorithm
and
indicate
the
choice
of
the
algorithm
and
protect
that
there's
some
stuff
in
here
that
got
added
about
doing
multiple
signatures
which
she
presented
at
the
last
at
the
last
interim?
If,
where
we
presented
this,
if
you
guys
recall
all
of
that's
good
stuff,
it's
all
very,
very
rough
in
there
right
now
and
and
it
just
it,
it
needs
a
lot
of
fit
and
finish
for
for
it
to
be
really
usable.
H
I
have
also
not
yet
implemented
this
draft
as
as
it
stands,
I
have
an
implementation
of
the
old
cabbage
signatures
draft,
on
which
this
was
originally
based.
It's
my
intent
to
take
that
which
was
built
in
java
and
and
update
it
to
to
speak
this.
That
said,
we
have
received
a
bunch
of
feedback
from
people
sort
of
outside
the
hp
working
group
who
are
really
eager
for
this
to
exist.
H
So
the
mastodon
project
has
gone
and
taken
this
draft
and
then
kind
of
forked
it
as
as
they
do
and
so
we'll
be
looking
at.
You
know
what
they've
done
in
their
fork
and
hopefully
be
able
to
bring
that
back
into
this
eventual
rfc
in
the
next
steps
and
yeah.
So
sorry,
I
don't
really
have
more
of
an
update.
I
wasn't.
I
wasn't
really
planning
on
making
this
presentation
today,
not
a
problem.
That's
fine!
Thanks
for
doing
that.
The
last
minute.
A
I'm
a
little
concerned
to
hear
that
they've
proactively,
forked,
it
it'd
be
great
if
we
could
encourage
them
to
come
and
participate
here,
it's
not
to
cost
them
anything.
So.
H
Yeah
I
agree
and
we've
we've
extended
the
invitation,
so
one
of
the
co-editors
has
ties
into
that
community.
The
thing
with
the
mastodon
project,
if
you're,
if
you're
familiar
with
it
at
all
they're
kind
of
their
own
ecosystem,
and
so
it's
it's
a
de
facto
whatever
is
implemented.
There
is
what
everybody
just
has
to
use.
H
So
I
think
it's
a
it's,
not
an
intentional,
like
we're
forking
this.
For
some
philosophical
reason
it's
this
is
just
what
we
implemented
in
order
to
do
what
we
needed
to
do,
and
so
that's
that's
stuff
that
you
know
ought
to
be
considered
and
incorporated
in
here,
and
it
would
be,
it
absolutely
would
be
best
if
they're
here
as
part
of
the
conversation,
I
don't
have
direct
ties
into
that
community,
but
I
know
people
who
do,
and
so
hopefully
we
can
continue
that
outreach
and
and
bring
that
forward,
because
I
wholeheartedly.
I
H
You
know
the
people
that
have
been
using
cabbage
in
its
several
dozen
variations
over
over
the
last
decade
really
should
be
feeding
that
wealth
of
experience
into
this.
A
Okay,
so
it
sounds
like
you:
you've
hit
a
couple
of
bumps
but
you're
still
working
on
it
and
you
have
a
fair
amount
of
of
encouragement
to
continue
and
then
conclude
pretty
quickly.
A
So
if
there's
anything
we
can
do
to
help,
and
especially
if
there
are
checkpoints
that
you'd
like
to
get
feedback
from
the
working
group
or
review
of
the
document
or
answers
to
questions,
please,
you
know
bring
it
to
the
mailing
list
or
talk
to
tommy
and
I
and
we
can
make
sure
that
everybody
takes
a
look
at
the
right
times.
H
All
right
will
do
yeah,
for
you
know,
for
the
for
the
chairs
benefit
largely.
My
next
plan
is
to
do
an
editorial
pass
over
the
document,
because
there's
a
lot
of
language
like
you
can
see
the
this
work
was
originally
based
on
coverage
and
has
been
adapted
blah
blah,
there's
a
lot
of
that
stuff
that
just
kind
of
needs
to
be
editorially
kind
of
cleaned,
up
and
excised.
H
Now
that
this
is
a
working
group
draft,
so
that's
going
to
be
my
first
step
and
then
the
second
step
is
to
sort
of
further
the
work
that
annabelle
started
when
she
moved
everything
to
structured,
headers,
so
the
the
whole
construction
bit
and
like
signing
the
signing
the
declaration
of
what
you're
going
to
sign
those
are
those
are
pieces
that
were
missing
from
the
input
drafts
and
are
really
really
important
to
the
security
model
of
this.
H
So
those
are
those
are
my
my
plans
with
this
draft
I've
also,
I
have
reached
out
to
annabelle
and
ostensibly
I'm
going
to
be
meeting
with
her
next
week
sometime,
but
I'm
I've
been
encouraging
her
to
to
continue
on
this
work.
Like
I
said,
I
don't
know
the
details,
but
I
just
know
that
she's
been
kind
of
pulled
in
other
directions
as
soon
enough
it
happens.
A
It does happen, yeah. And if you end up needing more help editorially, we can have a chat about that too. Does anyone have any feedback or questions for Justin about this draft?
A
So
before
we
go
on
it
occurred
to
me,
we
probably
need
to
make
one
more
announcement,
which
is
that
we
are
having
a
change
of
area
director,
not
quite
yet
the
next
ietf
we'll
have
a
new
area
director,
and
so
I'd
like
to
thank
barry
who's
been
our
area
director
on
and
off
for
the
last.
What
is
it
now
15
years?
Something
like
that?
It's
been
a
pleasure
working
with
you,
barry,
thank
you
and
also
to
welcome
barry.
Can
you
actually
do
you
have
a
camera?
D
Hi
and
it's
a
pleasure
working
with
you
guys
too
I'll
miss
you
yeah
put
up
your
round.
A
And
our
new
director
is
francesca.
I
hope
I'm
saying
that
correctly,
who
will
be
officially
taking
over
at
the
itf
meeting
in
march,
so
hello
and
welcome.
C
Looks good to me. So yeah, ideally Roberto would have been presenting here, because he's kind of taken point on some of the digest stuff I'll be talking about today. He couldn't make it — that's life — so I'll do my best job of channeling his expertise in the area. But please do forgive me if I get caught up in the right terms for payloads and representations and whatnot.
C
Please
the
last
time
we
presented
was
back
in
october
and
we
went
through
a
few
issues
since
then,
what
we've
been
working
on
is
making
the
editorial
stuff
that's
been
sitting
in
the
ed
this
copy.
We
haven't
pushed
a
new
update
out
because
we
wanted
to
try
and
push
on
on
some
of
the
more
designy
aspects.
C
So that's all really appreciated at this stage, and we encourage people to keep doing that, and we'll try and address those things as quickly as possible. If anyone cares, you can click that link and it'll show you the current diff between the editors' copy and what the last draft was.
C
Oh,
I
mean
the
talk
always
looks
pretty
bad,
but
here
it's
mainly
you
know,
hyperlinks
to
things
going
through
the
examples
making
sure
those
are
properly
formatted,
all
the
good
stuff
that
we
like
to
see.
There's
probably
still
nits
and
problems
with
them.
I
mean
on
the
face
of
it.
Those
things
look
a
bit
like
a
structured
header
but
they're,
sadly
not
and
the
ship
sailed
on
that
point,
but
that
that's
it.
C
So
it
does
look
like
a
lot
of
change,
but
I
I'd
say
none
of
it
is
super
substantial,
but
yeah.
If
we
could
go
on
to
the
back
to
the
slides
and
and
talk
about
the
things
that
kind
of
matter
here,
we've
got
two
issues
that
we
could
use
some
input
on.
The
first
of
those
is
the
one:
that's
really
a
sticking
point
I'd
say
for
us,
so
this
is
all
about
digesting
requests.
C
Yeah,
let's
just
go
on
to
the
next
slide
and
I'll,
try
and
explain
what
that
means.
So
the
digest
header
can
be
used
in
requests
and
responses
and
to
understand
what
that
means.
Kind
of
a
bit
of
historical
context
might
help
here
for
people
not
super
familiar
so
rfc
3230,
it
was
called
instance
digest,
but
kind
of
ignore
the
instant
stuff.
C
What
we
wanted
to
do
is
is
update
that
document
to
reflect
the
terminology
that
rfc
7231
and
whatever
the
new
core
drafts
will
be
for
semantics
to
try
and
capture
how
people
think
of
http
today,
so
that
digest
draft
is
is
a
standard
and
it's
going
to
be
updated
or
obsoleted
by
this
document.
C
I
don't
know
the
right
term.
I
get
confused,
but
but
ultimately
it
acknowledged
some
issues
with
content
md5,
which
was
deprecated
not
by
digest,
but
by
rfc
32
7231,
and
this
is
mainly
because
people
were
inconsistently
implementing
content
md5
when
it
came
to
partial
responses.
That
text
is
taken
from
the
appendix
and
so
but
again
content
md5
could
be
used
for
request
and
response.
C
So
historically,
there's
been
some
issue
where,
when
you're
calculating
a
hash
of
a
payload
body
or
content
like
we
might
want
to
call
it
now
that
people
make
the
wrong
assumption
about
what
the
bots
are
being
hashed
out.
Is
it
a
complete
message?
Is
it
facial?
You
know
that
just
causes
interrupt
problems,
so
digest
kind
of
fixed
a
lot
of
those
issues
so
go
on
to
the
next
slide.
C
And-
and
so
what
we
have
at
the
moment
is
I'd
say
a
bit
of
a
difference
of
opinion
between
roberto
and
julian,
and
anyone
who
might
want
to
pick
a
side
on
that
on
what
does
digest
mean
in
relation
to
requests
for
responses?
C
It's
fairly
straightforward
because
we're
familiar
we're
used
to
now
working
with
you
know,
range
requests
and
getting
partial
responses
back,
but
the
digest
of
a
request
is
a
bit
of
a
weird
mental
model,
and
so
what
we've
got
is
this
kind
of
cluster
of
issues
they're,
not
all
the
same
thing,
they're
all
slightly
different,
but
they're
they're
related
to
us
trying
to
get
an
answer
to
what
we
think
we
mean
in
respect
to
say
about
digests
on
request
bodies,
and
I
think,
if
we
can
unstick
that
that
it
gets
this
document
closer
to
being
done.
C
We've
got
a
whole
set
of
other
issues
that
we
can
just
make
progress
on
in
the
background,
but
we
keep
kind
of
circling
back
to
this
one.
So
the
next
few
sides
are
going
to
try
and
dig
into
this
a
bit
more.
It
might
be
a
bit
overkill,
but
let's
just
see
what
people
think.
C
Yes-
and
so
I
just
just
to
ground
this
in
reality
like
what
what's
an
example
of
where
you
might
want
to
send
a
partial
request,
but
before
the
http
police
come
and
arrest
me
on
that
and
say
you
can't
do
that.
You
know
the
use
case
might
be
that
you
want
to
upload
different
ranges
of
a
large
file
and
that
could
be
to
to
support
a
resumable
upload
model
where
you've
got
a
gigabyte.
C
Maybe
all
of
those
trends
were
individually
integrity,
okay
and
then
you
kind
of
wind
them
all
up
together
at
the
end
and
and
check
the
integrity
of
the
whole
thing,
so
that
use
case
might
want
to
use
digest
this
specification
to
help
that
upload
process
to
validate
the
integrity
of
some
or
all
of
those
things.
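The resumable-upload flow described above can be sketched roughly as follows. This is pure illustration, not anything the draft defines (the draft does not define partial requests): the helper names are invented, and hex digests are used instead of the header's base64 encoding, purely for readability. Each range carries a digest of just that range, and the receiver checks a digest of the reassembled whole at the end.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # The Digest field uses base64; hex is used here only for readability.
    return hashlib.sha256(data).hexdigest()

def upload_in_ranges(blob: bytes, chunk_size: int):
    """Yield (offset, chunk, chunk_digest) for each uploaded range."""
    for off in range(0, len(blob), chunk_size):
        chunk = blob[off:off + chunk_size]
        yield off, chunk, sha256_hex(chunk)

def reassemble_and_verify(parts, whole_digest: str) -> bytes:
    """Check each per-range digest, then the digest of the whole file."""
    buf = bytearray()
    for off, chunk, digest in parts:
        if sha256_hex(chunk) != digest:
            raise ValueError(f"corrupt range at offset {off}")
        buf[off:off + len(chunk)] = chunk
    if sha256_hex(bytes(buf)) != whole_digest:
        raise ValueError("reassembled file fails whole-body check")
    return bytes(buf)
```

The two-level check is the point of the use case: per-range digests catch a corrupted upload early, while the final whole-body digest confirms the ranges were stitched together correctly.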
C
And
so
today
there
are
people
doing
this,
for
you
know
things
like
cloud
storage
and
they
don't,
as
far
as
I
can
see
within
the
server
that
they
don't
use
digest,
but
they
use
headers
that
are
like
digest
in
this,
in
the
case
of
like
taking
a
a
hash
of
a
thing
and
having
a
process
to
transfer
that
to
one
from
one
side
to
the
other
and
then
validate
that.
So
if
you
go
on
to
the
next
slide,
it's
kind
of
as
part
of
the
http
core
work.
C
If a client, for instance, is trying to PUT a partial resource, there's language that no request method in the spec is defined to support Content-Range.
C
But
meanwhile
it
also
says
that
put
is
kind
of
implemented
by
some
people
inconsistently
and
and
relies
on
private
agreements.
So
it's
this
kind
of
helps
in
a
way,
because
until
now
we
didn't
have
anything
in
semantics.
That
acknowledged
what
was
happening.
I
don't
know
if
that
shifts
the
needle
on
coming
to
a
decision
on
anything,
but
it
it
it.
C
And
so
this
table
is
probably
over
simplifying
the
discussion,
but
it's
kind
of
a
high
level
view
and
it's
also
biased,
because
it's
from
the
authors
of
the
digest,
spec,
and
so,
if
I'm
giving
any
unfair
waiting
to
digest
compared
to
say
julian's
view
I
do
apologize,
but
but
that
wasn't
the
point
I'd
like
to
to
get
this
table
kind
of
filled
out
more
properly
in
some
way,
whether
it's
just
logically-
but
you
know
one
school
of
thought
here-
the
top
row
is
that
a
digest
in
a
request
is
always
computed
on
the
payload
data.
C
We
ignore
the
possibility
of
any
partial
payloads
in
requests,
and
so
you
get
the
whole
thing.
This
is
easier
for
say
like
a
server
to
implement
when
it
receives
a
request,
it
can
just
calculate
a
digest
quite
easily.
It
doesn't
need
to
consider
partiality.
C
That's the Content-MD5 behavior, which was, as I showed earlier, deprecated for a reason — because of that inconsistency of implementation. What we've also got here is an asymmetry between the way the world views digest usage in requests versus responses: Digest says it's about representations and applies equally to request and response, but we'd be applying a rule that it doesn't, within the constraints that we have from semantics today. But that's about the only con.
C
I
can
think
that
I
don't
know
if
we
can
get
over
them
because
of
the
rules
that,
like
the
corner
we've
painted
ourselves
into.
But
but
that's
how
we
see
it.
If
we
look
at
representation
data,
maybe
to
put
into
contrast
the
pros
that
we,
we
believe
that
if
we
apply
the
rules.
C
Similarly
to
requests
to
allow
partial
requests
that
we
maintained
the
intent
that
rfc
3230
had
we're
not
kind
of
revoking
a
contract
of
design
that
it
was
intended
to
have,
even
if
no
one
actually
used
that
so
far
and
what
it
gives
us
is
a
coherent
definition
of
using
digest
across
requests
and
responses.
C
We'd like to do this once and move on. There are some other things there I've probably already covered. There is a con here: intermediaries that implement Digest for a partial request would need to be able to distinguish that, and maybe they can't, based on the surveys that we've done and the language in semantics — and I don't know how we get over that one. So, next slide, please.
C
I
there's
a
lot
to
digest
there
and
I
I
fully
appreciate
people
aren't
like
probably
that
closely
tracking
this,
and
I
think,
that's
part
partially.
Why
we're
getting
stuck
on
thing?
Oh,
very
funny,
geoffrey
thanks.
I
should
have
thought
of
that,
and
so
so
what
I
wanted
to
do
is
just
put
up
a
straw
person
argument
here,
not
even
an
argument.
It's
not
an
argument.
It's
a
path
for
just
there's
some
way
to
let
us
move
on
and
unstick
this
and
kind
of
get
done
with
this.
C
What
we
thought
was
a
quick
rewrite.
So,
and
this
isn't
you
know
if,
if
anyone
hates
this
just
say-
and
we
can
come
up
with
something
else
but
yeah,
we
can
just
agree
that
digest
applies
to
request
representation
data,
but
actually
there's
nothing
really
defined
today.
That
would
let
anyone
use
that?
C
Yes,
there's
this
partial
put
use
case:
it's
not
commonly
solved
the
way
that
people
use
digestive
requests
today
is
just
to
send
the
whole
thing
and
sending
the
whole
thing
is
is
the
same
as
sending
the
payload
data,
it's
kind
of
equivalence
and
that's
fine,
but
that
we
shouldn't
prevent
some
future
usage
that
the
people
might
want
to
do.
C
And
so,
therefore,
if
needed,
somebody
else
can
go
and
standardize
partial
requests
and
that
activity
should
probably
consider
that
if
they
want
to
do
integrity
check
of
the
payload
data
that
they
need
a
new
header
that
emulates
what
content
md5
did
but
gives
the
flexibility
to
use
different
algorithms
that
are
actually
safe
to
use
today,
and
that
could
be
a
similar
format
to
digest,
but
with
a
different
name
and
a
different
definition.
That
makes
it
abundantly
clear
what
it's
there
for
so
yeah.
That's
that's
a
suggestion,
we're
happy
to
take
feedback
or
anything.
J
Hello, hello. So I think it would be extremely helpful if we actually had real-world examples for this. It's really hard to argue about this without something that we can look at and see a message exchange. So when you say "the request representation data" for a partial request — what is that?
C
Yeah,
so
I
would
say
if
you
want
to
send
a
a
a
thing
that
is
10
bytes
long,
but
you
don't
want
to
send
the
whole
thing
you
want
to
put
two
bytes
of
it.
Then
the
digest
that
you
would
put
the
digest
you
would
include
on
their
request
message
would
be
the
digest
of
the
whole
10
bytes,
not
the
two
bytes
that
you
were
sending
in
the
payload
data.
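A worked version of the example above, under the "representation data" reading being argued for: the request carries only a two-byte range, but the Digest field still covers all ten bytes. The function and the digest string format are illustrative sketches, not the draft's exact wire syntax.

```python
import hashlib

FULL_RESOURCE = b"0123456789"  # the complete 10-byte representation

def digest_for_partial_put(full: bytes, start: int, end: int):
    """Build the (payload, digest) pair for a hypothetical partial PUT.

    The payload is just the selected range, but the digest is computed
    over the full representation the client is asserting.
    """
    payload = full[start:end]
    digest = "sha-256=" + hashlib.sha256(full).hexdigest()
    return payload, digest

# Send only the first two bytes; the digest still covers all ten.
payload, digest = digest_for_partial_put(FULL_RESOURCE, 0, 2)
```

Note the consequence Julian raises next: the digest for a two-byte partial PUT is identical to the digest for the full upload, so a component that sees only the two-byte payload cannot verify it.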
C
I
don't
know,
but
by
by
the
by
the
law
or
by
the
intent,
that
digest
has
that
this
is
the
same
as
saying.
Well,
if,
if
I,
if
I
do
a
head
request
for
a
file-
and
I
don't
get
any
bytes
back,
what
was
the
point
in
receiving
the
digest
on
those
bytes?
I
don't
want
to
kind
of
litigate
on
what
people
want
to
do,
but
I
do
fully
appreciate
the
your
comment
that
having
a
use
case
actually
can
help
us
figure
out
what
we're
trying
to
do
here.
J
So
I'm
absolutely
with
you
that
I
don't
want
asymmetry,
I
want
consistency
and
if
we
can,
I
think
we
should
try
to
get
to
have
requests
and
responses
treated
the
same
way,
so
that
are
absolutely
good
goals.
J
And-
and
maybe
if,
if
you
say
you-
you
mentioned
content,
md5
and
the
confusion
about
what
it
applied
to
I
mean
I,
I
think
that
was
a
back
of
the
specs
that
defined
it.
So
they
never
said
so.
Nobody
knew
and
some
picked
one
answer
some
picked
the
other
answer
and
there
was
no
intro
and
the
obvious
answer
is
to
pick
one
of
these
and
and
to
to
see
which
one
makes
more
sense.
J
And
that's
the
discussion
I'd
like
to
see.
I'm
absolutely
not
yet
sold
on
the
idea
that
if
you
do
a
range
request
on
a
resource
that
the
digest
actually
should
apply
to
the
full
resource,
because
if
I
have
an
http
library
that
checks
digests,
it
can't
check
check
it
because
it
doesn't
see
the
full
resource.
J
So
maybe
my
world
view
about
what
this
is
about
is
misguided,
but
I
have
that
idea
that
if
I
write
an
http
library
and
that
I
can
flip
a
flag
and
say
check
digests,
and
it
can
do
that
and
if
we
are
hand
wavy
about
what
the
digest
applies
to
and
if
that
is
something
the
http
library
actually
doesn't
see.
Then
I'm
concerned,
because
I
would
want
that
to
be
something:
that's
a
component
that
builds
an
http
request
that
sends
it
or
a
component
that
receives
every
http
requests.
J
I
can
actually
implement
the
spec.
C
Yes,
I
I
accept
that
point.
I
would
say
that
having
in
a
previous
life
used,
signific
use
digest
plus
signatures,
but
let's
ignore
that
part
digest
to
effectively
reassemble
a
a
file
that
was
retrieved
using
http
over
different
mediums
to
give
an
integrity
check
of
the
thing
above
any
any
kind
of
specific
context.
That
was
like
an
application
level
check
based
on
http
metadata.
C
That
was
useful
for
me
to
do
or
further
you
know
to
be
able
to
fetch
the
like.
Do
a
head
request
to
fetch
the
digest
of
a
thing
I
want
to
check
and
prove
that
it,
it
is
what
I
think
it
is
was
also
useful.
I
agree
that
there
is
a
use
case
here
for
integrity
checking
the
actual
payload
that
was
sent
in
a
message,
but
I
I
think
that's
tangential
or
additional
to
the
digest
of
the
whole
thing.
J
Yeah
yeah,
but
if
you
I
understand
the
reassembly
thing,
so
I
mean,
if
you
do,
if
you
have
let's
say:
10
range
requests
to
get
a
resource
representation
whatever,
and
so
let's
assume
that
you
get
the
digests
for
each
chunk
and
then,
if
you
want
to
see
whether
what
you
reassembled
is
the
correct
thing,
you
can
always
do
in
head
without
a
range
and
will
get
the
the
digest
for
the
full
resource
and
you
can
check
it
locally.
J
So
I'm
not
sure
whether
flipping
that
to
it
must
be
the
full
resource
as
opposed
to
it
must
be.
The
payload
is
the
easiest
answer,
because
if
we
make
it
the
digest
of
the
payload,
the
answer
for
the
non-chunked
resource
is
is
in
in
the
ad
request
that
you
can
do,
and
you
will
always
get
that
and
we
don't
need
to
have
two
different
things
to
send
over
the
wire
to
different
header
fields.
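The flow Julian describes — per-payload digests on each range response, plus a Range-less HEAD to learn the whole-resource digest and verify locally — can be sketched like this. The "server" is a stand-in, not a real HTTP stack, and hex digests stand in for the header encoding; everything here is an illustration of the proposal, not defined behavior.

```python
import hashlib

RESOURCE = b"hello, digest world"  # illustrative server-side resource

def fake_get_range(start, end):
    """Pretend range response: (payload, digest of just that payload)."""
    body = RESOURCE[start:end]
    return body, hashlib.sha256(body).hexdigest()

def fake_head():
    """Pretend Range-less HEAD response: digest of the full resource."""
    return hashlib.sha256(RESOURCE).hexdigest()

def fetch_and_verify(chunk=5):
    """Fetch in ranges, checking each payload digest, then verify the
    reassembled bytes against the HEAD digest."""
    buf = b""
    for off in range(0, len(RESOURCE), chunk):
        body, digest = fake_get_range(off, off + chunk)
        assert hashlib.sha256(body).hexdigest() == digest  # per-range check
        buf += body
    assert hashlib.sha256(buf).hexdigest() == fake_head()  # whole check
    return buf
```

Under this reading, every component sees exactly the bytes its digest covers: each range response is checkable on its own, and the whole-resource check is an explicit extra request rather than an asymmetric rule.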
K
Also,
can
I
suggest
something:
there's
there's
literally
no
way
that
you'll
ever
resolve
one
or
the
other,
it's
impossible,
because
different
people
have
different
use
cases
so
just
label
the
ones
you're,
defining
as
what
they're,
what
they
are
if
necessary,
define
two
two
different
header
fields
or
two
different
parameters
inside
the
one
header
field
that
describes
what
it
is
it's
making
a
digest
of.
I
don't
it's
pointless
to
decide
which
one
of
these
arbitrarily
based
upon
who
has
the
strongest
argument
for
a
use
case.
C
Yeah
I'd
say
what
we've
been
trying
to
do
is
keep
it
in
the
spirit
of
what
we
understand
rrc3230
to
do
and
be
used
today.
Maybe
that's
not
fully
accurate.
I
don't
know
it's
hard
to
pull
people,
but
my
my
understanding
is
that
if
we
tried
to
to
turn
into
what
julian
said
or
suggested
that
we
we
kind
of
break
a
load
of
people
who
are
using
digests
today-
and
I
don't
want
to
do
that.
A
Okay,
we
have
martin
and
then
justin
and
q,
and
I
don't
think
we
said
it
explicitly
before,
but
if
you
want
to
say
something,
please
queue
in
the
chat
just
by
typing
q
plus
go
ahead.
Martin,
so.
D
I'm
kind
of
swayed
by
julian's
comment
here
when
I
look
at
requests.
There's
there's
not
a
lot
of
cases
that
I'm
aware
of
where
you
actually
convey
a
representation.
I
think
put
might
be
the
only
case
that
we're
concerned
with
and
then
we're
further
only
really
concerned
with
this
in
the
context
of
a
partial
put,
which
is
kind
of
in
the
core.
Specs,
now
very
clearly
carved
out
as
this
sort
of
crazy
you're
on
your
own
territory.
D
It's
not
really
standardized
in
any
way
and
explicitly
so
so
I
was
wondering
whether
or
not
the
asymmetry
is
something
that
we
can
just
live
with,
not
having
in
this
case
and
say
that
the
the
digest
applies
to
the
to
the
content
of
the
message,
as
opposed
to
the
representation
selected
representation,
which
is,
in
most
cases
no
different.
And
so
then
we
don't
have
to
worry
about
posts.
D
We
don't
have
to
worry
about
all
of
patch,
for
instance,
which
is
a
kind
of
a
partial
upload,
but
it
never
really
selects
an
individual
representation,
and
so
I
think
I
think,
we're
good
if,
if
we
have
the
asymmetry,
unfortunately
the
julian's
argument
about
what
does
an
implementation
do
really
kind
of
nailed
it.
D
For
me,
the
server
when
it
produces
a
response
has
access
to
its
view
of
what
the
selected
representation
actually
contains,
whereas
the
client,
when
it
makes
a
request,
is
always
making
an
assertion
about
something
that
maybe
exists,
maybe
doesn't
exist,
and
so
trying
to
address
this,
for
just
put
seems
like
it's
not
really
necessary.
H
Okay — so, as somebody who is very interested in using this spec as part of the whole signature stack: this is where we want to get protection of the body on PUT and POST requests, most specifically. It could be used on responses, but my use cases are largely driven by protecting requests, and in these cases it kind of boils down to the semantic differentiation between this partial representation versus the representation.
H
Now
this
might
be
because
I'm
still
a
bit
on
the
outside
of
all
of
the
depth
of
http
semantics,
and
I
understand
why
we're
going
in
we're
using
those
to
define
this
new
version
of
this
spec,
and
you
know
it's
important
to
align
that
said.
H
Am
I
supposed
to
just
take
this
byte
array
that
I
have
and
chuck
that
through
the
hash
function,
or
is
there
something
that
I
need
to
do
to
it?
First,
because,
like
you
know
similar
to
what
julian
was
saying,
I'm
when
I'm
implementing
this
or
when
I
have
implemented
this,
it
goes
into
a
layer
in
in
my
library
functions
and
it's
just
here's
the
http
message
that
the
you
know
this
request
object
that
I
want
you
to
make.
You
know
it's
got
a
method.
H
It's
got
a
url,
it's
got
a
body,
it's
got
some
headers
do
some
magic
and
add
this
digest
header
to
it,
and
then
there
will
be
another
step
that
says:
do
some
other
magic
and
add
a
signature
header
to
it,
and
so
I
need
to
be
able
to
understand
what
exactly
that
magic
is
in
order
to
have
the
the
appropriate
layers
and
libraries
put
those
pieces
into
place.
That's
really
hard
to
do
right
now,
brian
campbell
posted
an
issue
to
that
effect
back
in
december
as
well
and
yeah.
H
H
How
do
we
fix
that
going
forward
and
still
align
with
htv
semantics?
I
don't
rightly
know,
but
ultimately
we
do
need
this
to
be
very
precise
and
very
clear
about
what
do
I
do?
What
is
that
magic
that
I
need
to
put
into
that
implementation
of
a
digest
function
and
therefore,
of
my
signature
function?
C
Thanks, Justin — it's great to hear these kinds of comments. I agree; the input that Brian gave us earlier in the year was very helpful, because you kind of forget how people might just read the draft and use it. I'd say the trouble here is that it really is hard to get your head around, and I think there are always editorial improvements that document can take. I'm probably sounding defensive, but the fact that it's hard doesn't mean we're wrong — getting this right is very tricky, and what we've tried to do is use existing terminology and replace what instance digest was doing. Maybe we got it wrong; I don't know. But if you think through the perspective of your HTTP layer that is going to take some response, or some file you have on the system, and return it, there's probably going to be a layer that does some content negotiation and maybe encodes it based on what the UA asked for. In this case, representation digest — which is what we labelled this — would vary based on the content negotiation, though not transfer coding; that complicated things on top. And I'd say that if we bend too far toward how people think this might work in the easy case, we risk treading back to something like Content-MD5, getting it wrong, and ending up without interoperability.
H
You need to solve the general case for this, and that's hard, and I'm glad you're doing it and not me. But the thing is, from my perspective, most of the stuff that I need to sign is going to be created in memory by calling JSONObject.toString and passing that into a REST template or, you know, urllib or something like that, as a byte array or as a string, and that's what I'm going to be working on. There is no file. There is no negotiation, especially on the request side. It's just: this is the thing that I am sending you, and I need to cover that in this protected envelope somehow, and I need a really clear way to protect it. So this may honestly be a case — and I think the partial PUT argues for this as well — where, as elegant as parallelism is, and I'm generally a fan of it, they're not actually parallel, because they are different kinds of operations. The information you have available to you as an HTTP client versus an HTTP server is not going to be the same, and the spec may need to take that into consideration. There may be natural asymmetry. I'm not saying that there necessarily is, but there may be.
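Justin's simple client case above — an in-memory byte array, no file, no negotiation — can be sketched roughly like this. This is a hypothetical helper, not the draft's normative algorithm; the header name, algorithm token, and base64 serialization follow the general shape of the digest drafts, but treat them as an illustration only:

```python
import base64
import hashlib
import json

def add_digest_header(headers: dict, body: bytes) -> dict:
    """Return a copy of `headers` with a Digest field added.

    This hashes the raw in-memory bytes (sha-256), which matches the
    simple request-signing case described above: no content
    negotiation, just "hash the thing I'm sending."
    """
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode()
    out = dict(headers)
    out["Digest"] = f"sha-256={digest}"
    return out

# Typical client flow: serialize a JSON object, then add the header.
body = json.dumps({"hello": "world"}).encode()
headers = add_digest_header({"Content-Type": "application/json"}, body)
```

A signature layer would then run after this one, covering the Digest field among others.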
C
Cool — just to very quickly respond: something you might be able to do in this case, with what we introduced in this digest spec, is what we call the id hash algorithms. With those, it would always be clear in the header that you're calculating a digest over the identity representation. So it doesn't matter what it looks like on the wire, say — the other side always knows to take off any encoding and go back to identity encoding (which isn't a thing, but okay, whatever) and then do the validation on that, which can help mitigate some of these issues. But absolutely, thanks for the feedback, and if there are any specific suggestions you have on how we can make things clearer, please do send them while they're fresh in your head.
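The distinction Lucas draws here — a digest over the bytes as encoded on the wire versus an id-prefixed digest over the identity (un-encoded) bytes — can be sketched as follows. This is a rough illustration, not the draft's exact serialization rules; gzip stands in for any content coding:

```python
import base64
import gzip
import hashlib

def digest_value(algorithm_label: str, data: bytes) -> str:
    # Serialize as "<label>=<base64 of sha-256>"; the label only
    # records which input (encoded vs identity) was hashed.
    b64 = base64.b64encode(hashlib.sha256(data).digest()).decode()
    return f"{algorithm_label}={b64}"

identity_body = b"hello world"
encoded_body = gzip.compress(identity_body)   # Content-Encoding: gzip

# sha-256 covers the representation as encoded on this response,
# so it changes whenever the content coding changes...
on_wire = digest_value("sha-256", encoded_body)

# ...while id-sha-256 always covers the identity bytes, so a
# receiver first strips the coding, then validates.
stable = digest_value("id-sha-256", identity_body)
recomputed = digest_value("id-sha-256", gzip.decompress(encoded_body))
```

Here `recomputed` equals `stable` regardless of how the message was encoded in transit, which is the mitigation being described.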
A
So I heard, I guess, two things go by there. One was that Julian wanted to see more examples; I think that would be really helpful, because sitting here thinking about this, I'm not entirely sure I believe that the payload digest is the one you want. If there are partial PUTs, there are going to be cases where it's important to protect the integrity of the whole resource — the whole representation — and indeed there are some attacks that could take place if you don't do that. So we probably need to work through those use cases. And Roy made a suggestion in chat, which I'll just make sure people see, which was: maybe we need two headers, one for digest and one for content digest. That might be something to consider.
C
My question was more: is this thing undeployable without the two? I just want to check my gauging of the work. Is there significant pushback along the lines of: "okay, this digest did exist, but this work has revealed it's actually kind of a bit broken and we need something else"?
J
So if I understand correctly, the way this spec was born was from the wish to fix the old digest spec to be consistent with current specs and so on, and maybe we need to realize that the way that spec was written was in itself so ambiguous that we can't fix it without breaking somebody's implementation. So maybe a better approach would actually be to say we are guided — inspired, whatever — by this old RFC, but we don't need to be compatible with it. We just define different header fields and try to get those right, instead of trying to be compatible with something that, as far as I understand, doesn't work in practice. I hear about implementations using digest, but do they really exist? Are they widely deployed? Or is that a private agreement between a specific server and a specific library?
A
We've got Brian, and then I think we need to wrap this up — we're a little bit over time. We do have some slop in the schedule, but we need to move on pretty soon.
L
Yeah, I just wanted to follow up. I'm not even sure exactly how to express it, but for the sort of regular person that might come to this, I would ask again that — even if it is made to be not the body, but the potential partial representation — the digest spec could be more clear and declarative about when and where one can understand what the actual content being fed into the digest is. That would be really helpful, and I understand there's a new semantics document. I know that's not super actionable, because I'm asking you to describe it for me because I don't understand it, but I don't think I'm alone in the world of people that won't instantly grasp this. More examples would also be helpful. But I came to the issue repository trying to understand it, and ran across examples that actually weren't a digest of what was in the example. So making sure those are added and done correctly would also be helpful.
C
No, no, Brian — thanks for reaching out and actually engaging here. The examples thing is unfortunate; we broke those due to the classic reformatting-and-reflowing activity. But I am very sympathetic that this is hard to just come to and say, "oh, there's this header, I need to figure out how to validate it or how to produce it." I really would like to make things more accessible, but I don't want to have to describe all of semantics — there's a big draft that does all of this — and I'm willing to work on that. But it is tricky.
L
So maybe — and maybe I'm asking too much, I don't know — but at least being specific about the areas of semantics that are relevant and how they might impact it; at least getting the links right — there are some broken links in the references right now, which makes it all the harder to try to backtrack and figure out what's going on — and getting working examples, yeah, I get it. Those are just areas of feedback that I think would be really useful. And I think potentially another header that does more of just a dumb payload digest would be useful, but I'm not sure it's even strictly necessary if we can get to understanding when and when not this is applicable to the kind of use cases we've been looking at. Sorry again, not very actionable.
A
Appreciated. Okay, so maybe we should move on to the next issue and try to speed up a bit.
C
So, next slide — yeah, okay. This is a completely different thing from "how do you calculate whatever, based on weird semantics no one can find the terms for." Instead, this is issue 1377, which is how to deal with old algorithms. What we currently have is an IANA table that's out there and existing, and the digest draft we have today wants to change some of that. I don't want to put the whole table here, but to give a summary: what we're going to do to that table is, alongside the algorithm name, create a column that has a status to indicate it.
C
A lot of the algorithms that are there are not defined by digest, but they are used in digest to calculate the digest, and we are kind of saying, for all of them, "don't use them." But as you can see here, we've got different statuses — we have standard, deprecated, obsoleted, and a special obsoleted — so there's no consistency in the IANA table that we would make out of this document, and personally I don't think that really helps much. We have had questions like, "well, should we stop recommending MD5?" Yeah, great, we can. But what about UNIX sum? It was inconsistent — we're not saying UNIX sum here is better than MD5; they're probably all broken. The people who might classically use MD5 wanted something that was more secure, a stronger hashing algorithm, but it's busted. You could argue that maybe they were using it in a context where they thought it was better than UNIX sum.
C
But I don't know — it's lots of what-ifs, should-haves, could-haves, would-haves. So, going to the next slide: the thing we can control is that the statuses we're defining or allowing in this document are really confusing. There's no pattern I can find in them. I think we've organically grown them based on different issues that have come up — "oh, let's deprecate that one and obsolete this one" — and coming back to this, I was just instantly confused by it. So, for the sake of simplicity — going to the next slide — to help resolve this issue, what I'd like to do is just obsolete everything except the known-decent algorithms, SHA-256 and SHA-512, and say everyone must not use the other ones, knowing that they probably will if they want to. The digest spec is weird in that, say you provide a Digest header on a response for a thing, there's no requirement to do anything with it. The server could provide three different digest values with different algorithms that don't match when they're verified, and the existing language is pretty loose; again, if we change that, we risk not reflecting reality or breaking all the implementations. So my suggestion is just to make things simple for people reading the spec: "I shouldn't use those things — okay, understood." And that's it on this issue.
C
I wonder what people think about that proposal, and if they think it's okay, then I will land the PR to do that.
A
Yeah — every IANA registry deals with this problem; you are not alone. I think this is a fine approach: keep it simple, just two different statuses, that's great. The only thing I'd say is you might want to consider using "deprecated" rather than "obsolete", because "obsolete" obviously has special meaning in the IETF process — but no strong feeling about that. Makes sense.
C
That's cool. The one question I've got here is about our designated expert — I think James Manger; I might have got the name wrong — and their role in the modification of the IANA registry. They've interacted a bit on the GitHub issues in the past, but it's been a bit quiet, and I wonder how much toe-treading we might be doing here. But we could probably resolve that off-list.
A
If we want to, I might want to talk about how we enable it to be managed on GitHub rather than the other processes, but that's neither here nor there. The expert themselves — and that could be one or more people — is designated by the area director, and if we feel like we need another expert, we can talk to the director about that, or we can talk to James. James is still around; he still participates in IETF stuff, he's just quiet sometimes. But we can talk about that.
N
So, a quick question — I'm not too sure about all the dynamics around this.
D
So the TLS working group has sort of already trodden this ground: they have a "recommended" yes-or-no column. Maybe that's the way we can deal with this, if you want a different way to spell it, but I think this is the right thing to do. The fact that we had told people not to use MD5 but were perfectly okay with UNIX sum was bizarre, to my understanding, and this matches the sorts of uses that we're seeing from people nowadays. The sort of things that Justin and Brian have been talking about doing here really do depend on a cryptographic hash. So I think this is the right outcome here.
N
Yes — so, as I said, I'm not too sure I understood the whole dynamics around this digesting, but I know git uses SHA-1 hashes for their whole tree structure. And also, in the web archiving world, we have been using SHA hashes for the last two decades plus: basically every time we archive a page, we create a SHA hash and shove it in the WARC record. And I don't see that listed here, at least not in the standard part of it — is that SHA-1?
D
I think that those people who are still using it, because they have legacy needs, have to be very careful about how they use the algorithm. If it is just an integrity check, it's still actually okay; but the difference here is what we're trying to recommend it for. If you're using it, for instance, as a key in order to find things, and you're worried about collisions, then it's totally not appropriate to do that if you have any adversarial content involved.
N
Well, yeah — I mean, in git they use it both for integrity and to kind of index the data, and I know there have been attempts to create collisions intentionally. But anyway, it's a little too late to fix, like, 20 years of work in the web archiving world, for example. Going forward, maybe tools adapt to more modern hashing algorithms, but then everything needs to work backward in that case. That's it from my side.
A
I think we need to move on unless folks have other feedback. Lucas, if you have two more slides, you want to go through real quick?
C
Yeah — so, just on that backlog one, because it's kind of come up: we had this "id-" prefix thing for identity encoding, and it's kind of nice, but it's the one part of the spec that's new. And so there was a suggestion: well, actually, maybe it would be nice to take it out and make a spec, or some language, that would allow an "id-" prefix on any algorithm, so that it was clear the digest was computed on the identity encoding. I don't need an answer right now — I just wonder what people want, given that we just said we want to obsolete all the rest of them. It's not necessarily an issue now, but for the future it could come up, given how much interest there may or may not be in digest. So if people care, go and have a look at the ticket and comment. That's all I have.
A
Okay, so now we will move on to the cookies draft, please. For this we have just added two new editors: before, we had Mike West and John Wilander, and they are being joined by Lily Chen and Steven Englehardt to get this draft over the line and shipped. So welcome to the process — they've been participating on the GitHub issues for a while, and now they're editors. Lily, I think we have slides from you.
I
Yeah, hi — I'm Lily. I work for Google, on Chrome, primarily on cookies, and I'm really honored to be joining the editor team for 6265bis, along with Steven, who I'll let introduce himself.
O
So we have 29 open issues, and there's a bunch that we think are not really in scope for the work that we'd like to complete. I think these fall into some categories that basically either lack consensus or need more work. It's things like the future of cookies — will you still have access, and specifying how access to third-party cookies will look across the various implementers — and I think that's something we just don't have consensus around yet. Also in that zone are some proposals for changes to security mechanisms, which I think need more work, and then a bunch of topics around cookie expiration and eviction. So for those issues at the top, we think they should be deferred or closed, and if people disagree, it would be good to know. The interop issues I'll discuss in a little more detail on the next slide. Aside from that, we have some editorial changes we need to make, and then Lily, later in the presentation, will go through the progress we've already made and hopefully the things we've already resolved. So, next slide, please.
O
And so the interop issues that we'd still like to work on fall into three broad categories. We know there are some issues around syntax and parsing of Cookie and Set-Cookie headers; I think this is both where we actually need alignment between implementers and also areas where the spec could be more specific. One area that's very difficult in the spec is that there isn't really a good description of how non-HTTP APIs should retrieve cookies. I think it's particularly evident around SameSite, so there's been a bunch of questions around that, but also just in general. The approach we're thinking of taking there is to take what is currently a "here's how you build a Cookie header" algorithm, make it a little bit more generic so you could hook into it from a non-HTTP or an HTTP API, and then attempt to work out the details from there. So that's yet to be done. And then there are some domain attribute semantics that we still need to work through: things like what happens when the PSL changes, what happens if the domain attribute is empty, or what happens if localhost is specified. These are things that aren't specified in the spec that we'd like to have there, so we need to figure out where all the implementers fall and also where we want the spec language to go. And so I'll hand it over to Lily to talk about what we've done so far.
I
Thanks — next slide, please.
I
Yeah, so some of the things that have happened recently — some issues that we've either resolved or are in the process of resolving. There was an issue about parsing of multiple SameSite attributes: say you had SameSite=Lax, SameSite=garbage, SameSite=Lax again — how would that be parsed? The resolution was that Firefox aligned their behavior with Chrome and Safari, which was to take the last attribute, and that's already consistent with what the spec says.
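The last-attribute-wins behavior described here can be sketched as a small parser. This is an illustration only, not the draft's full Set-Cookie parsing algorithm (which has many more rules); the function name and its simplifications are my own:

```python
from typing import Optional

def parse_set_cookie_samesite(set_cookie: str) -> Optional[str]:
    """Return the effective SameSite value from a Set-Cookie string.

    When the attribute is repeated, later occurrences overwrite
    earlier ones — the behavior Firefox aligned to here.
    """
    samesite = None
    # Skip the cookie-pair itself; walk the attribute list.
    for attr in set_cookie.split(";")[1:]:
        name, _, value = attr.strip().partition("=")
        if name.lower() == "samesite":
            samesite = value  # last one wins
    return samesite

result = parse_set_cookie_samesite(
    "id=1; SameSite=Lax; SameSite=garbage; SameSite=Lax")
# result is "Lax"
```

Note that even the invalid middle value is simply overwritten; validation of the token against the allowed enum would happen later.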
I think. And then another issue is same-site versus cross-site requests with respect to reload requests, and there is an open PR to address that, which defines different behavior for a reload request that is initiated by a user agent's UI. And lastly, there was an issue about the request method on redirects with respect to SameSite — for example, a POST that redirects into a GET — and there was a PR that clarified that the method of the current redirect hop is used there. Next slide, please.
I
And then, also since the last interim meeting, we had a call for adoption for sections 3.1 to 3.3 of the cookie incrementalism ID, and there was strong support for that — so thank you to everyone who gave feedback on that thread. We ended up merging three sections of the cookie incrementalism draft into 6265bis, which were: treating cookies as SameSite=Lax by default, requiring Secure for SameSite=None, and introducing schemeful same-site cookies. Next slide, please. And then lastly, just to give an update on the web platform tests: thanks to Mike Taylor, the rewrite of the HTTP state test suite, originally by abarth, is complete, so there are new tests for each of the cookie attributes, and you can see the results from a recent run on wpt.fyi. The old tests never really worked, and the new tests work now — so thanks again to Mike. And I believe that's it.
A
Great — I'm really happy to hear about the test cases; that's fantastic. It's good to hear that this draft is picking up some steam. If there's anything you need from the working group in terms of feedback or input, please feel free to ask, and likewise if you need any support from the chairs — say, if you want to set up a regular meeting for the editors or anything. Does anybody have any comments or questions on this draft?
A
Okay, next up we have h2bis, which is Martin Thomson.
D
All right, go for it. Yeah, so not a lot to report here — the next slide has the details. We submitted a new version of the draft, which has many changes in it — too many, probably. The good news is that Cory is joining me to help on the work, and we've already made some pretty good progress on some of the editorial things that are coming there. Probably the big thing we're working on right now is making sure that the text lines up with the changes to the semantics terminology, and we're working through that at the moment. But what we're here today to talk about is some of the issues that we've got — the next slide is a link.
D
Okay, let's try this — how's that, readable? It's manageable; I can make out the words. I don't know what people's preferences are; mine is probably to go through the "probably not" ones, because I think we've got some fairly good sense of those, and I'll try to record some conclusions from the discussion here. Can we talk about 788, which is the static table? I don't think anyone ever wanted to do this, so if anyone wants to speak up for doing it — which, by the way, requires that we revise HPACK as well as HTTP/2 — then speak your piece. This implies a lot of other things; probably it's a bad one to have picked first, but I think we're in a position to close this one.
D
We'd need a new version of the protocol in order to do this at all, and there are a lot of people who are unwilling to do even that. So I think what we'll do here is say that we talked about it, and we might confirm on the list, but it's a "won't do." Okay, great. 787 is a similar one — people were talking about incompatible changes.
A
That's
correct,
I
should
we
leave
this
until
we
find
some
reason
to
consider
changing
it.
I
mean
at
the
current
time.
I
don't
think
we
have
any
impetus
for
this
change
all
right,
but
if
ian
wants
to
bring
something
to
the
group
for
consideration,
we
can
talk
about
that
and
that
might
trigger
a
new
ap
lpn
version,
but
I
think
the
bar
for
doing
this
based
on
the
way
we
chartered
this
and
the
discussion,
the
data
is
relatively
high.
A
No — what I suggested was to send SETTINGS_ENABLE_PUSH = 0 by default and then make it an extension. That was my suggestion.
D
Yeah, which is essentially what Cory suggests in the comment there. It would require a new ALPN in order to have it be the default, but a lot of implementations can send that themselves on their own cognizance anyway, so we could say you should send it unless you...
A
My comment was more: if we factored server push out into an extension — into a separate document — to give a little more separation from HTTP/2, and then just say the main document always sets SETTINGS_ENABLE_PUSH to zero unless you support that extension. But I'm aware that that would cost some editorial work.
A
Looking through the text here real quick: I think what we're moving towards is adding a note, or some context about the use of the feature, to make it clear that it can have some pointy edges and it's not terribly well used in a lot of cases. All right, so we have some queue — we have Ian and then Lucas. Go for it.
M
Yeah, thanks. I think, at this point — if we're not going to remove it entirely — moving it to an extension doc is probably just work; probably more churn and confusion than it's worth. I think adding some notes would be hugely helpful: notes both in the context of the APIs that are typically available in web browsers, which is not a particularly rich set, as well as notes about the pointedness of use and the caching limitations and such that we've learned since it was originally launched. And the fact that it may not be supported by all clients. I think calling out all those things would be helpful, and would add the context that we've gained since it was originally added.
C
I'll just agree with some of Ian's points there — pointing out that the APIs and the tooling and the debugging of push are pretty bad might help. Getting that text right is probably difficult, but I'm happy to review stuff there. The bigger point I wanted to make is: I think I agree with Martin's comment about the editorial work required for this.
G
I think default-disabling it would be totally reasonable. Just anecdotally — whether we're talking about inside a browser or outside a browser — I'm not seeing hardly anybody using it, at least on Apple platforms. So we can talk about disabling it; we can do a ton of editorial work or very little editorial work. But it seems like much of the chicken-and-egg problem of getting it deployed has sailed, and the longer we keep it around, the more we're extending that maintenance work forever.
A
I believe that when we're talking about defaulting to not sending it, what we mean is explicitly setting the setting with a value of zero, because changing the default — in the true meaning of changing a default — would require a new ALPN, and I don't think we're talking about that. If anybody disagrees, please say so; that's where I think we're at. Mike, go ahead.
Q
No, I think it's totally reasonable to say you should disable it unless you support it — that's just a clear announcement of your feature set. Akamai uses it; we do see some benefit, but it's small, and I know a lot of people are not able to use it successfully. So yeah, let's default it to off and move on. Okay.
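The "explicitly set the setting to zero" approach being agreed here amounts to sending an HTTP/2 SETTINGS frame carrying ENABLE_PUSH = 0. As a rough sketch of the wire format (per the RFC 7540 frame layout — the helper itself is hypothetical):

```python
import struct

SETTINGS_ENABLE_PUSH = 0x2   # setting identifier from RFC 7540

def settings_frame(settings: dict) -> bytes:
    """Build an HTTP/2 SETTINGS frame.

    Each setting is a 16-bit identifier plus a 32-bit value; the
    frame header is length(24) / type(8) / flags(8) / stream id(32),
    with SETTINGS always on stream 0 and frame type 0x4.
    """
    payload = b"".join(struct.pack("!HI", ident, value)
                       for ident, value in settings.items())
    header = struct.pack("!BHBBI",
                         (len(payload) >> 16) & 0xFF,  # length, high byte
                         len(payload) & 0xFFFF,        # length, low 16 bits
                         0x4,                          # type: SETTINGS
                         0x0,                          # flags
                         0x0)                          # stream 0
    return header + payload

frame = settings_frame({SETTINGS_ENABLE_PUSH: 0})
```

A server that advertises this on connection setup has "disabled push" in exactly the sense discussed above, without any ALPN change.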
D
Martin, if you'd like — that would be really good. I can write the same sort of thing, but if you have the context, that would be great, yeah.
D
So, 790: Lucas suggests that we provide some more advice about how to design new fields to take better advantage of compression, and we'd probably want to look at structured fields here as well, in terms of describing the rules.
A
It'd have to be much less narrative than the current blog entries, I think.
D
Right. I think the main consideration here is simply using the comma separator for individual pieces, and making sure that you are able to reuse those pieces — but that could only be a couple of paragraphs, if you want to go that far. I don't know; maybe Lucas can propose some text.
C
Yeah, I just want to clarify: this is one of those classic drive-by comments. I was just flicking through the spec and saw this weird section on cookie crumbling that I kind of always forget about, and I'd recently read Mark's blog, so it was all in my head at the time — I don't even remember creating this issue. So I think I could probably live without anything being done, but it's an opportunity to maybe help guide people who are doing things.
D
Yeah — that suggests to me that maybe we don't, unless we manage to get something that's really good. Can we flip this to editorial?
D
Oh — unfortunately, the joining of multiple cookies needs to remain in the spec, because you use a semicolon rather than a comma, and people need to know that. Yeah, the next one I think I've already flagged as editorial; it was a misclassification and shouldn't appear here. The last one is 770, which talks about frames with multiple errors.
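The cookie-joining exception Martin flags above — semicolon rather than the usual comma list separator — can be sketched with a hypothetical helper:

```python
def build_cookie_header(cookies: dict) -> str:
    """Join multiple cookie-pairs into one Cookie field value.

    Most HTTP fields combine list members with a comma, but the
    Cookie field is special-cased: pairs are joined with "; ",
    which is why the rule has to stay spelled out in the spec.
    """
    return "; ".join(f"{name}={value}" for name, value in cookies.items())

header = build_cookie_header({"sid": "abc123", "theme": "dark"})
# header is "sid=abc123; theme=dark"
```

A generic comma-joining field combiner applied to cookies would produce a value that cookie parsers cannot split correctly, which is the interop hazard being preserved in the text.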
D
It turns out that there are a number of ways in which people can encounter errors in the formatting of frames — maybe they're too long, maybe they contain a stream ID that doesn't exist, those sorts of things — and the text says "use this error code for this error" and "use this other error code for this other error," and it doesn't really provide any guidance about how to resolve conflicts where there are two different error codes that you might want to send.
A
I'm a little wary of — you know, is this an interop issue, or is this just someone wanting deterministic behavior in the errors they get in their logs?
P
I don't think I meaningfully disagree with Martin in any assessment here. I do want to note that the error-handling text in the QUIC document may not necessarily be all that helpful here, not least because it explicitly says an endpoint may use any applicable error code when it detects an error condition — which doesn't necessarily help us at all.
P
If we really did want to establish a hierarchy of error codes, we could. It would seem like the easiest thing to do would be to say connection errors trump stream errors, and if you have multiple errors at the same level, use the one with the lowest numerical identifier. In many ways it doesn't matter. I think the practical answer is that you can see any of the errors that might trigger. You would hope that connection errors trump stream errors, but you might not get that in practice.
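The hierarchy floated here was never adopted as normative text, but as a hypothetical sketch (the names are invented), "connection errors trump stream errors, then lowest code wins" amounts to a two-key minimum:

```python
# Hypothetical tie-break discussed above, not normative HTTP/2 behavior:
# connection-level errors outrank stream-level errors; among errors at
# the same level, pick the lowest numerical error code.
CONNECTION, STREAM = "connection", "stream"

def pick_error(candidates):
    """candidates: iterable of (level, code) pairs raised by one bad frame."""
    return min(candidates, key=lambda e: (0 if e[0] == CONNECTION else 1, e[1]))

errs = [(STREAM, 0x1), (CONNECTION, 0x6), (CONNECTION, 0x1)]
print(pick_error(errs))  # ('connection', 1)
```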
D
So the whole point of the QUIC change was to remove any hope of determinism for those people who are seeking it.
P
While I'm here, I will also object to the use of the phrase "the most appropriate error code", with no clarification of what appropriateness might mean exactly.
D
That was Upgrade, very specifically Upgrade, yeah. So it looks like we've got a case of an intermediary acting more like a tunnel than an intermediary, and we're seeing the consequences of that.
P
Is it reasonable to read Jeffrey's original issue as basically pointing out that we have specified a behavior that the majority of browser user agents don't appear to actually implement? Do we know what the current state of the browser user agents is here? Has anyone converged on Safari's behavior since this issue was filed? If not, I think this is the standard working group question, which is: to what extent are we required to document widespread non-conformance to the specification? In this instance, I'm inclined to say it probably doesn't matter if everyone actually does accept this and the spec says you shouldn't; I mean, it's on the client end. I don't think there are huge risks here, but...
D
We'll leave that in the "do nothing" box. It seems like no one else wants to do it either, so that's probably good.
A
Okay, so go ahead and close this, and if people feel differently, they can comment and we can reconsider it later down the road.
D
So I actually wrote a pull request on this one, which went through and articulated a proposal, and then it was pointed out to me that that wasn't in the original scope of the work we're taking on, which is probably just my fault, because I didn't actually check when the chairs passed that email by me; I was on holidays. So the pull request that I have here basically just goes through and guts all the priority stuff from the spec.
D
It
leaves
enough
there
for
implementations
to
understand
the
framing,
apply
all
of
the
rules
with
respect
to
the
length
of
the
frame
and
the
the
fields
that
are
in
the
frame,
so
that
the
frame
can
be
properly
validated
and
if
they
receive
a
frame.
They'll
know
what
to
do
with
it,
but
they
don't
have
to
do
any
of
the
prioritization
stuff
as
a
result.
Sort
of
replaces
it
all.
D
With
the
tombstone
saying
that
this
entire
priority
scheme
was
not
particularly
successful
and
leaves
it
at
that,
it
seemed
like
some
people
on
the
mailing
list
were
reasonably
comfortable
with
that
general
plan.
But
I
want
to
check
whether
there's
any
support
for
this
change,
because
it's
kind
of
big
and
could
be
a
little
disruptive.
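A minimal sketch of the "validate the frame, then ignore it" behavior described above, assuming the RFC 7540 PRIORITY layout (exactly 5 octets: exclusive bit, 31-bit stream dependency, 8-bit weight); the function and constants here are illustrative, not text from the pull request:

```python
import struct

PROTOCOL_ERROR = 0x1     # RFC 7540 error codes
FRAME_SIZE_ERROR = 0x6

def on_priority_frame(stream_id, payload):
    """Validate a PRIORITY frame as before, but discard its semantics.

    PRIORITY on stream 0 is a connection error; any payload length
    other than 5 octets is an error. A well-formed frame is parsed
    and then simply thrown away.
    """
    if stream_id == 0:
        raise ValueError(f"connection error PROTOCOL_ERROR(0x{PROTOCOL_ERROR:x})")
    if len(payload) != 5:
        raise ValueError(f"stream error FRAME_SIZE_ERROR(0x{FRAME_SIZE_ERROR:x})")
    dep_word, weight = struct.unpack("!IB", payload)
    exclusive = bool(dep_word >> 31)
    dependency = dep_word & 0x7FFFFFFF
    # Deprecation in action: the fields parse cleanly, then nothing happens.
    return None

# A valid frame is accepted and ignored:
assert on_priority_frame(3, struct.pack("!IB", 5, 16)) is None
```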
A
So, to be clear, this doesn't require any ALPN; it effectively just makes the priority information that people might still be sending less meaningful, in that it was always hints, it was always an optimization. We're just removing any meaning that it might have had.
D
Yeah,
so
so
what
would
happen
is
that
if
someone
implements
the
old
spec,
they
they
might
send
these
frames
and
someone
implementing
the
new
version
of
this
would
receive
those
frames,
but
essentially
just
throw
them
away.
You
would
get
no,
no
information
out
of
them.
I
think
it
does
point
out
that
it
might
be
might
be
sensible
to
consume
that
information,
if,
if
at
all
possible,
because
there
is
value
in
having
some
of
that
information,
but
it
doesn't
really
expand
on
how
you
would
do
that.
It
refers
to
the
old
spec
for.
P
Corey, I only just noted this on the issue, but I think this intersects with Lucas and Kazuho's priority draft in interesting ways that we might want to try to flesh out at some point, if we are interested in going down this road. In particular, the priority draft nominally deprecates the thing we are just about to rewrite, and I don't know entirely how that's going to read. I also wonder whether we should consider, in this work, directing towards the priority draft in the event it finishes earlier.
D
Yeah,
so
I
originally
didn't
have
a
pointer
into
that
priority
draft
because
I
wasn't
sure
how
far
along
we
were
with
it.
But
given
the
update
we
had
from
lucas
earlier,
I'm
actually
thinking
that
it
might
be
a
good
idea
to
provide
that
pointer
because
I
suspect
I'll
finish
around
the
same
time
now.
M
Yeah, if they finish around the same time, I would definitely be supportive; I think it makes sense. I wish I had reviewed this PR before; for some reason I missed it, I apologize. I will definitely review it, but I think in the general direction this is the right way to go, and the reference to the priority draft makes it a lot more compelling.
A
So
we
we,
we
seem
to
have
some
levels
of
work
for
this.
I
just
want
to
ask:
does
anybody
have
concerns
about
merging
this
pr
going
down
this
direction?
I
think,
if
from
where
I
sit
now,
it
seems
like
we
should
have
a
consensus,
call
on
the
list
just
to
make
sure
that
everyone's
seen
this
is
obviously
subsequent
and
then,
if
that
goes
well,
we'll
we'll
go
ahead
and
merge
the
pr
does
anybody
have
any
concerns
they
want
to
talk
about
at
this
point.
B
That makes sense to me. Just to insert myself quickly: Martin, you mentioned, does this PR actually reference the old version of HTTP/2? Like, would this take you back to that old RFC?
D
So the only other one that I kind of wanted to get a sense of, and we probably don't have time for it, is the upgrade mechanism. It turns out that I don't think anyone's implemented that at all; some people do the prior-knowledge thing for cleartext h2, but I don't think anyone's done the Upgrade implementation. I could be wrong, but there's no widespread interoperability of that, and we could potentially do the same sort of tombstone trick for that.
D
All right, so that one requires a pull request, and I'll have to work on that. We can.
D
Oh, we can do this or not; I have no opinion.
M
Yeah, I think there are ways to move forward on this, but I think they're pretty complicated, and I don't know; I mean, there are a lot of bad clients and bad servers out there that are just not going to be upgraded in a time frame that we're happy with. If you wait 10 years, then I think there's hope, but that's an awfully long time.
M
So
I
don't
know
what
people
want
to
do
here,
but
I
I
you
know,
even
when
we
talk
about
sending
new
settings
and
things
like
everything
like
that
seems
fairly
concerning
to
me.
So
I
think
I
would
like
everyone
to
take
like
a
close
look
at
this
and
really
think
about
whether
they
think
in
their
deployment
scenario
they
could
actually
deploy
like
a
new
setting
or
a
new
frame
and
if
no
then
try
to
figure
out
like
what
they
can
do
to
make
that
better
or
basically
say
like
no.
M
This
is
basically
impractical
for
us,
yeah
ellen.
E
So
we've
taken
a
couple
of
stabs
at
trying
to
break
this
rust
off.
Most
recently
was
toward
the
end
of
last
year.
We
tried
to
deploy
the
websocket
setting
again
on
all
of
the
vips
that
facebook
serves
and
we
got
it
up
to
whatever
90,
with
nobody
really
complaining
when
we
made
it
a
hundred
percent
that
made
it
so
that
the
ok
http
clients
that
were
only
succeeding
because
they
were
retrying
now,
would
consistently
fail
and
the
people
who
the
people
who
are
still
using
it,
came
and
chased
us
down.
E
Coincidentally,
we
had
someone
come
by
that
week
and
say
that
speedy
wasn't
working
anymore
and
we've
had
that
disabled
for
like
five
years,
but
anyway
beside
the
point,
so
we
we
had
to
back
off
and
that
team
told
us
that
they
would
fix
their
client
and
move
forward,
and
so
maybe
we'll
be
able
to.
But
you
know
we're
not.
You
know
we.
We
have
a
little
bit
more
control,
probably
over
the
general
clients
that
that
hit
us,
and
then
everybody
else
does
so,
but
that's
just
some
data.
This
is
this
is
hard.
A
So I just wanted to say, based on this discussion and our previous discussions about greasing: I don't think we should try to use this spec as an opportunity to coordinate greasing, and it certainly shouldn't require greasing. But if, without a lot of effort, we could put some text into this draft to help support people when they do grease, to give them something to point at and say, hey, the spec says we can do this and it encourages us to do this, that would be helpful. But I think that's probably as far as we can go with this particular effort. That doesn't mean that people shouldn't continue doing experiments, and we can talk about coordination later on, but in this effort that's probably as far as we can go.
M
I'd just add the note that we've enabled WebSockets for HTTP/2 either two or three times, I'm not sure which, and all those times we had to roll it back for various reasons, and the reasons were more complicated than what Alan called out, because some of them were internal clients and some of them were external clients. And yeah, we can force all the people at Google down there, but we can't force anyone else.
A
Okay, Martin, do you have anything?
A
If we have more time, I'm going to give a 30-second update on three more drafts on Thursday. Sorry, we could also just talk about it on Thursday; we have time then. It's true, we do have two hours, and the rest doesn't look like it's that bad, so let's get to that. So we'll see everyone in two days. I think for most of you it's Thursday, but for some of us it'll be Wednesday... no, Friday. Fine. All right, thanks everybody, take care.