From YouTube: HTTP WG Interim Meeting, 2020-05-26
A: I can hear you. Okay, my microphone was working but my headphones were not; that's what it is. Great.
C: It is now the top of the hour, so let's go ahead and get started. This is the HTTP Working Group virtual interim meeting. I'm Mark Nottingham; I'm going to be running the slides, and Tommy Pauly is going to be running the queue. We have five agenda items to get through and about an hour and 15 minutes, so let's go ahead and get started. The blue sheets, such as they are, are on the Etherpad. Everyone can still see my screen? Yes? Okay, great! So please add your name to the Etherpad.
C: Right, Justin, thank you very much. If everyone else could pitch in to make sure things stay on track, that'll be much appreciated. So we have scribes, and we know where the blue sheets are. As a reminder, this meeting is under the IETF Note Well; those are the terms under which we participate regarding things like intellectual property, code of conduct, anti-harassment, and so forth. If you're not familiar with it, please use your favorite internet search engine to find it by searching for "IETF Note Well". And scribes, feel free to interject any time.
A: And while we're waiting, just as a reminder for people on how to manage the queue: if you would like to speak, you can type "+q" in the WebEx chat, and "-q" if you want to get out of line. I'm going to assume that if you do "+q" during the middle of a talk, it's a clarifying question, and we'll try to get you inserted.
F: Alright, yeah. So what are we talking about? We're talking about ways to create durable signatures over HTTP message parts: selectively identifying parts of the request to sign, signing those, and attaching that to an HTTP message so that the signature can survive all of the various transformations and mangling that happens to requests and responses as they go from sender to recipient. Next.
F: The message elements that are signed are currently presented in the confusingly named "headers" parameter, along with a couple of other metadata properties like created, expires and algorithm, which you don't see here because it has a default value. Okay, next is a quick overview of how the algorithm works today. Alright, so that's how the mechanism works today. Where are we at? It was adopted as a working group item earlier this year; we put out the initial draft based on that and got it converted to Markdown.
F: Finally, I have a PR out for pulling that into the working group repo, and then we've got a number of open issues for the working group to work on, many of which are now issues in the repo; thank you, I think, Mark, for tagging those properly. So mostly today I want to get feedback on an incoming change (go to the next slide) to address some of the open issues: specifically, replacing the kind of bespoke Signature header field format with Structured Headers. So, go to the next slide.
F: If you look at the example there on the right, you'll see that the Signature-Input is a dictionary keyed off of this signature identifier token. The value is the list of identifiers for the content that is signed, and further metadata, like the key and the creation timestamp, is attached as parameters to that inner list. The Signature header is then just a dictionary mapping that same signature identifier to a byte sequence representing the signature itself.
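As a rough illustration of the shape being described (a hedged sketch, not the draft's exact syntax: the "@method"-style identifiers, the parameter names, the HMAC key and the signature base string are all made up for the example), the two headers could be assembled like this:

```python
import base64
import hashlib
import hmac

def build_signature_headers(sig_id, covered, params, key, signature_base):
    """Build Signature-Input / Signature header values shaped like the
    structured-fields format discussed above: a dictionary keyed by a
    signature identifier, whose member is an inner list of covered
    content identifiers with metadata attached as parameters, plus a
    second dictionary mapping the same identifier to the raw signature
    bytes as a byte sequence (":...:")."""
    inner_list = " ".join(f'"{c}"' for c in covered)
    param_str = "".join(f";{k}={v}" for k, v in params.items())
    signature_input = f"{sig_id}=({inner_list}){param_str}"
    mac = hmac.new(key, signature_base.encode(), hashlib.sha256).digest()
    signature = f"{sig_id}=:{base64.b64encode(mac).decode()}:"
    return signature_input, signature

sig_input, sig = build_signature_headers(
    "sig1",
    ["@method", "host", "date"],          # illustrative content identifiers
    {"keyid": '"test-key"', "created": 1590000000},
    b"secret",                            # hypothetical shared key
    "example signature base",             # stand-in for the real base string
)
print("Signature-Input:", sig_input)
print("Signature:", sig)
```

The point of the two-dictionary layout is that a verifier can match the `sig1` member in Signature-Input against the `sig1` member in Signature, reconstruct the signature base from the listed identifiers and parameters, and check the byte sequence.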
C: While we're on this slide: in Structured Headers, keys (in other words, the names in key-value pairs, so for parameters and dictionaries) are forced lowercase, and I notice here you use "keyId" with a capital I. If that's an issue for you, if you just don't want to force lowercase, we should talk about it, because Structured Headers is just about out the door.
F: We can lowercase that "keyId"; that's just the capitalization on the parameter today. Since we're dramatically changing the format here, I don't see any issue with making that forced lowercase, you know, doing snake-casing or something like that, whatever fits. I have no problem with that.
C: Okay, great, thanks.
F: Okay, no other questions; go to the next slide. We're going to talk a little bit about some of the things I mentioned here, some of the drivers, the things we get out of structuring it this way that we're not doing today. By pulling the content identifiers for what's been signed out into this list, it gives us a nice way to start attaching parameters to those content identifiers themselves. I'm giving some examples of that.
F: Here's what that could look like: we could use a parameter to specify, say, that we're signing a specific cookie, not all of the cookies but one specific cookie, maybe one used for authorization, for example. Or, as I show on another slide, the possibility of having multiple signatures.
F: You could have these chained, so one signature actually incorporates, is actually signing over, a previous signature. Or we could have parameters that let us identify specific pieces of another structured header value that we want to sign: for example, the first four elements of a list, or a specific key within a dictionary header. That would give us a way to have structured headers that are durable as more items are potentially added to those signature fiel... sorry, to those structured header fields, as the request is mutated.
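The "specific key within a dictionary header" idea could look something like the following minimal sketch (the field value is an invented example; a real implementation would use a proper Structured Fields parser rather than the naive comma-splitting here, which would mishandle commas inside quoted strings):

```python
def dict_member(field_value: str, key: str):
    """Return the serialized member of a Structured Fields dictionary
    for `key`, so that only that member, rather than the whole field,
    is fed into the signature base. Naive: splits on top-level commas
    and ignores the possibility of commas inside quoted strings."""
    for member in field_value.split(","):
        member = member.strip()
        if member.split("=", 1)[0] == key:
            return member
    return None

# Signing only the "b" member keeps the signature valid even if other
# members are later added to, or removed from, the dictionary header.
example = 'a=1, b=2;x="y", c=3'
print(dict_member(example, "b"))
```

This is what makes the signature "durable" under mutation: intermediaries can append unrelated members without invalidating a signature that only covers `b`.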
F: I mentioned multiple signatures, and here's an example of what that might look like. You could imagine that signature one was added to the message at one point, and then another system downstream adds this second signature, where we're signing over the original signature.
F: So this takes care of lots of interesting problems for us. It gives us a chance to fix the confusingly named "headers" parameter: since it doesn't just include headers, it references other things, "headers" is a bad name. It gets rid of our bespoke field value format. It gives us a nice, easy way to include signature parameters within the signature. It lets us support multiple signatures. With the content identifier parameters, it gives us ways to reference parts of structured or unstructured headers. And it actually, potentially, makes it easier for us to construct this.
F: Currently, each signature metadata property is added as its own entry in the signature input. We could potentially replace that by just including the structured header, the inner list for that signature, as the first line of the signature input, so we're reusing that same structure within the signature input to make these things a little bit easier to construct. And that guarantees that everything we're putting in the message to describe the signature is included in the signature input.
I: Alright, this is ekr. So, I have no opinion on whether you use Structured Headers or not; that seems fine. But the idea that you're picking out individual pieces of other headers, which have semantic value, and not covering the rest of them, strikes me as making this problem of selective coverage even worse, and almost impossible to secure: a violation of what the system is supposed to do. So I can't say I'm super in favor of that.
I: Yes, and then actually building an application that does anything coherent is going to be difficult. I mean, this is exactly it: take XML DSig, trying to cover individual pieces of things and then make an assertion that that's secure. You know, there are some really obvious mix-and-match attacks. I saw your initial slide where you're saying, we're covering some of it, we're not covering, you know, Content-Type, and we know there have been security problems on the web because of not covering content type, people putting attacks in, stuffing stuff like that. This seems like we're going deeper and deeper down the hole to where reasoning about the security of the system is nearly impossible. So.
F: You're absolutely right that, in order to really understand whether we're actually building a secure solution, we have to consider what we're including in that signature. The challenge with signing HTTP messages, though, and I don't have to tell anybody on this list this, is that the use cases for HTTP are many and varied, and between any given use cases there could be tons of overlap, or no overlap, between which parts of the request are important and which ones aren't from a security, you know, application-level semantic standpoint.
F: So the intention here is that we're giving a foundation that systems, levels of the application that are closer to those use cases, can then build on, to say: alright, for our use case, here's what you have to sign. The cookie one is an interesting one because, depending on how your application is built and what layer you're doing your signing at, you may or may not even have access to all of the cookies that are being passed through to be signed.
I: So, you know, I guess, just quickly: this seems to cut against what I think is the practice in the rest of the world, where we're trying to design primitives which are safe to use, and this is a primitive which doesn't actually give you any confidence at all. That said, digging that hole deeper seems sad; this is what XML DSig thought they were doing, and it didn't work out at all. And it's worth noting, I think, that at the point where you're basically picking and choosing individual headers to sign, this is basically weak-sauce compression over just stuffing the data directly into the signature itself, and that would have a lot clearer semantics, namely: this is the stuff I actually mean, as opposed to: here's a signature, and I'm supposed to go and reassemble it. Something like that.
G: Thanks. This work has been in the queue for our working group for a long time; I just saw some of the issues you filed. I think I have two main questions. The first one: I see in your slides and in the issues that you removed the "expires" property, and I think that is not a secure choice. Because, yes, I sign something with my private key, and I want to be in control of the signature, not defer or delegate the policy of its validity to somebody else. Then...
G: My question is: there are some data, like the time interval of the validity of the signature, for example created and expires, and some selected signature metadata, that I think should be conveyed together with the signature. Because it's important for me that the signature is self-contained in one header. The signed data could be outside, but the metadata, the signature validity and some other selected information, should be in clear text together with the signature.
F: Okay, I'll take those separately. First, regarding expires: that hasn't actually been removed, it's just not in any of my examples here, because fitting examples on a slide while keeping them legible is hard. It's also optional in the current text. Let's continue the conversation on that one, if we can, through the issues. You know, personally, I see both sides of that argument; I created that issue mostly so we could have a place to have that conversation, not necessarily because I think...
F: If we want to have that content identifier list, it starts to get hard to fit all of that into a single header, a structured header field, if you also want to be able to do multiple signatures. And I do think multiple signatures is something we're going to want to support, in part because of multiple parties.
F: If you want to, maybe reply on the list with a little more context on whether you have security concerns about separating the two. I think that, since we're covering the metadata in the signature itself in this proposed change, that mitigates any security concerns, but I'm curious whether I'm missing something there.
F: Let's cut it here; there's other stuff that goes into other topics if we had time. I will raise those as topics on the list. A preview: it's primarily around algorithm selection and identification, and how we couple that, or not, with the key definition. So we'll have fun with that one. Thank you.
C: Thanks.
B: What this change does is that now, instead of having one switch to turn the feature on or off and then letting the peer decide what to request or send, you can separately specify your support for client-to-server or server-to-client certificates, so that, for example, an implementation that wants to support client certificates but not let the server publish additional identities, or vice versa, can do that. We've also got a more rational division of how errors are handled.
B: So, basically, treating anything that is a misuse of the protocol as a stream error, but getting rid of stream errors for things like expired certificates and letting those just be handled at the HTTP level: okay, 403 Not Authorized, because your certificate has expired, or isn't trusted, or whatever. And then the last one is just editorial. We had already changed things around to require the client to declare what certificates and identities it is using on a request, but we didn't state that the server must not consider any other certificates
B: that the client published. So we're making it clear that if the client doesn't say to use the certificate, you don't use the certificate. I would appreciate people looking at the most recent version of the draft to see if there's anything else they would like to see touched up, but for the large part I think the draft is in good shape. Next slide.
B: So what I'm seeing in discussions about this is that we've kind of got things going in two different directions here. This draft enables two main scenarios. On the client certificate side, it's a way to use client certificates in h2 and h3, where currently you can't really do that unless you request it at the beginning of the connection, as part of the TLS handshake. And that kind of goes both directions.
B: On the one hand, there are people who want to discourage the use of client certificates as a security practice and therefore don't want to add features to support it, and I understand that perspective. On the other side, you have people who have deployments with client certificates and would really like to be able to use h2 with them.
B: We see a similar split in usage on the server certificate side. It's great to be able to use the connection for an additional name without having to go through DNS or SNI and thereby expose the hostname you're asking about to the network. On the other hand, the client asking about any particular certificate tells the server about hostnames it might not actually be authoritative for, and so some browsers have said they would be hesitant to do that piece.
B: So that kind of leads to: are we doing this? Because I've gotten one useful review on the most recent draft, there were no reviews on the PRs (I let them sit for a month before I merged them), and as far as I can see, we don't yet have any client implementations of this. Now, Exported Authenticators is going through, I think, its third last call, so our dependency is about to be satisfied, but it's really difficult to look at moving this draft forward without some good implementations and some experience deploying it.
J: Watson Ladd, Cloudflare. We're very interested in this proposal. I just want to say that, with the privacy concerns, we're thinking in terms of the server offering up certificates it knows about; it knows the client is interested because, say, there are subresources on a page which would be requested.
B: Right, right.
C: This is your last slide, right?
B: Yes.
C: So, just from my standpoint: Tommy and I have been watching and talking about this one for a little while, and we were concerned about precisely the issues you're raising here, Mike. That's alright; I thought we were going to talk about this in Vancouver.
C: This is obviously our first chance to do that since everything changed, and so we want to get feedback about whether we keep working this indefinitely, whether we publish it as Experimental, or whether we abandon it until we can get somebody to implement on the client side. I think there are plenty of people who are interested on the server side, as Watson said; it's just a question of whether we can actually get something happening on the client side.
K: Martin. My name is on the draft, so it's quite clear that I think this is worth doing at some level. The problem for us has always been simply justifying the expense relative to the benefit, and how we can fit this on a roadmap when there are so many other things occupying the time of the teams involved. We're doing QUIC, we're doing all of these other things, and this has just always been second to a bunch of other things.
K: I think that mirrors my view a lot. I mean, Martin and I are at the same browser, but I think one could try to get a sense of what the actual functionality is that people have enthusiasm for. My impression is that Watson is interested in the server certificates and Mike is interested in the client certificates, and those are actually quite different things with pretty different properties, and each carries a fair share of the complexity. So, even though this was first floated a while ago, I think it would be worthwhile, perhaps, to try to gauge what it is people actually want here, and if the balance is strongly in one area, that might help move the needle.
K: I want to add to that: the complexity around getting the UX right, and making sure that you can correctly wire up all of your requests with the right certificates, and all that sort of thing, is non-trivial. We've looked at it, and that's where the complexity lies. You can plumb the TLS stack relatively easily; you can plumb the HTTP stack with some work; but the rest of it is kind of tricky.
A: Let's check this one later on, perhaps. Thank you, and thanks for the feedback, everyone. Thanks very much. Yeah, thanks, Mike.
L: First of all, I'd like to apologize about the slides. I sent them out like three weeks ago, but somehow managed to mangle them with a version from Singapore in some way, so I had to update them shortly before the presentation. The good news is that the issues we had there are still relevant, so anyone who did read ahead, you're not missing much. We've only got a few slides here, so let's take it away. Next slide, please. Just a brief overview of who's "using" this, and I use that a bit loosely.
L: These are things we're aware of that reference digest headers in some way. We've got the content-coding and signature specs, where they want to protect the payload body and sign the Digest header as a means of doing that, and some banking APIs that also make use of it via signatures. Next slide, please.
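For reference, the Digest header under discussion carries an encoded hash of the payload body; a minimal sketch of producing one with SHA-256 follows (the exact algorithm token and encoding here reflect one reading of the digest-headers draft and should be checked against the current text; the JSON body is just an example):

```python
import base64
import hashlib

def digest_header(body: bytes) -> str:
    """Build a Digest header value of the form `sha-256=<base64 hash>`
    over the payload body bytes."""
    return "sha-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()

print("Digest:", digest_header(b'{"hello": "world"}'))
```

Signing this single header, as the signature specs mentioned above do, then indirectly protects the whole payload body.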
L: Since we last presented there have been some changes, nothing huge. Another editorial sweep to try and improve some of the readability and the presentation of the data. If anyone has looked at the draft, you'll see there's a huge section of examples to help try and explain some of the nuances of using digests, and, well, Roberto and I have discussed whether they're useful.
L: We've added some clarification on the digest of error responses, and we've been trying to update our terminology to align with HTTP core. This is mainly a clarification around the term "HTTP field": the digest can effectively appear in a header or a trailer, depending on how you might want to use it, so we're being careful to allow both of those use cases by using this new terminology. So we're making progress.
L: There are more issues than this in the tracker, if anyone wants to go to that link at the bottom and look at them, but these three or four are the main ones we want to highlight today. We have the relationship with validators and caching; we also have this weird parameter spec gap. Those are fairly easy, I think, so going on we can probably prepare some text, land it with some review, and be done with it. The less straightforward one...
L: ...is issue 970, which I'll explain in a couple of slides. So let's go on to the next slide and address each of these issues one by one. For some reason the ordering got messed up from what you just saw, but yeah, this parameter spec gap: the previous version of this document, RFC 3230, states this text: "for some algorithms, one or more parameters may be supplied", the BNF for parameters is the one used in 2616, and all digest algorithm values are case-insensitive. And that's all it really said about the matter.
L: In our new terms, we say the resource is specified by the effective request URI and any validator field. There has been some commentary about this last time around; I think we're not quite sure what "specified" means in this context, and I believe in Singapore Mark made the comment that we've actually just imported some of these terms and it doesn't help. There's a slightly better way for us to reword things within the document that would tidy up all of these slightly confusing things.
C: I have put myself in the queue. Yeah, I don't know if it's going to be worth it to go into the details here, and I'd probably need more context, but I think I can help with at least some of those issues, and I'm sure Julian can help you out as well. So please feel free to, you know, ping us, or to nudge us on the issues, if you don't hear from us. Some of it goes down to some fairly...
C: ...you know, you've taken on the joyous task of revising a very old HTTP specification, one which diverged, at some point in the timeline, from where everything else on the web went, so there are some fairly hairy issues there for you to discover. But we'll definitely try to help you out, yeah.
L: I think, and I'd say this speaking for Roberto, who might want to correct me, that we have the time to spend on, you know, wordsmithing and editorial work. I don't see there being huge technical issues with making progress on this document, which is, I think, what most people would expect, because it's just a document update, and we've been careful not to extend the scope of this work in any way. So, yeah.
J: A note on the draft: it seems to indicate that part of the goal is to protect against buggy compression methods, but if I'm looking through the examples and understand what it's computed over correctly, if you had, say, a proxy that's taking in gzip-compressed data on one side and pumping out brotli on the other, it's going to change the digest. It's not computed on the content, but rather on the content-encoded data, and so I don't think you actually protect against a bug in the gzip or brotli implementation.
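The point can be sketched concretely. Since brotli isn't in the Python standard library, gzip stands in for the transcoding step here, but the effect is the same: a digest computed over the encoded bytes changes whenever an intermediary re-encodes the payload, even though the underlying content is identical.

```python
import base64
import gzip
import hashlib

def digest_of(data: bytes) -> str:
    """Digest-header-style value over whatever bytes are handed in."""
    return "sha-256=" + base64.b64encode(hashlib.sha256(data).digest()).decode()

payload = b"hello world" * 100          # the "content"
recoded = gzip.compress(payload)        # a proxy re-encoding it

# The two digests differ, so a digest over the encoded form cannot
# survive transcoding, which is the concern raised above.
assert digest_of(payload) != digest_of(recoded)
print(digest_of(payload))
print(digest_of(recoded))
```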
L: That's absolutely possible; I am confused at this point. This might be something Watson and I can take offline, and, yeah, I can see that maybe we need to discuss through some of the scenarios here, and I'll take a task to reflect anything back into an issue or a list item. Is everyone happy with that?
M: The first update is related to RFC 6265bis specifically; that continues to plug forward slowly. I promised Mark that I would have it done by the end of Q1; I am now looking at the end of Q2 in despair. It is not quite done yet. I think we are in the fixing-nitpicky-issues stage of things, and I would note that web platform tests have been quite helpful in giving us insight into the way that different browsers treat cookies today, and in helping us understand the ways in which browsers have diverged from the spec
M: on the one hand, and the ways the browsers have diverged from each other on the other. I think we've made good progress, especially in the -05 and -06 versions of the draft, at fixing a couple of long-outstanding issues, and my expectation is that we'll be able to fix the remainder in a relatively short period of time. I've said that before, but this time it might actually be true.
M: If you look at the web platform tests, browsers generally are beginning to converge on a set of behaviors, which I think is good. The web platform tests for the domain attribute in particular are pretty buggy (the tests are buggy, not the behaviors), something that requires a couple of infrastructure changes, or one infrastructure change, to web platform tests to allow a test to begin on a different domain than it currently does. But it's quite reasonable to do, and it seems like something we'll be able to get done.
M: It turns out that browsers, or user agents generally, that have shipped support for SameSite have done so based on different versions of previous drafts, which means there's some divergent behavior between user agents that I think we need to come to agreement on. I expect us to be able to do that, because many of the questions being asked in those issues have reasonably clear answers, or already have agreement between at least two widely used user agents. There are some other, larger outstanding issues, like support for UTF...
M: Excuse me: support for UTF-8 in header values. That is a large problem, and one that I think is going to be non-trivial to change within the context of this update, and I wonder whether it's something we ought to spend time on and put in scope for the bis, or whether we should continue punting it down the road for someone else to deal with. Like many issues around cookies, that feels to me to be important but not urgent.
M: With regard to changing the default value of the SameSite attribute: we were almost able to get SameSite=Lax as a default out the door earlier this year. We ended up rolling it out to about 50% on stable, and we rolled it back in early April due to some unexpected breakage at a particularly bad time for web developers. We have it rolled out at 50% for non-release channels, that is, Canary, Dev and Beta, and my expectation is that we will try to roll it out to stable again, probably over the summer.
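For anyone following along, the change being described concerns the SameSite attribute on Set-Cookie. A small illustration (the cookie name and value are hypothetical) using Python's standard library, which has supported the attribute since 3.8: under the new default, a cookie set without any SameSite attribute is treated as if it had been set like this, and a site that still needs the cookie on cross-site requests has to opt out explicitly with SameSite=None plus Secure.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"           # hypothetical cookie
cookie["session"]["samesite"] = "Lax"  # the new implicit default behavior
cookie["session"]["secure"] = True
header_value = cookie["session"].OutputString()
print("Set-Cookie:", header_value)
```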
M: I would also note that Safari has begun blocking third-party cookies entirely as of late March, gating access to those cookies on the Storage Access API, which is being discussed in the privacy community group in the W3C. So if you're interested in that, it would be a good idea to hop over to that community group.
M: Finally, I would note that the editor's draft of Incrementally Better Cookies is probably interesting to some folks on the list. I've posted a couple of updates to the working group mailing list, but I would point people in particular to the scheme-bound cookies proposal, as well as the schemeful same-site proposal, that are described in that draft. And I would particularly invite feedback on section 3.6 of the draft, which tries to propose a different definition of what a "session" means than the one we currently have in most user agents.
F: Coming at this from a service provider and identity provider standpoint: a lot of the changes that are talked about here with SameSite, and the other non-RFC changes that you talked about, are potentially impacting legitimate cross-origin use cases, and have tended to trip up identity providers in some use scenarios, people doing OAuth and OpenID Connect.
M: I have two answers to your question. One is that the changes that browsers, as user agents in particular, are making are broad in scope and go well beyond cookies and well beyond HTTP. A lot of the conversation around those changes, more generally, is happening in the privacy community group in the W3C, and also the Web Incubator Community Group in the W3C.
M: The second answer is more specifically about identity. I think identity is something that is clearly broken by some of the changes browsers want to make to the way that state on the web is persisted, and I also think there's a reasonably well-shared understanding that that breakage is undesirable.
M: The challenge is to find a way to maintain the good stuff that we like about transferring identity, transferring assertions of identity between entities on the web, while not enabling the kind of wild west that we see on the web today, given the side effects it has and the way those side effects affect users. I think it would be quite reasonable for user agents to have some distinct primitive for identity.
F: Thank you for that, I appreciate it. I know one challenge of looking for a browser-level or user-agent-level change like that is the obvious deployment challenges, but that's one where we can pursue the conversation, probably in the W3C spaces; I think you're probably right that that's where it would happen.
F: I would ask, as far as 6265bis goes: I haven't deeply read through it, but I don't recall seeing any kind of thorough examples around the behavior of SameSite. Maybe I missed them, but that's something where I think a clear set of test-case-style examples would be really helpful for website developers to understand how the different changes are actually going to affect them.
K: Last question, about the future of cookies on the web. One thing that I think is really clear from all of this is that this is going to be a moving target for the next little while, potentially almost indefinitely, given the nature of the beast. So what I'm curious about is where you see the end point being for this, personally; there may be other people who have other opinions as well.
M: So I think you're entirely correct that there's going to be iterative change over time and that it's difficult to say today what the exact end state is going to be. I personally think that an end state along the lines of the HTTP State Tokens proposal would be pretty reasonable. There are some things I would change about that proposal, and I intend to write those up at some point, but that direction still seems good to me.
K: I think this is a good answer, one that the IETF is really uncomfortable with, which is to say that this will take time, and there are a number of points in time at which the documentation will match reality, but then reality will pull ahead again, or the documentation will pull ahead again, in different ways, and we'll just have to work out how to deal with that. But we're not going to be publishing an RFC that says what the state of the cookies world is.
K: Yeah, I don't think anyone expects anything, but if the goal is to eventually continue to push the cookies thing towards something that's a little less weird, then that would be good. Sort of coming back to this UTF-8 issue: I'd have to refresh my memory and dig up the issue, but if the behavior of browsers is consistent but not documented, I think we're in a bit of a poor state.
M: One thing I would note here is that there is a real opportunity for someone who cares about that kind of problem, and who has free time, to take it on. To be perfectly blunt, that's not something that I have time to do, nor is it high on my priority list; although I agree with you that it is important, it doesn't feel urgent to me, going back to the distinction I mentioned earlier.
M
C
C
A
C
Content-Length is weird: yes, yes, it is. I'm going to talk quickly about this draft, and hopefully we'll have time for some substantial discussion about it. So I filed an issue on HTTP core earlier this year with the title "Content-Length is weird", and that eventually became a draft, and that's the draft for today, at version zero-zero.
C
In a nutshell, and we'll get into it here: it's weird because it serves more than one purpose. We use Content-Length in HTTP/1.x for message delimitation. That is, by nature, extremely security-sensitive; there are lots of different kinds of attacks that can take advantage of control over message delimitation (that's how request smuggling works), and so, typically, it's not under direct application control.
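To make the delimitation risk concrete, here is a minimal illustrative sketch, not from the meeting: it only constructs an ambiguous HTTP/1.1 message in which Content-Length and Transfer-Encoding disagree, and sends nothing.

```python
# Why control over message delimitation is dangerous: two hops that
# disagree about where this message ends will see different requests.
raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 28\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n"
    b"\r\n"
)

# A hop honoring Content-Length treats all 28 body bytes as one body.
# A hop honoring Transfer-Encoding sees an empty chunked body and then
# "GET /admin ..." as a separate, smuggled request.
headers_end = raw.index(b"\r\n\r\n") + 4
body = raw[headers_end:]
assert len(body) == 28
```

This is why implementations tightly constrain who may set these fields, rather than exposing them to applications.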
C
We constrain lots of different HTTP implementations; we constrain access to Content-Length, and it's all used for this purpose in 1.x. But we also have cases where Content-Length is used to set expectations about size. So, for example, when you want to send a POST, a lot of servers require you to send a Content-Length so they can decide whether or not they want to accept it.
C
They can preallocate some data, or whatever it's going to be. Or, on the client side, when you're getting a response, a browser or another client can use your Content-Length. With chunked encoding, or with HTTP/2 framing, you don't have that capability; you just have how big the next chunk is. And great precision is not necessarily needed for that. It's not security-sensitive, it's just setting expectations, and so, you know, we talked about the security aspects of this.
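The client-side use case, driving a progress indicator when the framing itself gives no overall length, could look something like this sketch. The idea of capping at 99% is an illustrative assumption, since an advisory value is only a hint.

```python
def progress_percent(received_bytes, advisory_length):
    """Best-effort download progress from an advisory (possibly imprecise) length."""
    if not advisory_length:
        return None  # no estimate available at all
    # Never claim completion from the hint alone; wait for the stream to end.
    return min(99, received_bytes * 100 // advisory_length)
```

For example, `progress_percent(50, 200)` gives 25, and overshooting the hint just pins the bar at 99 until the stream actually finishes.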
C
It's to create a new header field for conveying an advisory length, for those latter use cases, and to leave Content-Length just for message delimitation. Or, well, some people use it for that other purpose; we're not going to stop them, we can't. But it's to try to separate those, so we don't have the same constraints and the same guardrails around this advisory length, so that it can be exposed to applications a little more freely. The name is a bikeshed; that's why it's called a bikeshed.
C
It has the same syntax as Content-Length. We would probably make it a Structured Field, with no constraints about when it can and can't be sent, and no constraints on its interpretation; the presumption being that, you know, if you send this header, for example, in a POST request, the recipient server will look at that and say: oh, he says he's going to send me three gigabytes, I guess I'll accept that. But then, as you're reading the message, you make sure that he doesn't send much more than three gigabytes.
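That recipient behavior, accept or refuse up front based on the advertised size, then enforce it loosely while reading, could be sketched like this. The header name "bikeshed-length" and the slack policy are assumptions for illustration, not anything the draft specifies.

```python
MAX_ACCEPTABLE = 4 * 2**30  # refuse anything advertised above 4 GiB

def read_body(headers, chunks):
    """Read body chunks, holding the sender to its own advisory length."""
    limit = None
    advisory = headers.get("bikeshed-length")
    if advisory is not None:
        declared = int(advisory)
        if declared > MAX_ACCEPTABLE:
            raise ValueError("advertised size too large; rejecting up front")
        # The value is advisory, not a delimiter, so allow some slack.
        limit = declared + declared // 10 + 1024

    body = bytearray()
    for chunk in chunks:
        body.extend(chunk)
        if limit is not None and len(body) > limit:
            raise ValueError("body greatly exceeds the advertised size")
    return bytes(body)
```

Note the contrast with Content-Length: a mismatch here is a policy violation the application can decide about, not a framing error.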
C
So the question, then, for the working group (we talked about this on the mailing list for a little while, and there seemed to be a reasonable amount of interest): is standardizing this header field helpful? And, if so, should it be in the semantics draft, because we're working on that and it's relevant there and it's a relatively small addition, or in some separate draft by itself?
C
A
K
C
N
K
Well, you know, this seems pretty clear as an extension to me; I don't think there's anything that requires it to go in the core semantics document. And I'm a little uncertain about how much someone can do with this: what if someone sends you three gigabytes and then the file is three bytes long? Why is that wrong, right?
C
K
Here I agree. I think that's clearly the use case we were considering here, and it seems perfectly reasonable to do that. I see a lot of cases where you get an undefined amount of data left remaining, because someone's used chunked encoding or what have you and they didn't provide a Content-Length, and maybe they did that because they really weren't sure how much was there. But if they could say it was maybe 100 meg, then that would be quite useful, I think.
C
K
H
One possible solution: how long do you think it would take to get a document with just this in it out the door, versus how long it will take for the semantics document to get respun for Internet Standard? The answer might be to get this out, get some experimentation, get some implementation with it, and then, by the time the semantics document is done, you know whether you want to roll this into it or not.