From YouTube: HTTPBIS WG Interim Meeting, 2021-02-11
A: So that's the Note Well. Our agenda for today: we have a minute taker, blue sheets have been mentioned, and we've done the Note Well. The agenda for today is just discussion of HTTP core issues and then the proposals, of which we have one, which is the CDN-Cache-Control header. I think I'd also like to inject here, if we can, a 30-second update on Cache-Status and Proxy-Status and BCP56bis. Yeah, let's start with those. Okay, does anybody have anything?
A: Okay, not hearing any bashing. Okay, so: BCP56bis.
A: We actually went to last call on it a long time ago and got consensus, so we decided to park it until core shipped. In the meantime, I went back and looked at things, and we got a few issues filed on it from folks, and so we've done some fairly substantial rewrites of bits of it: not huge, but not small either. And I've more recently gone and adjusted it, or started to adjust it, in terms of making sure the references to core were correct.
A: I probably want to do one more pass on it, and I think it's probably wise to do another working group last call on it, since it has changed. So I'd actually be interested if a couple of people could go and read it and file any issues they see.
A: Ish, if that makes sense. Well, I want to get it out the door, but I want to make sure that it still represents our current thinking. And also, since we did this work, the HTTP APIs working group has been chartered. This document's a little bit different, in that it's targeted really at IETF efforts to create HTTP APIs or use HTTP for things other than browsers, but I think it'd still probably be courteous to pass it by them with that context and see if they have any comments.
B: I was going to start it now that we've finished last call on core.

A: Okay.
A: Okay, any questions on any of those three?
A: Okay, so let's move on to HTTP core.
A: Can people see the issues listed on the screen?
A: So we have, I think, still 28 issues open. Of those, we have proposals for closing three; I think probably a handful of the rest are editorial. These are the issues that the editors thought would benefit from discussion at this meeting, so we can go through these. Having said that, if people want to discuss any of the other issues, I think we have plenty of time to do that.
A: So maybe what we should do is start by going through these issues, and then take a look at the rest of the issues list to make sure that nobody has any other input on those, if that makes sense.
A: So, Willy says the spec says intermediaries that process HTTP messages must send their own HTTP version in forwarded messages, and he says he'd rather say they must send a version no higher than their own in forwarded messages.
A: The reason being that he wants an intermediary, when it forwards a message, to be able to drop down versions, to kind of advertise that perhaps one of its peers doesn't support a higher-level protocol. And I think there are a couple of aspects to this, from my...
A: This was always a pretty firm requirement in HTTP: that you send the highest possible version that you understand, and you don't try and anticipate what your downstream peers can do. And we should have a discussion about that, because part of that mechanism was relying on the fact that Via would advertise what the people upstream of you, rather, would be capable of, and in practice people don't tend to send Via, unfortunately, for whatever reason. And the other is... what was the other bit?
E: Well, I mean, it's basically what you said, and the main reason it's a must is to encourage people to send the version they actually support, because otherwise clients will send a safe version, what they consider to be the safe version, first, and wait for the server to indicate first whether it actually supports a higher version or not. And in practice the servers will then send back a safe response, because they don't know what the client's version is either; and it happens regardless of whether we're talking about intermediaries or origin servers.
E: So that's why it said must: because we wanted to basically insist that clients send their highest, their best version, and that when people responded, even if they only used the features of 1.0, they would say whether they supported 1.1 or not. That's what the history was, and it worked.
E: Now, in terms of the change: we could certainly change that must to a should, or make the change that Willy requests, which is "must send a version no higher". I mean, that's certainly a valid requirement to make, but we also have to recognize that the version of the protocol is intended to perform that additional purpose, so...
C: So I think the requirement of "must use your highest version" is fine for clients and servers; I think for intermediaries in particular...
G: I think that's the hard requirement that I think Willy's trying to get at. And the second one is Roy's point, which I think was really very good in the 1.1 era, maybe less so when we're talking about 2 and 3, given the way that the decision making goes.
A: So the other thing that I thought earlier was that I'm assuming this only applies to effectively minor versioning in HTTP/1, because it's in the HTTP/1.1 messaging document; it doesn't really apply to 2 or 3. But personally, I could see maybe some wording around intermediaries.
A: You know, intermediary-specific wording about requests when there are buffering considerations, acknowledging something along the lines of: if an intermediary feels that it can't rely on Via, or its peer can't rely on Via, to understand whether or not...
H: Are we mostly concerned here about chunked transfer coding and, like, the default connection keep-alive state, or is there some other 1.1 feature that 1.0 can't do? I mean, the way I view it is: if I'm the upstream half of an intermediary and I am sending a message that's 1.1, even if the client that I'm acting on behalf of is 1.0, then I'm agreeing to downgrade that myself, and if I'm unwilling to do that, I shouldn't be sending a 1.1 request in the first place.
A: That's what Willy's saying, effectively, yeah. But the original design was that you would just append a Via header that basically says the downstream, or the upstream, peer (I always get those mixed up) is 1.0, and then the server knows that it should try and buffer and send the content appropriately. That's...
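The Via mechanism described above can be sketched roughly as follows; this is a non-normative illustration, and the pseudonyms ("fred", "p.example.net") are the example names used in the HTTP/1.1 spec, not anything from this discussion:

```python
# Sketch: an intermediary appends its own entry to the Via header,
# recording the protocol version of the hop it received the message from,
# so downstream recipients can see which versions earlier hops speak.

def append_via(headers: dict, received_proto: str, pseudonym: str) -> dict:
    """Append a Via entry for a hop that received `received_proto`."""
    entry = f"{received_proto} {pseudonym}"
    if "Via" in headers:
        headers["Via"] = headers["Via"] + ", " + entry
    else:
        headers["Via"] = entry
    return headers

# A 1.0 client talks to proxy "fred", which forwards via "p.example.net":
h = {}
append_via(h, "1.0", "fred")
append_via(h, "1.1", "p.example.net")
print(h["Via"])  # 1.0 fred, 1.1 p.example.net
```

A server receiving this can, in principle, see that a 1.0 hop is in the chain, which is the buffering signal being discussed.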
D: Cory. I mean, I think my question is a variation on Alan's just there, which is: not only is it not a requirement, but is anyone aware of a server that behaves this way? I'm unaware of a server that checks Via before it decides what features it should use when sending responses. I know that I'm not.
E: Yeah, I'm pretty sure that Apache doesn't either, mostly because we were trying to force that evolution. There are conditions under which we change our own version back to 1.0, specifically when we know that the client is advertising 1.1 but is not compliant with the...
E: It's worth thinking about, and seeing if we can come up with a better way to phrase it. I don't really have one, but without the actual proposal there's not much we can do until I actually write it.
A: So there's been a lot of back and forth about whether midstream trailers are something that we should accommodate in the spec. This discussion started, I think, in the QUIC working group, and then we added something to the core specs to accommodate it, if one of the versions of the protocol wanted to do midstream trailers, or multiple trailer sections interleaved with the body.
A: Sorry, content, I think. And the point is that no current version of HTTP has the capability to do this, and I guess the question is: do we want to include it? We're effectively changing the signature of what HTTP is, in terms of how people conceptualize it or how an API surfaces it, by adding this. Is that something that's appropriate to do when we don't have any implementation, in terms of the specification of a wire protocol? Discuss.
G: Martin here. Yeah, I don't know what Cory's saying no to, but here's my position; it was pretty simple. I looked at the document, and it was very, very clear that it was describing a protocol that was deployed, apart from this bit, and this was basically the only thing that had no basis in reality, and that was a really jarring point in the document. I think this is something that could be done as an extension.
D: For the same reasons that Martin just outlined: I have no objection to the feature in general, and I'd be more than happy to see the working group adopt an extension.
E: I disagree. Well, I think Martin is correct in every respect except for that desire to get it done. But I'm certainly willing to accept the working group's opinion on this. It's something we can make proposals on, but if the working group doesn't want to do it, then that's it.
J: Well, the thing that trips me up a bit is the fact that we actually put this into the core spec, because in the QUIC HTTP work the decision was made not to put it into H3, because the core specs did not define it; and then we defined it, and then we were too late, apparently. But there's a reason why this was put in, and as Roy pointed out in an earlier comment, there is an H2 extension that does this, so I really see no reason to remove it now.
A: Oh, just my personal two cents: H3 did not defer from doing this work because it wasn't in core; it didn't do it because there wasn't concrete implementer interest. I'd characterize the discussion as: it was brought up, people said "oh, that's a cool idea, maybe we should do that someday", and then interest was lost.
A: I have yet to see somebody saying "I want to put this in my HTTP implementation today", and I am extremely concerned about doing this in core, because there is already a lot of uncertainty and fuzziness about how trailers work in the world, and they are borderline unusable. We've done a lot of work in core to try to make them more usable and more consistent, and I don't want to add more fuzziness to it.
A: When somebody reads a spec and says "oh look, trailers can do this", then they find they can't; or somebody creates an API that tries to expose this, or doesn't, and does it in different ways. We need interop to put something in the spec, and we don't have interop, especially when we want to go to full standard.
A: So I think my preference would be, at most, to put a note in the spec that a future extension might do this; or, if we really have to keep it in the spec, to confine it to a note and separate it from the other language about trailers, to make it clear that it's not interoperable, it's not available yet.
B: My sense is that if someone wants to do this as an extension, there's nothing stopping them from defining it; even if semantics doesn't define it, someone can extend that. Would you have a strong objection to having this be pulled?
E: Well, for me, the question of whether trailers come in the midstream would have to be, in terms of an extension, independent of the existing stuff that we have now, which isn't terribly bad. I mean, it gives you a little bit more freedom as well. It's just that we would have to essentially close down the headers and the trailers as they are today, and go back and revisit that text to do it.
E: But besides that, not really. Basically, we would close down the existing terms and then do as Martin suggested, which is develop an independent extension that would support all the platforms.
C: And from my standpoint, I think characterizing my position as "we should keep this text" might be a little strong. Sorry, I don't mean that, but I understand, yeah. If this is something the working group wants to pursue, I think it is reasonable to have it in semantics, called out that in abstract HTTP there may be multiple instances of field sections here.
B: My concern, to some degree, is that two is hard enough, given that so many people assume one is defined. And I totally agree with you on that, about encouraging good implementations, because assuming that they read this nuance and then made their code handle N sections of fields is impossible to prove until they try it. Okay, anyway, the queue is growing.
E: To be clear, this only defines how you interpret it if you receive it; it's not that you have to. You'd also have to have the mechanism to send, and we're not defining a mechanism to send these, right? You can do it using 2 or 3 with extension frames, but you can't do it...
L: In 1.1, some metadata can be sent if the body is chunked-encoded. For example, chunked encoding has extra metadata: name-value pairs can be assigned there, and there's an opportunity to send something in there, but not with a buffered response.
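The chunked-coding metadata mentioned here refers to chunk extensions: an optional ";name=value" list after each chunk's size. A minimal sketch of formatting one such chunk (the extension name "meta" is made up for illustration):

```python
# Sketch of HTTP/1.1 chunked coding with a chunk extension.
# Wire form of one chunk: <hex size>[;ext-name=ext-val]CRLF <data> CRLF

def format_chunk(data: bytes, extensions=None) -> bytes:
    ext = "".join(f";{k}={v}" for k, v in (extensions or {}).items())
    size_line = f"{len(data):x}{ext}\r\n".encode()
    return size_line + data + b"\r\n"

chunk = format_chunk(b"hello", {"meta": "demo"})
print(chunk)  # b'5;meta=demo\r\nhello\r\n'
```

In practice, as the discussion notes, almost no implementations surface chunk extensions to applications, which is part of why this path carries metadata only in theory.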
D: Cory. So, backing up a couple of seconds to Mike's question about the semantics doc: my recollection of what Mike said (good thing I have the notes in front of me) is that if semantics says there are exactly up to two field sections, period, and we later introduce three or more, not every client will be able to interpret that. I don't see us...
D: If we write in the semantics document "there may be N", I don't see us avoiding that problem at all. I think Mark asked in chat: would any implementation change its API as a result? And I think this is exactly the right question. Who writes an API for a feature that doesn't exist in any deployed version of the protocol?
D: You would have to change your wire implementation of the protocol to support these features if they were added; at that point you would also tackle the semantic question of how they operate. But there's no point: no one's going to add code paths for hypothetical emitters that no one has deployed, and there is no evidence that anyone will. It just seems like a very unusual way to build a critical implementation.
A: It just occurred to me, based on what Cory said, that this is starting to have the smell of what I tend to refer to as hook-based standardization, and I feel like, without implementation experience, I don't think we can have any confidence that we're actually going to do it right.
E: I'll take an action item to strike it, if you want, and put a proposal in, and then, if we decide to strike it, we can strike it. Okay.
A: Okay, 733: arbitrary limitation on authentication parameters. So, another fun one.
A: So I noticed that semantics requires credentials to be either token68 or auth-params, and my understanding of the history here is that token68 was put in to backport for Basic. But that's a very limited set of characters, and I was wondering why it doesn't allow colon, for example, because that rules out an authentication scheme carrying just a bare URI, which is maybe useful, for example, in a Bearer authentication scheme. And I was wondering if we could open up that character set, obviously excluding any delimiters that might confuse the parsing. Julian pushed back.
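The restriction under discussion is the token68 rule from the HTTP authentication grammar (letters, digits, "-", ".", "_", "~", "+", "/", then optional "=" padding). A quick sketch showing why a colon-bearing URI such as a secret-token URI does not fit:

```python
import re

# token68 per the HTTP authentication grammar:
# 1*( ALPHA / DIGIT / "-" / "." / "_" / "~" / "+" / "/" ) *"="
TOKEN68 = re.compile(r"[A-Za-z0-9\-._~+/]+=*")

def is_token68(s: str) -> bool:
    return TOKEN68.fullmatch(s) is not None

print(is_token68("dXNlcjpwYXNz"))         # True: base64, as produced by Basic
print(is_token68("secret-token:ABC123"))  # False: ":" is not in the set
```

The example token value is made up; the point is simply that the colon, which every URI scheme prefix needs, is outside the allowed set.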
E: Well, the question I would have is: we've had this requirement in there for almost 10 years now, and what are we going to break by changing it?
A: Well, and I guess that's part of it. I would have a hard time believing that any software out there is actually checking to make sure that an unknown authentication scheme conforms to token68.
J: So, when we did that seven or eight years ago, the intent was to have a syntax that actually is compatible with Basic, but to discourage the use of that syntax for any new scheme, because any new scheme that chooses that syntax will be non-extensible. You can't have both: it's token68 or the list of auth-params, either/or.
J: The idea was to make the existing set of authentication schemes actually conform to the grammar, but to make sure that everything new actually uses auth-params, and I think that is a good goal. I don't see why there's a need to change something here, because if you say "why don't we allow a colon here? Somebody might want to use that": the whole point of this design was to discourage the use of that syntax.
A: I think the problem for me is that I would like to be able to use a URI in a bearer token, because we just published the secret-token URI scheme, and we already have people reaching out to try and do that, and you can't really do that according to the specs, because of this limitation in token68.
A: And, as you point out, Bearer preceded the spec; it still conforms to the basic syntax. So I suspect what's going to happen is that people are going to want to use URIs in bearer tokens, and the spec is going to say "oh, but you can't", and they're going to say "well, no software is actually going to restrict that, I'll just go ahead and use them". And in fact, that's the discussion I've already had with one implementer, who says: this looks really cool, I want to use these URIs in bearer tokens.
A: I think that's the root of the question here. Well, or, sorry, the third option is to have implementation diverge from specification, which is where we're at right now, I mean.
G: So I thought Julian's argument there was kind of interesting, in that there is a path to including a URI in an authorization or challenge, or what have you, and all it requires is a little bit of quoting: you put x=, then double quotes, then the URI, then you close the double quotes, and everything's happy.
G: Yeah, I don't think anyone actually cares about Bearer per se; that's my understanding, because it's just a tag.
A: Yeah, but people have a significant amount of deployed software and practice around it, and I guess my instinct here is that it's easier for the few of us in this room to adjust the spec than it is for the multitude of people who are already using Bearer to adjust their practice.
K: Yeah, so my question here is (and I may just not be understanding the layering of the specs here, so apologies): wouldn't having a more open definition in semantics allow Bearer to effectively constrain it just for its own space? Because isn't that what's being requested by this issue, to remove the token68 character restriction here, to open it up to add some more characters? Right, yeah, exactly. So Bearer would be able to effectively still exist and specify the subset. Right, right. So we might need...
K: In their respective specs, but yes, right, exactly, which we could bring up with the OAuth working group, where I do a lot of my work. And honestly, thinking back to when we wrote 6750, where that comes from, I don't think there was a lot of debate about the character set; it was just "let's pick something that we know will probably work and put it there".
K: So if that does need to be updated... One, I agree, I don't think that actual implementations will notice if the spec updates that; and two, I think that keeping the core semantics specs such that they are not limited by a potentially arbitrary decision made by the OAuth working group many years ago is the right thing to do.
A: Well, I'm a little hesitant to start a liaison, in the sense that if we try to get a sense of the OAuth working group, I think we're going to add a significant delay to publishing these documents, and I don't think anybody really wants that. But yeah, as Martin says in the chat: Justin, if you want to, you can carry a message about what we do, if we decide to make a change.
K: Okay, I will point the group at this issue and this discussion, that the text in 6750 is causing some potential headaches in HTTP, because that's not what we meant.
D: I just wanted to note that I think it might surprise a surprising number of people using Bearer that there is a spec for it. I think a wide range of Bearer implementations just assume that you write the magic word "Bearer" and then you put something there, and that is the magic auth way.
J: So I'm not sure whether it's happening yet, but if the intent is to use Bearer with a URI instead of a base64 something, hopefully we agree that this breaks the syntax definition, both in the HTTP spec and in the Bearer spec.
J: So even if we did change that in the HTTP spec, it would still break the Bearer spec. The way to fix that is to fix the Bearer spec to allow auth-params instead of token68, and then you can send all your URIs, and multiple URIs, and additional parameters as you like, and this does not require a change to HTTP. It does require a change to the Bearer spec.
E: Yeah, I guess my problem is that it's also there to make it possible to parse the field unambiguously, because a URL could also contain equals signs and look like a parameter as well. So it's not as simple as just changing it. I'm a little frustrated that this even needs to be discussed, because I don't see a need for the change.
A: Mark here. So, to respond to Julian: we could certainly update Bearer, we could certainly go through those steps, but I would posit that no implementation is going to pay attention to that. It is widely deployed.
A: So, given that that is the way things are going, I would rather see the specifications reflect the reality than what we would like the reality to be. And regarding what Roy said: I'm not proposing we allow equals or anything else that makes it ambiguous in the field, so it would effectively be a constrained URI. It couldn't be just any URI, by the specs; I suspect people will still use any URI and it'll work just fine, but, you know, for spec purposes.
A: And someone asks, why not base64 your URI? I'll answer that real quick: because the whole point of secret-token is to make it easy to recognize leaked secrets, and if you encode them, you can't recognize them.
K: Yeah, so, sorry, I did a little bit more historical digging, and just for this group's insight: the reason for this restriction in 6750 comes from the fact that OAuth tokens were designed to be also passed as form parameters and as query parameters; as such, they needed to be URL-safe.
K: So yeah, I don't think that HTTP should change in order to facilitate that, but if we can at least, in OAuth, have the guidance for token construction and Bearer header usage aligned with this reality, I think we'll be better off.
A: I just thought that it might be worth discussing. So, Tommy, however you want to declare consensus here will work for me. I think my preferences are known.
A: 729. Martin says this text says the proxy doesn't store things like Proxy-Authenticate unless the cache key includes the proxy identity. I just wanted to do this live to make sure that you weren't misunderstanding, Martin. This was talking about the cache being co-located with the client, not the proxy. So it's about: when I make a connection and I'm computing a cache key for something, whether the identity of the proxy I am using as a client factors into the cache key. Is that what you were thinking, or?
G: I think you're pointing out that this was a client-side cache, which was not obvious to me from context, but maybe I wasn't reading it carefully, and I don't have the context on the screen. So, okay, as long as that's clear, then that's okay. But I read this as a proxy cache, and at the point that the proxy puts its own identity into the cache key, that's kind of pointless.
A: So the higher-level issue here, I think, is: how do we add preconditions to the protocol in a manner that can be relied upon by the client? Martin, do you want to talk us through this a bit?
G: Yeah. So, having sort of gone through the exercise of trying to define a precondition: when I was reading through this section, there wasn't a lot of support for anything in that area, and there's a very clear algorithm that you would follow, that said:
G: do this, then do this, then do this, then do this. But it didn't really contemplate the possibility that there were other preconditions, and there's a couple of things in there that sort of stood out when you read it that way, and I think probably that quoted text is the worst offender. But there's a bunch of other things that do mention it, like...
G: So there's the text that Roy quotes regarding the WebDAV If header, the sort of generic precondition, and so there is some text on adding preconditions, but I don't think there's quite enough support, and I think it's a little difficult to work through.
A: I mean, from my perspective, just adding preconditions is hard, right? There are a lot of trade-offs and there are no certainties. So: Roy self-assigned this. Roy, are you going to write up a proposal for this, like a new little section, or?
J: The other thing is, as Martin pointed out, the fact that it's tricky to define the interaction if you have multiple conditional headers in a request. And in the case of the WebDAV If header, that hasn't been a problem in practice, because for WebDAV, when you do authoring of resources, the assumption always was that you have ETags.
J: So all the Last-Modified stuff is irrelevant anyway, and the functionality of the ETag-based conditionals in HTTP is actually part of the If header field. So in the WebDAV design, the idea was that you would never, in practice, need to combine the If header field with any of the other conditional header fields; it would always be only the If header field, because the If header field lets you express all the conditions that you have on ETags anyway.
J: So, just as context: I'm not claiming that it works very well, but that was the design.
A: So, Martin pointed out that... oh, sorry, this is a different one. This is about whitespace: "whitespace in raw field values is removed when fields are parsed", as part of semantics.
G: An editorial comment, really. I think it just needs to say that they might be, or maybe that a particular version may specify rules that cause whitespace in field values to be removed; either of those two. I'm not sure it's...
E: Yeah, I think what's happened here is we changed the wording a little bit, so it looks like we're actually physically removing the bytes, as opposed to what it's supposed to say, which is that the field content excludes that whitespace. So this is essentially what happens when you interpret the field: whenever you interpret the field, you always skip the beginning whitespace and don't include the ending whitespace.
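The interpretation being described here, that the field value excludes the optional whitespace (space and horizontal tab) around it rather than the parser rewriting the bytes, can be sketched as:

```python
# Sketch: a field line keeps its raw bytes, but the *value* a recipient
# interprets excludes leading/trailing OWS (space / horizontal tab).

def parse_field_line(line: str):
    name, _, rest = line.partition(":")
    return name, rest.strip(" \t")  # skip OWS at both ends; inner bytes kept

name, value = parse_field_line("Host:  example.com \t")
print((name, value))  # ('Host', 'example.com')
```

Note this trims only surrounding OWS; whitespace inside the value is untouched, which matches the "field content excludes that whitespace" reading rather than any physical removal.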
G: So that's very different from what I'm reading here, which is that messaging itself defines the whitespace as part of its internal structure, and so the value does not actually include the whitespace that surrounds something in messaging in 1.1.
E: Right, so we are agreeing with the issue, in the sense that, yeah, we should fix that. Okay, good.
G: A question, yeah. So I suspect this is not implemented as stated, though there are various ways in which it might be. If there are carriage-return line-feeds in header field values, I suspect that some implementations are going to treat that as bad, and the null character, the zero byte, might similarly be regarded as bad. But I'm not sure it's true that, if something doesn't fit the grammar of a particular header field, implementations will be dropping those.
J: But that actually wasn't my point. My point is that the spec says valid characters are defined by the field-content ABNF rule of HTTP, and it says that if a character is not permitted in the field value, the request must be treated as malformed. So leading and trailing whitespace is not allowed in field content, and I was wondering whether any H2 implementation enforces that.
G: Not that I'm aware of, but that was always the intent; the intent was to make sure that the leading and trailing whitespace was removed. I think we need to open an issue on H2 there. Yeah.
D: Just to follow up to Martin: I have a vague recollection of having filed some issues against core on this as well. It's not even clear to me that those characters are effectively policed in H1, let alone in H2. There are definitely cases of CR in header field values, for example, being allowed through in existing implementations.
B: Julian's still in the queue, or, I guess Julian had spoken before Cory, so I didn't know if he had more. Sorry.
B: So if you come back again... all right, all right.
G: This is just another one of those things: it's just weird for someone to say something like this in a protocol spec, particularly with 2119 language attached to it. I sort of get where Roy's coming from here, but we don't say this anywhere else, ever, in my experience, and...
E: Yes, the requirement on a sender is that the syntax they're sending actually reflects what they are stating in the syntax. So it's effectively what Martin says: you must not lie. The effect of that on recipients is that they don't have to go by the syntax alone as their basis of interoperating via HTTP.
E: They can interpret something that they know to be a lie as not standard.
E: The whole point is that standards already say something like that. DNT in particular said that, and then one company decided that no, they're going to send it anyway, because they know better. So that's a very specific example where the HTTP spec overrides their opinion about another...
E: ...spec's semantics. So it's not a question of whether this is going to force someone not to lie; it's a question of how you recover from that situation, how you deal with the consequence and still label yourself as conformant while your peer on the other side is not conforming.
A: Am I muted? You're good. So I think P3P and DNT are exactly the right specs to use as examples here. I worked on P3P, Roy...
A: ...you worked on DNT. And in the case of P3P, we had people, still today, even though it's not really implemented by any browsers, making false P3P policies so that the browser would behave in a certain way, even though they didn't mean what they were saying. And in DNT, as you well know, a browser announced an intent that was supposed to be the intent of the user when it wasn't.
A: No one looked to the HTTP spec for a requirement that said whether you are lying or not. No one looked to those specs, P3P or DNT, for a requirement about whether you were lying or not. What people looked to was the specifications and the intended semantics, and then they went and asked: well, is there a legal backup for these semantics? Is there a legal requirement to shore that up? And both failed, because there wasn't, in those contexts. And I know that there's movement in certain places and so forth, but this requirement is a no-op.
A: It doesn't do anything. It's not a matter of whether it belongs here or in the other specs; it doesn't belong in any of them, because this isn't the domain of architecture. This isn't the domain of the protocols we define. This is much more complex: it's social, it's legal, it's norms, it's law. It's not architecture or semantics.
A
A
D
E
a sentence to each other, and I say something and you hear the other thing. The only reason we have any agreement on what we talked about is because the meaning of the terms is shared by both of us. When we lose that meaning, we lose interoperability; we lose the ability to understand each other. That is the essence of semantics.
A
E
A
G
Okay, yeah. So I think Mark articulated this reasonably well. I think that when someone decides to present semantically false information in a protocol element (and here I was specifically concerned about the content of a response, that is, the representation of a resource): if a resource decides to provide information that is false, that's not a protocol problem, because the semantics are still clear.
G
The fact that the element generating the information intended to provide a falsehood is something that's on them, not the protocol, and I think we're best striking this.
E
E
to a falsehood or not, the semantics are clear in the messaging. So when you receive a message, it states a set of semantics; but if you know the person on the other side is lying to you, can you ignore what they said, and respond in a non-compliant way to what they said, right, because they lied to you?
G
E
N
Boy here. So, this is very interesting. I guess my question is: if you find out that someone's lying, what do you do about it? What's your sanction? What's your remedy? And therefore I'm intending to agree with Mark, because it seems to go well beyond the scope of the bits and bytes that we're dealing with.
E
Yeah. Guys, do remember that our protocols are not just implemented by technical people. Our protocols are examined by governments, and governments look to the protocols to determine what we expect in terms of our interactions, what we expect in terms of our communications. It's not as simple as saying, well, what can you do at that point, because you can do things at that point. You have justifications to do things at that point, if you can show that that's part of the protocol.
A
Mark. Absolutely agreed, Roy, but I don't see how this requirement helps a competition regulator or a legislator or a court make that determination. They will be looking at the concrete semantics of each individual element and making their decisions based on that; this general requirement doesn't add anything to that. Yeah, and they won't even know it's there.
A
E
D
E
A
B
B
Can we have a compromise here, in which we don't have that particular sentence as is, but we are very explicit that if a recipient receives something that they detect as a lie, they can do whatever they want: essentially, their social contract with the other peer is gone, and they can reject it. They don't have to be spec-conforming after that point; they are within their rights to respond however they want, or not respond. Does that not solve the same fundamental problem?
B
Right. I just think, if we're trying to... it feels like putting a normative requirement on the responsible party, who could ignore everything anyway, is not necessarily productive, while what we want to do is give the freedom, or, you know, the blessing, to the good party who's detecting the problem, to not be non-conformant if they reject people that they detect as doing malicious things.
E
Yeah, I can explain. The distinction is that when you go to a regulator and you say: this person is not following the standard, here's the example, here's the requirement where they are not following the standard; and you don't have a requirement associated with what they're not allowed to do, it's much more difficult than saying: look, I'm allowed to do whatever I want, because this says you're not allowed to lie. It's a minor distinction, but yeah, it does make a difference.
B
B
In saying that, oh yeah, the recipient has to do all the right things in response: like, no, we can make sure that the recipient can do whatever they want if they detect a lie. But at the same time, I think, beyond the fact that this is not really enforceable: for testing purposes I may put a fake Location header in, or whatever, just to do something, and that doesn't mean that I'm immediately non-conforming and that I can't do this if I want to, in a situation where the recipient is okay with that response.
B
B
A
That's right. I have to say, you know, I'm still surprised at Roy's assertions here, but I'm willing to give him the benefit of the doubt, in that certainly in the P3P discussions the arguments he's making, about people saying "look, I can say whatever I want in the protocol", were even made by some parties. What I'm skeptical about is that a court, or another legal authority, would pay attention to protocol conformance as an indicator of legality.
A
But on the other hand, I don't think that this requirement is actually harmful to include in the spec, beyond harming protocol designers' sensibilities, and in the priority of constituencies that's pretty low. So my question is: does anyone violently object to this, or are people just not liking the smell of it as a spec? Frankly, I don't think it does a lot of harm in this spec; it's just weird, and I'm going to leave it in unless people feel strongly about it.
G
G
I do kind of like the idea that we have something in the specification, perhaps not normative, to support the sorts of things that we're talking about here, but I don't see any normative interoperability requirement derived from this that makes any sense.
N
James, I am, I guess, new to all of this, but I posted a couple of links into the chat window that perhaps are relevant to this conversation. And I suppose the summary of this is: is this the right place to address this problem? But I do agree it's a problem. So we have the Global Privacy Control flag, which I think is an extension of Do Not Track.
N
And to me this is quite problematic, because if someone is receiving this indicator but wishes to exercise their rights, as far as privacy is concerned, in a way different to the flag being transmitted, what is their legal obligation to accept the flag, for example? I know that's perhaps not the precise scenario that was intended, but perhaps it's one that's relevant. And then, if we go to Privacy Budget, for example, a proposal in relation to the information that's sent over these protocols: it's specifically contemplating lying under certain conditions.
N
So I definitely see this as a problem and, having listened to you all, I still see it as a problem, if not more than I did 10 minutes ago. I guess I don't know whether this is the best place to address it, but it certainly feels like something that we should spend more time talking about in the coming months and year, because it's getting real, and is more of a problem than perhaps it has been in the past.
N
I would also like to support whoever said that the lawyers look at the protocols. Yes, they don't understand them, but they're increasingly looking at the documents that the IETF and the W3C produce. So this is important, and I wouldn't just sort of leave it there: if the intention is that nothing would be done about it, better to be clear one way or the other.
A
E
F
F
C
B
B
I
E
Sorry, the... I hope you left.
E
No, there are different ways of phrasing it that I'd be willing to wander past. I don't have any reason to remove the requirement, and I have no desire to; there's certainly nothing preventing me from accepting the working group's decision. But if you want my opinion, it would be a horrifyingly bad decision to remove it, and believe me, I don't say that very often.
J
had all the information that was just presented, that would help. So maybe we can have an aside somewhere that explains, maybe even mentioning Do Not Track, but maybe with just an invented header field, what this is about; and then maybe people are more comfortable with that requirement. And I think I agree with Roy that if this is not a MUST NOT in uppercase characters, then those people whom this is addressed to will ignore it.
J
O
D
is also in the queue; mine is just a wordier plus-one. I think Julian's insight here is relevant. I think the one caution I want to add is that this has got to be the most widely broken normative requirement in this specification.
D
Because of Tommy's... I mean, Tommy's note about testing is a particularly good example: we routinely lie when we implement protocols in this way. So I think the most cautious note I have to add is that language that clarifies around this might want to be very careful not to undermine the reason Roy put the MUST NOT language there.
D
E
To be clear, I mean, that's normal. It's normal to have, you know, things where you're testing, and you're lying about the testing. When you violate a requirement, it doesn't mean the sky falls. It means that the interoperability that you expect is not going to be there.
D
Yes, I agree with that. Again, it's one of those things where I think the nuance is often missed: people tend to read every normative word with the same weight, especially early on, and that can be a little bit tricky.
E
Yeah, I should probably, to be clear: the problem with DNT particularly was that people were lying, and then the servers were trying to ask the user to do things that the user had no idea what to do with, because they hadn't sent the request to begin with. So it was causing an interoperability failure at multiple layers, and there was nothing
E
the servers could do about it, because you can't expect users to know what on earth is going on in HTTP, but you can expect at least the clients and the servers to be talking the same language. So I mean, that's where it came from. Okay, but, to be clear, this particular requirement has actually been in the spec a lot longer than that; it just wasn't phrased the way it is right now, as a requirement.
C
I will point out that I was at Microsoft at the time when we were legally obligated to document and disclose any time we deviated from requirements in the protocol specs, and a MUST like this makes those conversations very interesting. But it also gives you the escape hatch; that, I think, is Israel's point. And I still don't like this MUST NOT, but I think Roy's point, that there needs to be an escape hatch when a server detects something going on, is well taken.
E
B
I mean, I think it's clear that no one feels as strongly about removing it as you do about keeping it in, and I think, through the discussion, the point makes sense. But it's also clear that, for someone doing a read-through of the document, the context isn't clear as it is, and it just raises, you know, the pedantic reader's alarm bells of, like: oh, you can't enforce that. So couching it will, I think, strengthen the point as well.
L
A
A
A
E
characters, and traditionally we sort of allowed whatever was capable of being robustly handled, even though, for interop reasons, we required a constrained character set. So we had requirements that restricted us to ASCII, but then we also said: accept anything that's okay.
G
Someone should go back and, like, we all talk about going back in time to remove all the bad things from the past, and one of the bad things in the past is the robustness principle. But Roy's probably right: once you've made the decision to accept the junk, then you have to accept the junk forever, and that's where we're at. And so I think probably the right decision here is something along the lines of what's in Fetch: having a hard requirement not for carriage return, line feed, or the zero byte, and strongly recommending that you use the things that actually achieve interoperability, which does include control characters and, actually, everything that conforms to the ABNF for whatever header field you're exchanging; and beyond that.
E
E
G
Yeah, I think so. So there's carriage return and there's line feed, and CR is a particularly tricky one.
E
J
the gist of the discussion, I think. So, I mean, the difference between what we have right now and what the published spec says is that we say that if you receive control characters (and that's not about CR, but about the others) you must either reject the thing or convert those to whitespace, and that's a new requirement that we put in. And the question is: if we take out that requirement, are we okay? Because that would be wonderful: then we would just be saying what we said before, and we wouldn't have to spend too many brain cells.
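The recipient behavior being debated here, reject the field or convert bad control characters to whitespace, can be sketched roughly as follows. This is an illustration of the discussion, not text from the draft; the function name and the exact byte classification are assumptions.

```python
def sanitize_field_value(value: bytes) -> bytes:
    """Illustrative sketch of the recipient rule under discussion.

    Bare CR, LF, or NUL in a field value is rejected outright (these
    are the bytes that enable request smuggling), while the remaining
    control characters (other than horizontal tab) are replaced with a
    space. Hypothetical, not the spec's normative algorithm.
    """
    if any(b in value for b in (0x00, 0x0A, 0x0D)):  # NUL, LF, CR
        raise ValueError("invalid field value: NUL/LF/CR not allowed")
    # Replace remaining C0 controls (except HTAB, 0x09) and DEL with SP.
    return bytes(
        0x20 if (b < 0x20 and b != 0x09) or b == 0x7F else b
        for b in value
    )
```

A recipient taking the "reject" branch for everything would also be conformant under the wording discussed; the may/must distinction for the conversion option is exactly what the group debates a few turns later.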
J
On that: I mean, allowing CR to be treated as CR has probably been asking for trouble, right?
E
Yeah, yeah: we can't do that, because, I mean, with existing servers that becomes a security hole on clients, and we're slowly trying to close that off.
E
So, basically, people are filing vulnerability statements against the servers for not removing the CR, because it now causes request smuggling in clients that, for some reason, have decided to interpret a CR as a line ending.
E
A
J
I think we need to allow the recipient to treat CR and NUL as rogue messages, yes, so we don't require them to be converted to whitespace. But what we can say is that if you don't consider those invalid, then you may replace them with whitespace instead, or must replace them with whitespace instead.
A
A
My two cents is that "historic" might make sense, but I don't even know that it does; but I don't think we need it. I think we can close this issue.
A
M
A
A
G
Martin, you're out of this; I can live with that. I just asked the question because we did this in TLS: when we defined TLS 1.3 we obsoleted 1.2, and so on down the chain. It seems like this is a different protocol.
L
J
A
F
F
F
E
Cool, that makes sense. Let's say roughly two. Did you want to talk about: should we put Upgrade in the other, in Messaging? Move Upgrade back to Messaging again? Or...
A
So there's the discussion, and we've gone back and forth on this several times, of where Upgrade belongs, and where the transfer codings belong; right now they are split awkwardly between Semantics and Messaging, because TE is used for version-non-specific semantics, unfortunately.
E
know, it also defines "protocol" and a couple of other things that are used in the document, and then, gosh, there are others. It's kind of messy to have it in Messaging, because then it has to point to lots of things in Semantics; and then, if it's in Semantics, it's kind of messy, because it's really a 1.1-only header field. I think the idea was that we put it in Semantics and we actually say, in the Upgrade header field definition, that it only applies to 1.1.
G
That would be fine. I was just looking to make sure that it had been considered. I understand the complexity of trying to extricate the necessary 1.1-specific text, but if you're willing to say that Upgrade is defined in the following fashion, but it really only applies to 101, then I think we're probably okay.
G
A
Okay. I think another motivation was that we wanted to have as few version-specific header fields as possible. If they happen not to be useful, or honored, in a particular version, I think that's fine, but, you know, version-specific...
M
G
Learned that, yes. Yes, unfortunately, this one is also very awkward, but I don't think we fix that by moving it.
J
Yeah, I wanted to comment on Martin's question of whether we have normative references from Semantics to Messaging.
J
That's actually a different ticket, I think, from the ones that he found, and I just wanted to remind people about that one: we require the response format of the TRACE method to use the HTTP message media type, and that media type is defined in terms of 1.1 messaging. There may be a few other things, but that's the one that was obvious to me: if we actually want to fully decouple Semantics from 1.1, we actually have to kill that requirement for TRACE.
E
G
So my suggestion there was that the semantic-level requirement would be that the server (or intermediary, in this case) produce a response that contains the message that it receives, and not necessarily to specify the format in Semantics; and then to have an informative reference saying that they could use the HTTP/1.1 wire format. There are plenty of other informative references to the Messaging doc for that purpose, and this would just be one more.
B
B
All right, I have to mention it: time check. Do you want to go through anything more here, or should we go to our last item?
O
A
So this is a spec that I put together with Yuchen and Stephen, who I believe are both on the call, for a new response header field that has pretty much the exact same syntax and semantics as Cache-Control, but is targeted at CDNs. And the reason for that is that CDNs now all do this themselves, in various ways, and there are subtle differences in the practice in each one. So there's a, you know, Fastly-specific way to do this; there's an Akamai-specific way to do this.
A
There's a Cloudflare-specific way to do this, as well as for other CDNs. It's a very common use case for a content provider to want to cache differently in the CDN, where they have a relationship with the cache that they control, than in other caches, where they don't have a relationship with that cache. And so having a separate control mechanism is great and really useful, and is now very common practice.
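As a rough sketch of how that separate control mechanism plays out: the field name CDN-Cache-Control is from the proposal being presented, but the precedence function below is only an illustration of the idea, not the draft's normative algorithm.

```python
def cache_directives_for(headers: dict[str, str], is_cdn_cache: bool) -> str:
    """Pick which caching policy applies to a given cache.

    Sketch: a CDN cache honors CDN-Cache-Control when present, falling
    back to Cache-Control otherwise; any other shared cache only ever
    looks at Cache-Control. Illustrative, hypothetical helper.
    """
    if is_cdn_cache and "CDN-Cache-Control" in headers:
        return headers["CDN-Cache-Control"]
    return headers.get("Cache-Control", "")


# An origin could thus cache for ten minutes at its CDN, but only
# sixty seconds in caches it has no relationship with:
response = {
    "Cache-Control": "max-age=60",
    "CDN-Cache-Control": "max-age=600",
}
```

With this split, changing CDN providers does not require rewriting the origin's caching policy, which is the interoperability driver described next.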
A
But it's problematic to have all these different CDN-specific ones, because if you change CDNs you have to change the headers you send: not only the name of the header, but you also have to figure out what the right semantics are for that particular CDN, because there are subtle differences. And so, for me, the main driver for interop here is this.
A
I want third-party frameworks, like WordPress or Symfony or whoever else, to be able to emit a cache control header for CDNs, knowing that it will be honored properly by a CDN that implements this, without having to have different plugins for different CDNs or whatever else they want to do. It's a fairly straightforward interoperability ask, and the discussion around this has so far mostly been around: well, what about other non-CDN caches that might want their own special cache control header? And indeed, what about, what
A
if you have cache control that you want a specific CDN to pay attention to, over the general CDN advice, because you're doing multi-CDN, which is actually quite common now? And there's an appendix; I think we've made an appendix.
A
down here, where you can create your own one: you can create another version of CDN-Cache-Control, for example, with the same syntax and semantics. And I think, really, in the other use cases (what if I build my own CDN, or what if there's another kind of reverse proxy, like a localized reverse proxy, that's a different layer of caching that I want to target?), what we really need here is a generic targeted cache control mechanism that allows you to target the different layers, and one of those layers might be generic CDNs. And I believe strongly that that's a target that needs to be distinct, for that interoperability purpose that I explained. And what I would propose is that, if the working group adopts this specification, we turn it into that kind of framework, where it allows you to create different forms of cache control.
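The generic "targeted cache control" layering described here might look something like the sketch below. The target name ExampleCDN and the lookup function are hypothetical, invented only to show the idea of a cache checking its most specific target first and then falling back.

```python
def pick_targeted_policy(headers: dict[str, str], targets: list[str]) -> str:
    """Return the cache policy for a cache identifying with the given
    target names, most specific first.

    Each target maps to a '<Target>-Cache-Control' field following the
    convention discussed; if none match, fall back to the generic
    Cache-Control header. A sketch, not the draft's algorithm.
    """
    for target in targets:
        field = f"{target}-Cache-Control"
        if field in headers:
            return headers[field]
    return headers.get("Cache-Control", "")


# A multi-CDN origin can address one CDN specifically, all CDNs
# generically, and everyone else with plain Cache-Control:
headers = {
    "ExampleCDN-Cache-Control": "max-age=3600",  # hypothetical single CDN
    "CDN-Cache-Control": "max-age=600",          # generic CDN layer
    "Cache-Control": "max-age=60",               # all other caches
}
```

The point of keeping the multiplexing at the header-name level, as argued below, is that each field keeps plain Cache-Control syntax, so nothing here has to parse a combined mega-header.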
A
No, I'm not defining a header field name suffix-defining convention; Martin, you know how I hate prefixes and suffixes in registries. Nice try. In other words, code wouldn't key off of the Cache-Control suffix; it's just a convention.
A
Much like "Content-". I think the discussion that needs to be had here is: in back channel with folks, one, maybe two, people have suggested, well, why don't we just have one mega cache control header to do this, rather than having multiple headers? And even though we have Structured Headers now, I think that is much more complex. It's much less human-friendly, and it's more liable to implementation errors. And I say that because that's kind of how Surrogate-Control went: I was the person who specified Surrogate-Control, way back in, I guess it was 2000 or 2001, and it really didn't work out well. It was too complex, it was overspecified, and I really don't want to walk down that road.
A
Again, it's much simpler just to do the multiplexing between the different layers of caching at the header level, and say: here's the generic Cache-Control header; here's the CDN-Cache-Control header; and here's the cache control header for that guy over there. That, to me, is much simpler and more straightforward. It means we can reuse people's understanding of Cache-Control as it works today, rather than inventing something that's completely new that people have to understand.
A
B
P
not supported in. So, this multi-CDN thing: would it not be an idea to add that as an additional (I'm going to mess up my terminology) key, or something, as part of the list, as part of the value of the header? I doubt that people would be using more than two or three CDNs in this, right? I...
B
B
Of course, sure, why not. Have other CDNs, other than the three, indicated interest?
A
I've heard, on the list and in back channel, interest from people who implement software that can be used to create a CDN. Frankly, we don't have great communication with other CDNs yet.
B
B
You know, some interesting engagement on it, certainly; and it's good to see that there's a variety of companies involved in the authorship of this one. To start, three is good.
B
A
F
I
M
A
Sure; no, I meant just, it's 9:59 here, so, oh yes, we're at the end of the day. So I think that's all we have, isn't it? So, yeah, actually, on that point: folks, we're done with the working group last call, but if you see any more issues that you think need to be resolved, please do bring them up, and we'll try and get some drafts out. I think, Tommy...