From YouTube: IETF101-HTTPBIS-20180320-1550
Description: HTTPBIS meeting session at IETF 101, 2018/03/20 15:50
https://datatracker.ietf.org/meeting/101/proceedings/
A: This is the Note Well — hopefully you're familiar with it by now, at this point in the week. This is a new one: it covers not only the intellectual property terms of participating here, but also things like working group procedures, the anti-harassment code of conduct, copyright (which is IPR as well), and the privacy policy. So if you're not familiar with it, the best thing to do is to go and look up "IETF Note Well" on your favorite internet search engine.
A: Our agenda for today: we have one session this week. We've done the blue sheets — correct, they're flying around, yep. We have scribes, thank you so much. Right now we're doing the agenda bash, and then we're going to go over our active drafts as quickly as we can. Then we're going to talk about a related piece of work, signed HTTP exchanges, and then some work that's been proposed — bootstrapping WebSockets over HTTP/2, which we've talked about a few times, the Alt-Svc parameter, Alt-Svc via DNS, and related work.
A: The draft that Mark's the author of is first, so I don't have any slides for this one. BCP56bis, for those who don't remember, is the revision of the "use of HTTP as a substrate" document, which was put together in the early part of this century based on then-current practice. Current practice has moved on quite a bit since, so we're trying to write it down again. I don't have much to say about this, except that we seem to be making pretty good progress. I think we're not quite done yet — more examples, more details, and certainly more review are required — but there aren't too many open issues on it right now. So if you haven't read it, or haven't for a while, please have a look at the document, make sure it makes sense to you and that you're comfortable with it, and raise any issues or have discussions as necessary.
B: I will make the comment that it has been actively used in a couple of other scenarios outside of this working group, which is actually its designated audience. So I think, even though it's still at the Internet-Draft stage, it's been a useful north star in the work on DoH, and I was able to reference it in TLS-related work as well. It's been a very useful thing to point to, to say: how does your design flow against this? So hopefully we can expect some comments from those constituencies on its usefulness, as well as from this room on its correctness. And, as with anything we want to publish, we're interested in hearing from a diversity of voices who say they've read it.

A: And we're trying to walk a line with this document where it is not draconian — it's not saying "thou shalt not" or "thou must do this" — but more, as its intended status of best practice suggests: this is why you should probably think about doing it this way, and so on. So if you see a place where you think it is still too draconian, or it's not on the mark, bring it up and we'll talk about it.
B: Is this the right slide? There we go — spring in London. Next one. All right, here's the recap of where we are. Last time we met, we discussed an individual draft of mine, and we essentially reached consensus — shortly thereafter on the mailing list, and in the meeting — on refactoring that draft to include a similar level of header information in the HTTP request that bootstraps WebSockets as the old style did. The only changes since then have been editorial in nature (thank you, Julian, for making it better), and we've had a successful browser implementation — I'm hoping to get Bence to come up and say a couple of words about that in Chrome when we get to it — and several independent server implementations that it has worked against. So that's the state of things. I essentially have two issues I'm aware of that I'm happy to talk about in my ten minutes today.
B: The next point, on the left, uses the CONNECT method to build a tunnel, very similar to the way a bi-directional byte stream has traditionally worked, but with a few slight differences. The primary one is that there's a :protocol pseudo-header here that defines how you route and terminate that tunnel, instead of routing to a new IP address as CONNECT might normally do through a proxy. The other request headers are the typical RFC 6455 WebSocket stuff — the same stuff that was on the GET in HTTP/1.
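The extended-CONNECT request described above can be sketched as a header block. This is an illustrative reconstruction based on the draft under discussion (which later became RFC 8441); the authority, path, and subprotocol values are hypothetical examples, not taken from the meeting.

```python
# Sketch of the HTTP/2 header block for bootstrapping WebSockets via
# extended CONNECT. The ":protocol" pseudo-header (an Upgrade-registry
# token) says how to terminate the tunnel, instead of CONNECT's usual
# host:port authority-form target.

def websocket_connect_headers(authority, path, origin, subprotocols):
    """Build the header list for a WebSocket-over-h2 bootstrap request."""
    return [
        (":method", "CONNECT"),
        (":protocol", "websocket"),
        (":scheme", "https"),
        (":authority", authority),
        (":path", path),
        # The usual RFC 6455 request headers ride along unchanged:
        ("origin", origin),
        ("sec-websocket-version", "13"),
        ("sec-websocket-protocol", ", ".join(subprotocols)),
    ]

headers = websocket_connect_headers(
    "example.com", "/chat", "https://example.com", ["chat"])
```

Note that the HTTP/1.1 `Sec-WebSocket-Key`/`Sec-WebSocket-Accept` exchange is not needed here, since the h2 stream itself provides the framing.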
B: The next slide says: okay, that's great. It includes its own set of WebSocket headers doing subprotocol negotiation, and then you just use the h2 stream back and forth as if it were a TCP session that had been upgraded. Next. Okay, so we have one GitHub issue open here, which says: consider using ALPN registry values for the :protocol pseudo-header. The draft currently uses the Upgrade registry, which corresponds to the same tokens HTTP/1 uses. Fundamentally, this means that you send a :protocol pseudo-header with a value of "websocket", which obviously does not exist in the ALPN registry — that registry tends to have things like h1 and h2: whole-connection things. My rationale is that we don't want a whole-connection-based protocol value; what we're looking for is really the semantics we had with the Upgrade registry. So my recommendation is to close this without action, but there was no further discussion, so I wanted to raise it. Oh yes — I am pausing for further discussion.
C: So you get given a wss:// URI and you faithfully go off and connect to that thing over TCP — which ALPN tokens do you put in there to have the greatest chance of success? Now, obviously this is the best way to proceed, because then you get to use the connection for other things as well, but...

B: Well, I'm not sure we need to prescribe that. It could be an implementation choice how you want to do that — you could happy-eyeballs it, you could do a few different things. My approach would probably start by thinking about whether I wanted to use the connection for different things.

C: Loosely, in the sense that if the answer to that question was ALPN — and I think your response, which I'm quite happy with, suggests that it isn't — then no, it doesn't bear on this. But there was potential here for this to say "well, I need an ALPN token", and then we're down the rabbit hole, creating other issues for ourselves. But if your response is no — if the answer is "work it out for yourselves, people" — and we just document that side of things, then I think it's probably okay. I don't know what other browser implementers of WebSockets — this is largely a browser problem — think about that.
D: Erik Nygren, Akamai. I think that's somewhat decoupled from this, because in some ways this issue is about the ALPN token for the protocol version. But I think there's a question of whether or not it's worth having an "h2 with connect protocol" ALPN token that you can put before http/1.1 on the list when you're making a WebSocket connection.
E: Bence Béky from Google. First, as Patrick asked me, to comment very briefly on the state of implementation: Chrome has an implementation; it's rough around the edges, but it seems to work. There are two server implementations I know of, nghttp2 and libwebsockets. The respective developers set up publicly available servers for testing and confirmed that they were able to make it work with Chrome — I have not tried myself, I'm sorry. There's a third implementer I'm corresponding with privately; it doesn't work with them yet.

Second, to contribute to the discussion that Patrick just prompted about this particular GitHub issue: I support Patrick's recommendation. I don't think the connection-level ALPN registry is right for :protocol values.

Third, to respond to Martin's concern: there is no way for the client to know in advance whether the server supports WebSockets over HTTP/2 or not, and this is a performance issue — because if I open up an h2 connection, even if I have previous knowledge of the server having supported this in the past, I need to wait a round trip to receive the SETTINGS from the server before I can send a request. So I'm inclined to just send the WebSocket request according to this specification to the server before getting the SETTINGS, because if the server doesn't support WebSockets over h2 it's going to fail anyway, and then I can retry on h1.
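The optimistic strategy described here can be sketched minimally as follows — try the extended CONNECT without waiting for SETTINGS, and fall back to the HTTP/1.1 Upgrade handshake if the server rejects it. The function names and the use of `ConnectionError` are illustrative, not from any real client:

```python
# Sketch of "send first, retry on h1": the client races its WebSocket
# request against the server's SETTINGS frame. If the server doesn't
# support extended CONNECT, the h2 attempt fails and we fall back to a
# classic RFC 6455 Upgrade over HTTP/1.1. Both callables are hypothetical
# stand-ins for real transport code.

def open_websocket(send_h2_extended_connect, send_h1_upgrade):
    try:
        # Optimistic: don't wait a round trip for the server's SETTINGS.
        return send_h2_extended_connect()
    except ConnectionError:
        # Server rejected :protocol (or doesn't support it): retry on h1.
        return send_h1_upgrade()
```

As Martin notes next, with TLS 1.3 the server's SETTINGS can arrive earlier, shrinking the window in which this race matters.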
C: To the performance point: I think that's an eminently reasonable thing to do. I will also point out that with TLS 1.3 the server can send its SETTINGS first, so you won't have to worry about that particularly much in the future. It's a temporary problem, and certainly not a particularly dire one. So, yeah.
B: All right, thank you. One more slide, one more issue. Okay, so the other piece of feedback I've gotten — it doesn't have a GitHub issue open that I can see; it's primarily from Mark, and I think we talked about it briefly in Singapore, as it was the same in my individual draft — is that this mechanism is built around the CONNECT method. The argument for using the CONNECT method is that CONNECT is a special snowflake — which is consistent with the background on the slide — so it must be the right choice, and everything surprising about CONNECT is exactly what we need in this case. So it satisfies the principle of least surprise, in that there's only a small change in behavior around CONNECT: it terminates to a protocol instead of terminating to another host, as it traditionally does. The sense is that CONNECT is a special snowflake in every code base — and there are quite a few that implement it. HTTP CONNECT is really handled in a different code path, because it needs to bi-directionally process the byte stream, which is considerably different from the somewhat store-and-forward-oriented message semantics that go on with other methods, including unknown ones.

I would summarize the argument against as: creating a link-level, or hop-by-hop-level, modification of a method is itself quite surprising, and you could use an alternative method in a totally new namespace — a "GET with upgrade" or something like that — and that could be a bit cleaner. I have concerns around that: if that method is referenced at all outside of these hop-by-hop semantics, its meaning is totally undefined and isn't rooted in any existing semantics. So — comments, I'm sure.
A: To that first point, actually — thank you, if that was the intent there. I would state it a little differently: I just want to make sure that we consciously do this, which is to have a protocol-version-specific extension — an HTTP/2 setting — override the semantics of a generic HTTP method. And I think your point about it being a special snowflake is very well taken; it's always been weird. And it could be that maybe the outcome is we decide...
C: Martin Thomson. I just put two and two together with Bence's comment about sending one of these requests to a server that may or may not support this. I just realized that if you're going to put this new pseudo-header in a request and send it to a server that you haven't received SETTINGS from yet, I predict — based on the general posture that we took in HTTP/2 regarding stuff we didn't allow — explosions. So can we avoid that? Well, we can, by waiting — because with TLS 1.3 the server sends SETTINGS first. And that's a problem, actually, not just with what's proposed here; it's true of any of the proposals that seem able to do this. We don't have the semantics we need for this, so we need an extension of some sort; a new method that has this behavior is also not really...

B: I mean, I think we did resolve this, so we can revisit the issue. The resolution was: this is negotiated, and negotiations take a round trip. And maybe that's settled, because it doesn't require bilateral negotiation — if the server sends first, it's half a round trip, right? And that's within the scope of the use case. We've had this discussion; we can revisit it, but I don't think it really changes the solution space, does it? Okay, I see.
F: Well, to be fair to Darshak, I only got it published last night, so yeah — I guess I don't really have any slides today. I just wanted to draw attention to the new draft, which has just editorial changes and one clarification on the shift-buffer representations. And since it just went out last night, I can't really expect people to have reviewed it. Oh — hi, everybody; I apologize for not being there again. Hey, someday maybe I'll make it. So, mostly just trying to draw attention to the new draft.

F: It's not significantly different from -02 — mostly editorial, thanks to Julian and Martin, and also thanks for the feedback on shift buffers. But I'm willing to entertain any questions right now that anyone has. I'm sorry if my face is like ten feet tall, because I don't have any slides up — I didn't think about that. Let's see... looking good.
B: So when Mark and I were discussing this in our regular meeting, what we were looking forward to was getting a sense from the working group of whether they thought this effort was substantive enough — and applicable and opportune to enough different people — that it was worth going through the publication process. And since then — feel free to weigh in — there have been a number of comments.
F: Yeah, and thanks for rattling the cages and getting everyone's attention; I think that's what helped us get some feedback. Nothing, really — I mean, the one area I kind of expected there to be a few clarifications on is exactly where we've had them, which is that last section of the draft regarding the shift buffers. There's one question out there that I'll answer on the mailing list; I think it has a pretty clear answer. And so, no, there's nothing I can think of. I'd like to get this draft to bed; I don't want to add anything to it. I mean, if somebody found a fatal flaw in the shift buffers — that section is autonomous, really, from the rest of it; I don't know if that's the right word — it's discrete. So we could, you know, not even discuss it. The core...
C: Martin. I have a suggestion: I think we should working-group-last-call this document. We've had a lot of people look at it, and those people are now largely happy, and people can pay extra attention to the shift-buffers section in that working group last call — that's not unusual for us to do. Otherwise I don't think we need to hold on to this document any longer.
A: Very good. And, if you'll recall, we already had one working group last call — it was a fairly long one. I think the next one can be relatively short, say two weeks. So, Craig, unless you have any concerns, I think we'll probably start that after the meeting.

F: It should always move forward, and if it's ever used for any other content, it should be a different representation with a different URI — that would be my answer to that. So you can't eternally have this model of a circular buffer, in my mind, using HTTP; if you want to do it properly, it always needs to move forward. That's my answer, I think, to the question.
H: Well, I wasn't sure I needed to walk all this way to make a short statement, but sure. So we adopted it last time around, and we published it. We have merged the integration with exported authenticators, which is making progress in the TLS group. We still have one real open issue, which is handling cases where the client wants to offer a cert because they expect the server to want it, and negotiating things around that. The bigger meta-issue it crops up in is how we're using frames on streams with requests that may have already ended. In HTTP/2 it's all one TCP connection — you can send a frame that references a closed stream and nobody cares — but over QUIC that doesn't work so well, because the stream is actually gone. So we need to find a different way to deal with that, which is probably sending on the control stream; I just haven't done that yet.
C: Martin Thomson. The other thing to point out is that there's a lot of activity going on around this work in TLS, and until that resolves, a bunch of this stuff won't really be able to be buttoned down anyway. So we're working in parallel, but we're primarily focusing on the work in TLS — and that was understood when we took it on, so we're happy to let it bubble along for a while.
B: It's good that that relationship is going well — you put your peanut butter in my certificate, and vice versa. I actually think this is going to be a pretty important draft for future uses of HTTP, and so I wanted to draw people's attention to it, to make sure it gets fairly wide review.
J: Ryan Sleevi, Google. So I've looked through the draft, and in particular — I think, as you framed it, it's a rethinking of how we think about the TLS crypto, quite simply. From talking to various server operators, it seems like one of the things that is not yet addressed in the draft — and I'd be curious to hear how the authors plan to address it — is client risk: the client may want to look at DNS in addition to the certificate frame. That is: you obtain a certificate for — I'll use google.com as an example — and you have a connection, and you can advertise "I am authoritative for google.com". In today's model you have things like BGP monitoring that you can use to detect this, but that would no longer apply. So I'm trying to understand — and I did not see discussion on-list of these security concerns yet — I'm just curious if there's a way forward with that.
L: So I'm not one of the authors of this, but I've thought a lot about it, and I'd just like to make sure we've got our terms straight. So in the existing design: if I'm able to compromise the credential for google.com, and I'm able to interfere with DNS, then I can mount serious attacks — we agree about that, right? Right. Or BGP. Yes.

L: So I think you obviously accurately summed up the situation. There was a bunch of discussion about this, and I think the working group kind of felt okay with that — but that could be wrong. I think that's certainly something people are aware of, but it may have been the wrong decision. And so if you want to relitigate that, I'm not going to...
J: ...the editor queue, right — the relaxation of DNS was done in ORIGIN. My highlight here, though, is that the security considerations list things the client may be concerned with, and steps the client may want to take to provide a sufficient level of assurance, such as optionally checking DNS, as the draft mentions. What I'm speaking to, though, is the server's ability to detect or mitigate compromise of its own services. So in the case of, say, google.com, you could look at BGP routing table advertisements and see: is someone — say, Pakistan — claiming a YouTube IP? In the case of a certificate frame, you no longer have these mitigable or detectable things; even if it's only after-the-fact detection, the mechanism disappears. So the security considerations highlight steps the client may want to take, but not steps for a server operator concerned about compromise.
L: I'm just tired today, so bear with me. How is this different from the ORIGIN frame? In the sense that, if I can acquire a certificate that is for Google and myself, it seems to me that the ORIGIN frame has the same security properties that this does. What am I missing?

J: The difference is that you at least have some extent of detectability of the compromise in that certificate — and this depends: prior to TLS 1.3 the certificate was unencrypted, and things like that. But the whole threat model itself doesn't seem to have been explored as to the different levels of mitigation.
L: Certificate mis-issuance, that's right — I mean, the goal of the Certificate Transparency work is to detect certificate mis-issuance. So to say that this removes the ability to detect that there's a problem — we're not giving up on CT, right? And I believe CT is one of the recommended things for clients to do.

H: I think, really, the difference is that with ORIGIN you have to get a mis-issued cert for your own domain with an extra SAN, and this would go a step further and let you expand that attack: if I got the server's private key, your protection is revoking the key when you discover the compromise — but you have to find the compromise first. I think that's independent of this draft, but you're right: it is a real operational security concern.
D: Yeah — Erik Nygren. Actually, I think when we were thinking about this for ORIGIN, the way we looked at it was: you have CT for detecting mis-issuance, combined with OCSP stapling as the way to respond. I think the corner case of key compromise is one that's not really well covered there. Another related case that this does make worse is where you have a side-loaded certificate from an enterprise in the CA list. This makes that worse too, because it means that an enterprise, for example, could capture traffic much more easily without being on-path — which is not exactly compromise, and I know that once you have that CA installed, all bets are off — but each of these things, combined together, makes things a little bit worse.
P: Kyle. I'm also very concerned with this issue, both in how it makes it a lot easier to exploit a compromised certificate en masse, and in how it makes it a lot easier to target exploitation. But I'm wondering: if we were to expand the security considerations to address it, what options do we even have?

J: It's actually key compromise that matters, in terms of thinking about what our threat model is for the future of the PKI ecosystem. Because of CT, and because you have these certificates online on active server systems, it seems much more likely that the set of attacks we're going to see over the next five years are going to relate to key compromise. We already see this.
L: Right — I think that's only true... I suppose. So, of the whole set of things we could imagine doing: one thing I would say is that sub-certs do help this a little bit, in the sense that you can have the subscriber cert with a long lifetime and then rotate the working keys — which of course creates other problems.

Another thing we could do — which I have just reinvented now, so it probably doesn't work — is have some sort of weak-sauce pinning mechanism: basically, put something in the certificate that says not "pin me", but "do not ever let me be applied via this mechanism". And if you remember that, like HSTS, then it would be a pretty safe thing to emit, like HSTS, but it would also give you a fair amount of protection against this kind of attack. Now, of course, we'd need to standardize that, but...

It's obviously unfortunate that we've created a situation where there's a new risk for everybody that they can't really mitigate without taking active steps. But that's one possibility; there might be some way to have it be opt-in, but I haven't thought that through.
J: Yeah — and what you're saying is very similar to how we're thinking about this in the context of web packaging, and that's where the similarities between web packaging and the certificate frame are very strong: the mitigation of opt-in versus opt-out, etc. And, at the risk of pivoting to a second conversation point on the draft: the other piece that I would suggest for consideration is figuring out what the de-confliction scenario looks like when you have multiple connections. It's mentioned in the security considerations as a possible confusion, but there's no guidance for implementers as to how to prioritize and select among the available connections when you do have several.
H: I would welcome some text on that, but my initial inclination is: right now, HTTP/2 says you may use a connection for certain domains, and if you wind up with multiple connections that are acceptable for a domain, it already gives no guidance on what to do — and browsers already deal with that. A common case: you don't know whether a server supports HTTP/1 or 2, so you open two connections in parallel, because that's what you would do in HTTP/1.1; ALPN kicks in, they're both HTTP/2, and in practice what winds up happening is...
Q: I was also going to second the idea that — for all the reasons just mentioned — opt-in would make me feel more warm and fuzzy. I certainly know of certs at Google where it wouldn't actually be the end of the world if they got compromised, and then there are others where it would be — so, you know, we do have both.

Q: So — I haven't thought this through enough to have a mechanism. Previously, when we talked about this, we basically decided "this seems scary, but we don't know what to do about it"; that's basically where we ended up when we talked about this last week. All right, I literally have no idea what to do about it. Okay — I'm sorry for that, I apologize. If I had a solution, I would suggest it. Okay.
A: Okay, the queue is closed. Thank you — it sounds like we still have more to talk about, and this isn't going to working group last call anytime soon. So thanks, and thank you for the substantive discussion, indeed.

Next up is Expect-CT. Emily can't be with us, but she sent a summary of where she's at, which I've put on the screen. There haven't been any major changes to the draft in a while, only some minor tweaks to reporting, which I think mirrors some of the other stuff happening with reporting in other browser HTTP extensions. The open design issue is whether the header should include subdomain support, and she's not inclined to add it at the moment unless another implementer comes along and wants it: "we're unlikely to implement it in Chrome, because it's a moderately complex implementation and the marginal value is somewhat small."
A: Next slide. So this is Structured Headers. As a reminder to folks, the goals of this effort are to make it easier and more reliable for people to specify, and to parse, HTTP headers. There's a long history of people specifying HTTP headers and messing it up, sometimes in major ways — for example, cookies — and we have recently made a somewhat bad habit of putting a tremendous workload on Julian, verifying the different ABNF of the people trying to put headers in their specs. So that's not fair to Julian.

After some discussion in Singapore, we agreed to rebase the draft on one that I put together and talked to Poul-Henning about — that's -02. Then, in draft -03, we did a lot of refining of the algorithms in the spec, which addressed various issues. We split numbers into integers and floats — I really hope we don't get into bikeshedding on numbers, but we'll see — and lots of other stuff, like throwing an error on trailing garbage, and so on.
A: And then, more recently, lots more editorial work: we changed "labels" to "identifiers" and made some other adjustments. Next slide. One thing I wanted to highlight for folks is that, in the current design we've settled on, you have a number of possible top-level types for use when you define a structured header. So you say: I'm going to define the Foo header; the Foo header is a structured header, and its top-level type is one of these types. You have to specify that, and, more importantly, when someone goes to parse it, they have to tell the parser in some fashion which type it is at the top level — that might be via the method name, or any number of other programming-language-specific constructs, but you have to say "I want to parse this as, say, a dictionary". The bits below that are hinted on the wire, and the parser will handle those automatically; you then have to validate that the structure you get back is the data structure you expect.
A: Among the top-level types we have an item — a simple item, so an identifier, a float, an integer, a string, or binary content — and that's all that's available. In discussions with bunches of folks about new headers, we think this has enough descriptive power for most use cases. It may not cover a hundred percent of new HTTP headers on the planet, but it's enough to add some value, and that's what we're shooting for, we think.
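The "caller declares the top-level type" model described above can be illustrated with a toy parser — the header's definition, not the wire bytes, picks which entry point to call. This is a deliberately simplified sketch, not the draft's actual parsing algorithms, and the item types shown are a subset:

```python
# Toy structured-header parser sketch: one entry point per top-level
# type. A spec defining "Foo" as a list would call parse_list and then
# validate the resulting structure itself.

def parse_item(text):
    """Parse a single item: string, integer, float, or bare token."""
    text = text.strip()
    if text.startswith('"') and text.endswith('"'):
        return text[1:-1]            # string
    try:
        return int(text)             # integer
    except ValueError:
        try:
            return float(text)       # float
        except ValueError:
            return text              # identifier / token

def parse_list(text):
    """Parse a comma-separated list of items."""
    return [parse_item(part) for part in text.split(",")]
```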
A: Next slide. A couple of open issues it would be good to get input on. Number 433: length limits. Right now the spec specifies length limits on all of the different structures. It says a string must be no more than — I forget, I think it's 1024 — characters; you must have no more than so many items in a list; and so forth. That's to make sure there are sane limits, so that parsers don't overflow or whatever — integers are 64-bit signed, for example. This helps assure interop and assists optimizations down the road, and it also means that specifications don't have to specify limits: they can rely on the built-in ones, even though they might be very large. So the question is: is that a good approach? In previous discussion, some people said no — these limits don't make sense; everyone who specifies a structured header should specify their own limits if they want them, and we don't want to prematurely constrain how many things people use — for example, items in a list.
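The built-in-limits behavior under discussion can be sketched like this. The exact cap values are the open question, so the numbers below are placeholders, not the draft's figures:

```python
# Sketch of hard, built-in parser limits: exceeding a cap is a parse
# error for the whole header, so individual header specs don't each
# need to define their own limits. Cap values are placeholders.

MAX_LIST_ITEMS = 1024
MAX_INTEGER = 2**63 - 1   # 64-bit signed range, per the discussion above
MIN_INTEGER = -2**63

def check_list(items):
    if len(items) > MAX_LIST_ITEMS:
        raise ValueError("list exceeds built-in limit")  # hard error
    return items

def check_integer(value):
    if not MIN_INTEGER <= value <= MAX_INTEGER:
        raise ValueError("integer exceeds 64-bit signed range")
    return value
```

The design point Mark raises later follows from this: because the limit is baked into the generic parser, a header spec that wanted a different cap would force consumers to modify their generic implementations.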
C: That doesn't mean that someone can't go off the path and do their own thing — and that's why I think it doesn't actually matter too much what limits you choose here, and they seem sensible. Did you have a limit on the number of items in a list, or the number of... yes?

C: I'm comfortable. I think you had 256 parameters, or something crazy, and 512 — those are big numbers. If anyone ever decides to use all the space available to them, things will explode in other ways, so I'm not concerned about the numbers you have, and I'm also perfectly content with letting people just go ahead and do that. So I think this is about right.
G
I think it's a bit confusing to put a limit on the number size on the same page as limits on the number of identifiers, because they are very different things in an implementation. So I think I can be convinced that having limits on the integer sizes is good, but I'm very skeptical about saying 256 identifiers are okay but 257 are not okay. So maybe we should discuss those as two different kinds of limitations.
A
M
That's all good. I consider this from an extension point of view, and for numbers and strings you always have the option to extend the spec if you want to include a larger number, so I'm pretty fine with that. So I think that — noting Julian's comment — if we could provide a way to maybe add more than 256.
A
I want to push back on that a little bit, because the way that it's currently specified, you literally go through the algorithm, and if you have — let's say the limit, for the purpose of argument, is 256 — if you get more than 256, you throw a hard error. And so then, if a header specifies "no, don't pay attention to that," that means that you have to modify — you know, the whole point here is to have generic implementations of structured headers.
A
Now you have to modify generic implementations, and if someone else is parsing the header and they're using somebody else's implementation, they have to have the ability to modify that. So it seems like it's making things worse, not better. Like I said, I think it would be great, if people are uncomfortable with the size of the limits, to make them bigger.
A
If that's the issue that people are having. But if we want to have no limits whatsoever, then I get into this — you know, the other concern that comes up is that implementation A decided that 256 were too many and implementation B decided that 512 were too many, and then you get interoperability problems, in that sense, just like we do for integers. So having some sort of sanity line is, I think, a good thing. Yeah.
R
But
aside
from
that,
if
we
do
this
and
it's
in
10
or
extension
header
fields
in
the
in
the
future,
I
think
it's
really
important
that
it
be
self
descriptive
in
the
sense
that
if
you
have
a
specific
type,
you
if
partial
needs
to
know
what
the
type
is
before
it
begins.
Parsing,
then
that
should
be
visible
in
the
text
of
that.
Are
you
talking
about
the
gravy's
issue
now
talk
about
well,
yeah,
I,
love
that
the
whole
thing
so
I
mean
I
for
this
particular
issue.
A
Now it's enforced by the parser, and the idea is that if I'm specifying the Foo header, I can say it's a list of no more than five items, and then, you know, a value comes off the wire and I give it to my generic parser. If the value length is over what the spec's limits are — so 1024, for example — then it will raise an error and I don't ever see it.
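The behaviour just described — a generic parser enforcing built-in limits so that oversized values never reach the caller — can be sketched like this. The specific limits here (1024-character strings, five items per list) are only illustrative, not the draft's normative numbers.

```python
# Sketch of a generic parser that enforces built-in limits and raises a
# hard error before the caller ever sees the oversized value. Limits
# are illustrative, not the draft's.
MAX_STRING = 1024
MAX_ITEMS = 5

class StructuredHeaderError(ValueError):
    pass

def parse_list(raw):
    items = [item.strip() for item in raw.split(",")]
    if len(items) > MAX_ITEMS:
        raise StructuredHeaderError("too many items in list")
    for item in items:
        if len(item) > MAX_STRING:
            raise StructuredHeaderError("string too long")
    return items

print(parse_list("a, b, c"))            # ['a', 'b', 'c']
try:
    parse_list("a, b, c, d, e, f")      # six items -> hard error
except StructuredHeaderError as exc:
    print("rejected:", exc)
```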
R
I see — so that clarifies what it is you would expect from the implementation. What I don't understand is why you would ever want the implementation to end at that point and say, "look, the parser made a decision for me." And what is the correct response after that? Do you end the connection, terminate the stream? No —
A
So when I define — again, if I'm defining the Foo header — then I have to specify how to handle the errors in parsing the header. If it's a header that affects the entire stream, maybe I do terminate the stream; but it's probably "well, ignore the header" or take some sort of default action, with respect to the semantics of that header. Jonathan Lennox —
Q
Some of this may have been answered in discussions I've missed, but — I mean, I think that clearly specifying the defaults, you know, so that somebody who's not thinking very hard just uses the algorithm — that's great. I mean, whether the ability for a specific header to say, "oh hey, you know, we're looking at this, this is something" — or if this is routinely going to be, you know, a 4K string, because for whatever reason you're gonna put a 4K string in a header, I mean —
Q
So what if we need to override just this one parameter? I mean, as I see it from an implementation point of view: just as the implementation needs to know the top-level type for any given header type, you could also identify parameter overrides. So basically the limits would take default values, but you can override any of the default values when you call the function, if you need to — and, you know, I would assume that most extensions would not need to.
A
I see what you did there. So I'd be concerned, yeah, about the complexity of that — and remember, the whole goal here is not to cover every possible future HTTP header. If, you know, we decide 1K is the limit, for example, and a header really wants to do 4K, we'll go and define a normal HTTP header — just don't use this framework — or define an extension of this framework as a new top-level type.
A
Certainly I suspect they could do that. What I imagine would happen would be that, at least while they were beginning to deploy it, none of the existing implementations would support it; they'd have interop problems; people would have to write header parsers from scratch. If it was a popular use case, then of course, you know, support would evolve. Maybe that's okay.
G
Sorry
Julian
bershka,
maybe
it
would
make
sense
to
think
about
length
limits
in
a
different
way
than
currently
I
mean.
What
we
are
really
interested
in
is
the
complete
size
of
the
header
of
you
tried
lots
how
long
an
individual
part
of
it
is
I
mean
if
I
want
to
send
a
string
that
is
longer
than
1024
characters.
I
might
be
tempted
to
define
a
list
of
strings
just
to
get
it
into
the
head
of
you
and
that's
not
that's
not
the
intent
of
that
right.
A
So that's interesting, because, you know — let's say I've got a data structure and I hand it to my implementation to generate a header from that data structure. Right now there's ambiguity between identifiers and strings: if I just represent that in the data structure as a string, well, do I serialize that as an identifier or a string? For example, I could annotate it with metadata in the data structure — I could wrap it in an object or something, depending on the language I'm using.
A
But
you
know
the
question
is:
is:
is
this
something
we
want
to
try
and
and
an
address
we
could
say
the
current
design
is
okay.
You
just
have
to
make
sure
that
when
you're
jittering
strange
you're,
not
stringing,
sorry
you're,
generating
identifiers
and
strings
you're
not
ambiguous,
or
we
could
do
something
like
remove
identifiers
from
the
item
so
that
that's
not
ambiguous
anymore.
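The ambiguity just described can be illustrated with a toy serializer: the same Python string could come out as a bare identifier or a quoted string, and wrapping the value in a small marker class is one way to disambiguate. This is only a sketch of the design question, not the draft's solution.

```python
# Toy illustration of the identifier-vs-string ambiguity: the Python
# value "gzip" could serialize either way, so a generic serializer has
# to pick. A tiny marker class disambiguates on the producer side.
class Token:
    """Marker: serialize this string as a bare identifier, not quoted."""
    def __init__(self, value):
        self.value = value

def serialize(value):
    if isinstance(value, Token):
        return value.value              # identifier: no quotes
    if isinstance(value, str):
        return '"%s"' % value           # plain str: always quoted
    raise TypeError("unsupported")

print(serialize("gzip"))         # "gzip"  (quoted string)
print(serialize(Token("gzip")))  # gzip    (identifier)
```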
A
So that's it, except — we've had some pretty good participation on GitHub with this; more eyeballs are always appreciated. I think we're about ready for prototype implementations to really prove this out. We'd love to get some more experience with it, and after that we're probably gonna be ready for working group last call. We want to get this one out fairly quickly, so that folks can start taking advantage of it.
G
You know, in a way that's not really compatible with the base spec, because the base spec tells you, for instance, that if you have multiple header field instances and you have an empty header field value, it doesn't count in the list; and it also talks about when you are allowed to recombine header fields, and as you are not using the list syntax here, it doesn't really apply. So I think that can be addressed with prose in the current document, and —
U
In which case I would just suggest that the maximum length of a string take that into account. — Yes, okay, fair enough, thanks.
B
Kazuho is primary author, and, because of the way we have changed the digest format, we're going to remove Mark from the list and we're going to add Yoav, which reflects the types of filters that are used and their primary contributions in that space. So thank you to Mark for your efforts, and thank you, Yoav, for, you know, joining this crazy ship we've got.
M
So I'd like to talk about the two open issues in cache digest — and there's actually one more, editorial, pointed out by Julian, thank you for that, but it's not covered in this presentation. So we now have draft 03, and it has the switch to use Cuckoo hash, and the two open issues — these two. So, please — yes, the first one is about how we should negotiate the use of cache digests. Next, please.
M
So the current approach is to let the client send an indication, and then the server, if it sees an indication, waits for the client's cache digest before it decides what to push. And, on the other hand, the client waits for a server signal which indicates the server's willingness to accept the cache digest. In TLS 1.3 the signal can be sent in the 0.5-RTT flight, so it doesn't cause any delays, and for resuming sessions it could be even easier:
M
a client can remember the signal along with the NewSessionTicket message and use that as the signal. So this is the current model. Next, please. So the issues are — we have basically two issues. First, the server's indication is per connection, whereas in the case of a server supporting multiple origins we might want to make that decision per origin. So is that a good idea, or, in that case, should we use the ORIGIN frame in some form to negotiate that? And the next issue is —
K
Alessandro Ghedini, Cloudflare. Regarding the first question: I don't think it should be per origin. The indication basically indicates that the server supports cache digests, and that's a per-server thing, not a per-origin thing, I think. But —
H
Mike Bishop. But I will make the comment that the main case where you want to spell it out is when you have wildcard subdomains — because, if I remember correctly, ORIGIN never did pick up a wildcard, did it? — That's correct. — And so you're going to have to enumerate to the client all the domains on which you think it could possibly make a request and for which you'd like to see the digest, even if your cert is star-dot-whatever and you don't send an ORIGIN frame, so the cert stands.
A
Well,
if
you're,
not
caching,
it
between
connections,
it
seems
like
the
origin
frame
is
the
right
match.
But
if
you
are
caching
things
beyond
the
scope
of
a
connection,
then
it
doesn't
make
sense.
I.
Think
caching,
which
cashing
this
information,
okay,
yeah
I
mean
origin,
does
have
extensibility
I
did.
A
A
B
So
that's
about
your
question
of
is
this
per
origin,
or
is
this
scoped
to
the
connection
right?
Because
origin
frame
is
clearly
just
by
its
name
scoped
to
the
connection
right,
you
can
have
two
connections
to
the
same
server
with
different
sets
of
origin
frames
and
they're
very
good
uses
cases
for
doing
so.
Yes,.
T
From
gondwana
I
haven't
read
this
back
admin
so
I'm
just
listening
along
and
taking
notes,
but
I
guess.
The
question
here
is:
what
does
the
client
need
to
know
to
decide
whether
to
send
something
back
or
not?
If
the
service
supports
handling
a
request
that
says,
here's
the
digest
and
handles
it
correctly,
even
if
it
can't
use
it
for
a
particular
origin
that
its
handling
is
that
a
problem.
T
Where I'm getting to with this is: it's scoped to whether the server can handle it or not, rather than whether a particular resource can handle it or not — that's my impression — which would mean it makes sense per connection rather than per specific origin, because it's the server capability that you're asking about rather than the resource capability. It's —
N
To add to the complexity of this: once we have secondary certs, I can envision servers that are authoritative for multiple hosts and want to receive cache digests for multiple hosts, and they need to indicate which hosts they're interested in getting cache digests for — because the browser doesn't necessarily want to send the digest for everything that server is authoritative for. Yeah.
M
So, quickly: we have four types of digest, which is a combination of how we calculate the hash and whether it deals with freshly cached responses or stale responses. But, unfortunately, the only one which is known to work well is the first row, which uses the URL only.
M
Just to calculate the hash, as well as to show the list of freshly cached objects. All the other patterns have issues related to how HTTP/2 is specified, or how it is implemented, or how a server is implemented. Next, please. But the fortunate thing is that the first row — the first one — is known to work well, and actually Yoav has gathered some data from HTTP Archive, and it shows that most of the long-term cached resources are actually the most critical ones.
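The "first row" variant — a digest over hashes of the URLs of fresh cached responses — can be sketched in a simplified form. The real draft packs truncated hashes into a compact probabilistic structure; this sketch just truncates SHA-256 of each URL, and the bit width is illustrative.

```python
# Simplified sketch of the "URLs of fresh cached responses" digest:
# truncate SHA-256 of each cached URL to N bits so the server can probe
# for membership before deciding what to push. The draft uses a much
# more compact encoding; N_BITS here is illustrative.
import hashlib

N_BITS = 20

def url_hash(url, n_bits=N_BITS):
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    value = int.from_bytes(digest[:8], "big")
    return value >> (64 - n_bits)          # keep the top n_bits

def build_digest(cached_urls):
    return {url_hash(u) for u in cached_urls}

def server_should_push(digest, url):
    # Probabilistic: a hit may be a false positive; a miss is definite.
    return url_hash(url) not in digest

digest = build_digest(["https://example.com/app.js",
                       "https://example.com/style.css"])
print(server_should_push(digest, "https://example.com/app.js"))  # False
```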
K
Alessandro Ghedini, Cloudflare. I agree that removing it would be better. There's also a complication where, if you have an HTTP/2 stack separate from your cache and you don't really know the ETag, and the client actually uses the validator to say that it's stale — then you don't really know what to do with it; you're probably just going to ignore it. So I agree on removing. Thank you.
A
Moving on — next up is Client Hints, and again Ilya couldn't make it, so we have an update from him. Most of the recent activity has been around Accept-CH-Lifetime and the privacy implications of Client Hints in general. So he's saying we converged on: Client Hints delivery is restricted to secure contexts — which is a W3C term; it means basically HTTPS and similar things — the opt-in is origin-scoped, and by default it extends to first-party resources.
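The opt-in flow just summarized can be sketched as a pair of header exchanges. The `Accept-CH` and `Accept-CH-Lifetime` header names come from the draft under discussion; the values and the helper functions are illustrative assumptions.

```python
# Illustrative sketch of the Client Hints opt-in flow: the server opts
# in via Accept-CH (over a secure context), and only then does the
# browser attach hint headers like DPR on subsequent requests. Values
# and helper names are illustrative.
def response_headers_opting_in():
    return {
        "Accept-CH": "DPR, Width",       # server requests these hints
        "Accept-CH-Lifetime": "86400",   # remember the opt-in (seconds)
    }

def next_request_headers(opt_in, device_pixel_ratio):
    headers = {}
    hints = [h.strip() for h in opt_in.get("Accept-CH", "").split(",")]
    if "DPR" in hints:
        headers["DPR"] = str(device_pixel_ratio)
    return headers

opt_in = response_headers_opting_in()
print(next_request_headers(opt_in, 2.0))   # {'DPR': '2.0'}
```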
A
I don't think — I haven't looked at it in a little while — it used to describe how to use it with Key, or at least reference Key; I think that's been taken out. I think they're waiting for Variants to be adopted, or at least for its future to be clarified, before they take any further steps. I have talked to Ilya extensively about Client Hints and Variants to make sure it meets his use cases, though. Okay — back microphone.
S
Nick Doty, UC Berkeley. I agree with these changes; I think, as noted, there are potential increased benefits over the previous drafts for privacy and security — so, great work. I am still a little bit confused about just sort of the status and direction. Sometimes in the group it seems like this is just for DPR and client width, and then other times it's like, hey —
S
We
could
put
anything
into
a
client
in
and
we'll
start
sending
geolocation
lat/long
and
these
headers
and
just
there's
there's
a
big
difference
in
what
kind
of
review
we
need
to
do
or
like
how
we
can
consider
all
the
implementations
based
on
what
kind
of
data
is
gonna
go
through
there.
So.
A
So
the
current
draft
contains
all
the
hints
that
we're
talking
about
now.
People
do
raise
issues
in
the
repo
to
say:
Oh
we'd,
like
this
one,
we've
so
far,
shied
away
from
adding
more
ones
without
a
lot
of
process
around
that
and
I
think,
while
client
hints
is
of
considered
to
be
a
framework.
In
other
words,
it's
something
that
you
can
add
new
ones
to
later.
I
think
the
expectation
is,
is
that
you'd
have
similar
security
and
privacy
review
before
you
actually
adding
them.
Okay,.
S
Great
and
the
other
question
is
the
lifetime
idea-
is
a
little
bit
unusual
like
compared
to
like
in
some
ways
where
analogizing
to
JavaScript
web
browser
permission,
interaction
and,
and
in
that
case,
like
sites,
don't
indicate
how
long
they
want
permissions
for
yeah.
Maybe
they
should
or
like.
So
what
was
the
comment.
A
Maybe they should — yes, maybe they should. Okay, yeah.
C
An interesting point — Martin Thomson. That's a great point, I like that. I actually don't like the lifetime thing, but I consider myself to be in the minority, and so I'd be okay with removing it if that's where people want to go — but that's not what I got up to say. The changes that you see on the slide — the changes that made me slightly more happy with the privacy properties of the protocol — don't actually change things materially at all.
C
The
the
properties
that
made
me
more
happy
was
the
more
robust
text
around
the
the
sort
of
considerations
you
might
apply.
When
you
decide
to
start
sending
one
of
these
things
and
under
what
conditions
you
might
send
them,
and
so
there's
a
ton
of
that
in
the
draft
now
and
I
won't
say
that
I'm
100%
happy,
but
it's
much
much
better
than
it
used
to
be.
C
How
did
the
reason
that
text
was
added
was
in
response
to
comments
about
geolocation
I
would
be
firmly
opposed
to
anyone
putting
geolocation
in
this
one
with
no
matter
how
much
text
you
put
in
there,
because
we
simply
don't
understand
how
to
control
that
sort
of
information,
and
it's
very
different
to
the
sort
of
things
that
the
current
draft
concentrates
on.
So
just
setting
some
expectations
there.
C
If
someone
tries
to
put
something
in
here
and
then
really
what
we're
talking
about
is
defining
a
new
HTTP
header
field
that
looks
private
information
about
users
on
in
at
this
layer,
then
we
assess
that
on
its
own
merits,
and
this
just
gives
us
a
little
bit
more
text
that
allows
us
to
deal
with
with
those
sorts
of
things
and
maybe
explain
to
people
the
sort
of
things
that
you
need
to
consider
when
you
go
into
that.
It's
kind
of
odd
because
we
haven't
had
that
in
the
past.
L
Great, good. So it seems like this condition three cuts in two directions, right? On the one hand, you know, if you can store cookies and run JavaScript, then you can extract this information and just shove it in a cookie somewhere — and so it's not clear what this does for privacy. On the other hand, if you can shove this information in with cookies or in JavaScript, then why do we need this at all?
L
It
seems
to
be.
The
only
difference
is
for
the
first
is
for
the
first
hip
right
am
I
missing
something
important
I.
W
The preload scanner — you don't get a crack at it in JavaScript; the preload scanner executes fetches for content in the document well before JavaScript executes. So potentially the requests may be fetched in a way that the server will want to accommodate, without the document having had a chance to modify the outbound requests in the first place. — Okay.
A
OK — so I don't have an update from Mike on the cookie spec, but I did have another photo from London which I liked. So — we've been doing cookies for a while; I know that he's incorporated at least some of the drafts, so we're hoping to close this down relatively soon. But we understand that, you know, there are other things that are happening as well, so we don't really have an update per se, but hopefully we'll be able to wind that up in the near future. You know.
I
There's a collection of use cases. The one that's currently justifying a lot of investment from Google is something we're calling privacy-preserving prefetch, which — along with some other changes that other people are working on — lets Google Search treat AMP and non-AMP content the same, which should help with a lot of the hatred toward AMP.
I
Another
use
case
is
avoiding
the
Slashdot
effect.
So
if
you've
got
a
popular
website
that
links
to
a
small
websites
content,
you
might
be
able
to
use
an
Origin
signed
xchange2
to
avoid
knocking
over
that
website.
I
should
step
back.
An
Origin
signed
exchange
is
a
representation
of
an
HTTP
exchange,
so
a
request
response
pair
that
is
signed
as
an
Origin
and
then
a
browser
could
trust
that
it
actually
came
from
that
origin.
So
that
has
some
interesting
security
properties
that
that
we're
going
to
talk
about
later.
I
So we have to trade that off. There's a potential use case to push content from other CDNs — I don't know how to actually do this in a privacy-preserving way, but if people can figure it out, the CDNs are very excited; and you can do it if you don't care about caching. And then the original goal for all of this work — which doesn't work yet with just signed exchanges — is to be able to share whole websites peer-to-peer while you don't have any connection to the internet.
I
So
this
helps
with
developing
countries
or
people
with
very
expensive
data
plans.
Next
slide,
the
the
basic
structure
here
is:
you
have
an
HTTP
exchange.
We
define
a
new
signature
header
which
signs
a
collection
of
things
it
it
identifies
a
certificate
chain
and
gives
the
hash
of
the
certificate
you
expect
to
find
there
that's
a
URL
rather
than
just
embedded,
because
you
might
have
the
same
certificates
for
several
different
signed
exchanges.
I
It
gives
a
validity
range
which
is
currently
limited
to
seven
days
to
match
the
OCSP
validity
window.
It
allow
gives
you
a
URL
that
you
can
use
to
update
the
signature
so
that
you
can
get
a
new
signature
at
the
end
of
that
validity
range
without
redownload
in
the
entire
exchange.
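The fields just listed — certificate URL plus expected cert hash, a bounded validity window, and an update URL — can be modelled roughly as below. The field names and the exact validation are illustrative assumptions, not the draft's normative parameter names.

```python
# Toy model of the Signature-header contents just described: a cert
# chain URL plus expected cert hash, a bounded validity window, and an
# update (validity) URL. Field names are illustrative, not the draft's.
import hashlib

MAX_VALIDITY = 7 * 24 * 3600  # seven days, per the discussion

def make_signature_fields(cert_url, cert_bytes, date, expires, validity_url):
    if expires - date > MAX_VALIDITY:
        raise ValueError("validity window longer than seven days")
    return {
        "cert-url": cert_url,
        "cert-sha256": hashlib.sha256(cert_bytes).hexdigest(),
        "date": date,
        "expires": expires,
        "validity-url": validity_url,
    }

fields = make_signature_fields(
    "https://example.com/cert.cbor", b"fake-cert", 0, 3600,
    "https://example.com/resource.validity")
print(fields["expires"] - fields["date"])   # 3600
```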
It identifies a header that guards the payload's integrity; for the initial implementation that's Martin Thomson's MI header — the Merkle integrity encoding — but it could be other ones in the future. And then you actually sign, using the certificate's public key, a concatenation of a subset of those things that we think are important to make sure are authentic. We define three ways to transfer these signatures: you can use a normal HTTP header in a normal response, for a same-origin signed exchange.
I
We believe that privacy is just about the same as what you currently get over HTTPS. This is surprising, because a signature provides authenticity but not confidentiality; but the reasoning is that someone who's trying to get a victim's private information has to give them the link to the signed exchange, and so they know —
I
The
link
that
someone's
following
and
link
tracking
is
a
thing
they
can
follow
a
redirect
series
or
just
use
JavaScript,
to
know
that
someone
clicked
on
an
exchange
if
we
were
to
add
a
feature
to
discover
exchanges
through
some
side
channel
that
would
break
this
property.
So
we
shouldn't
do
that.
There
is
one
change
compared
to
HTTPS.
The
link
source
can
guarantee
the
targets
content,
instead
of
just
being
pretty
sure
it's
what
the
link
source
retrieved
from
the
from
the
target
server.
I
It's
a
question
whether
this
matters
I,
don't
think
it
does
next
slide.
A
bunch
of
architectural
risks
have
been
raised
so,
unlike
the
something
in
a
client's,
HTTP
cache.
If
a
user
can
load
a
signed
exchange
without
ever
connecting
to
the
origin
server
now
the
assigned
exchange
can
then,
if
it
can
execute
JavaScript,
it
can
ping
the
origin
server
and
say:
hey
I've
been
fetched
if
it's
just
an
image
it
can't.
I
This
is
very
similar
to
old,
HTTP
non
gos.
Caching
proxies
people
pay
CD
ends
to
do
this,
but
there's
a
question
of
whether
someone
whether
it
should
be
possible
to
opt
in
by
signing
an
exchange
to
someone
you
didn't
hire
to
to
cache
your
your
content.
It's
a
totally
new
proxy
model,
so
we've
had
normal
proxies
that
are
configured
by
the
browser.
I
There's a bunch of security risks. All the risks with the CERTIFICATE frame also apply to signed exchanges. It is possible to replay messages to clients: 0-RTT in TLS 1.3 and session resumption let you replay requests; this allows you to replay responses. There are some defenses in the draft against the attacks that opens up, but there are also other things — like JavaScript setting cookies — that we can't just block.
I
So
a
server
has
to
has
to
be
aware
of
that
when
they're,
when
they're
deciding
to
sign
an
exchange,
it
allows
downgrade
attacks
within
the
signatures
of
validity
window.
So
if
a
server
finds
an
XSS
exploit
in
their
JavaScript
upgrades,
the
JavaScript
pushes
out
any
website
an
attacker
can
notice.
The
change
in
the
website
take
their
cache
scientist
and
send
it
to
victims,
so
this
was
this
was
already
possible
if
the
user
caches
vulnerable
JavaScript,
but
now
the
attacker
can
force
the
victim
onto
the
vulnerable
JavaScript.
I
People could have opinions that we should make it bigger or smaller, and there's a way to refetch signatures, so a client that wants to check whether it's being downgraded could fetch that from the origin server directly. Signing oracles are defended against by requiring a new X.509 extension, so that you can't use a TLS signing key as the signing key for a package — so you have to at least claim that you don't have a signing oracle.
I
So the date range — the begin date to the end date — is seven days, but it is possible to sign one that starts being valid a year from now and is valid until a year and seven days from now. Okay, next slide. So Chrome is currently implementing this; we're doing a subset of the draft, and I've kind of copied that subset of the draft into a new draft that we can use for versioning.
R
Smaller — unconstrain it entirely; I see no reason for a validity window at all. If you're concerned that changes in JavaScript might indicate a bug, just require that it not be used in that case — you know, require that the identifier be changed for that particular type of resource. The value of being able to sign public documents that are not JavaScript far exceeds — you know, for the sake of privacy, it far exceeds the concern that they described. Thanks.
L
Eric Rescorla. So, I mean, first: thanks for attempting to ameliorate some of the horrifying security properties; the results are somewhat less horrifying. I have two underlying concerns with this proposal, one of which I suppose is technical, one of which I suppose is more philosophical. The technical one is: I've spent a bunch of time trying to reason about the security properties of this, and I still can't — and, you know, that makes me very uncomfortable. Certainly lots of terrible things can happen if people are not careful in the way they manage their headers.
L
So
you
know
having
this
having
something
which
is
a
which
is
a
statically
signed.
Object
appearing
in
the
origin
is
quite
concerning.
L
As
I
say,
we
could
probably
work
through
those
issues
in
the
case
of
a
sry,
caching
or
basically
same
origin
signatures.
The
order
of
origin
substitution
feature
strikes
me
as
something
which
is
unnecessary
for
a
large
number
of
use.
Cases
I
think
the
ITF
should
care
about
and
is
toxic
for
these
cases,
which
I
think
the
idea
probably
shouldn't
be
endorsing,
and
so,
if
we
were
to
adopt
this
I
think
we
should
really
focus
on
the
cases
that
are
not
do,
involve
Ward
substitution
and
see
how
that
goes.
Y
I'm not sure — did you have in mind to make that a static window? Because seven days is a maximum defined by the CA/Browser Forum; actually, most CAs make that window shorter, and we even often get requests to make it shorter still. And you also have a cache period, so a request could be valid for a couple of days only, maybe. Yes.
I
The
the
seven
days
is
just
kind
of
a
first
guess
at
what
the
right
the
right
maximum
length
is.
We
had
a
request
to
make
it
longer.
We
have
a
request,
a
possible
request
to
make
it
shorter
for
the
cache
headers
that
interact
with
the
signature
length.
We
we
think
that
we
will
stop
trusting
something
at
the
four
of
the
two
lengths,
depending
on
which
one
expired.
We
might
refresh
it
in
different
ways
that
we're
still
kind
of
working
out.
Okay,.
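The "stop trusting at the shorter of the two lengths" rule just described can be sketched as a simple minimum over the two expiry times. All names and numbers here are illustrative.

```python
# Sketch of the rule described above: an exchange stays usable only
# while BOTH the cache freshness lifetime and the signature validity
# window are unexpired, i.e. until the earlier of the two expiries.
def trusted_until(response_date, max_age, sig_expires):
    cache_expiry = response_date + max_age
    return min(cache_expiry, sig_expires)

# signature (7 days) outlives cache freshness (1 day): cache wins
print(trusted_until(0, 86400, 7 * 86400))       # 86400
# cache freshness (30 days) outlives signature: signature wins
print(trusted_until(0, 30 * 86400, 7 * 86400))  # 604800
```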
Z
Ben Schwartz. So you mentioned potentially having more aggressive verification, or more aggressive checking; I was wondering if you could talk just a little bit about what possibilities you see there. Off the top of my head, it seems like you could imagine something like: static content is essentially valid immediately, but maybe JavaScript can't run until after the signature is verified online; or JavaScript can run, but the page doesn't have an origin until the signature is verified, so it doesn't have access to the origin.
I
Are
those
are
possible?
We
haven't
figured
out
exactly
when
browsers,
should
fetch
the
validity
URL
to
double-check
the
the
idea
of
giving
it
no
origin
or
putting
it
in
a
unique
origin
until
it's
verified
is
difficult.
If
it
calls
API
is
that
access
storage,
because
then
you
kind
of
have
to
swap
out
its
storage
underneath
it
which
can
lose
data.
It.
S
Nikto
UC
Berkeley
I,
like
how
careful
you're
being
and
describing
the
security
properties
and
the
list
of
headers
that
might
indicate
that
this
is
stateful
I'm
curious
they're,
like
what
should
a
server
do
like
assuming
server
screw-up,
which
which
I
feel
like
is
a
safe
assumption.
What
should
the
server
do
after
they
screw
up
after
they
sign
something
that
was
stateful
but
but
somehow
got
through
those
checks?
How
would
they
on
screw-up
right.
A
There was a notion there — I'm not sure it's that solid yet. Okay — that was one of the things discussed, yeah. And so that's it for this. So thank you for that; that's very good. Like Patrick said, we're not considering adopting this now; that doesn't mean that that will not ever change. So I'd encourage you to continue talking to folks, both here and on the list, and if folks have concerns, or they support this, or whatever, it'd be great to hear about that. So thank you very much.
B
However, our next set of items are things that we think are ripe for the working group to consider adoption on. The first one is the HTTP spec revision — we talked about this in Singapore — and, before this presentation starts, I just want to sort of set the stage a little bit. You know, I've been instructed that I'm supposed to pronounce that "ter."
B
In any event: the discussion we had in Singapore, and a little bit on the list, scopes this as a restatement of essentially the HTTP semantic layer — the most important revision of the 7230 series of documents that we all know and love — the thing that 7540 refers to for the semantic bits, even though no one is actually sure where those are defined. And so that's one possible, very important outcome of this work; there are others, if we do choose to adopt this work.
B
I've
asked
Julian,
Marc
and
Roy
all
who
worked
on
the
first
set
of
while
they
were
in
the
first
set,
but
the
the
last
Restatement
HTTP
the
best
two
to
be
editors
on
all
the
documents
that
may
be
output
from
this
effort
and
the
question
of
how
many
documents,
I'm
sure
will
come
up.
But
so
that's
my
instruction
and
Julian.
You
can
drive
this
service
marketing
drivers
are
we
gonna
drivers
come
join
us
on.
G
Depending on how much we want to do, it will be something like five years since the last revision, which isn't that quick — so I think it's the right time to start work on that. And also we might actually hit the point in time where the RFC Editor starts publishing documents in HTML format, so people can actually look at a readable spec. Next slide.
R
The
only
real
changes
were
are
where
places
where
the
the
existing
RFC's
refer
back
to
the
prior
RFC's
261,
six
we're
changing
those
we
that
to
refer
to
the
RFC,
7230
31,
yet
cetera,
and
we
took
out
the
history
sections
that
are
referring
back
to
previous
RFC's
as
well,
so
other
than
that.
It's
all
exactly
the
same.
The
only
mechanical
differences
really
er
that
RFC
to
H
XML
to
RFC,
some
of
the
some
of
which
are
bugs,
which
have
already
been
fixed
in
XML
to
RFC.
There's.
G
Next
slide:
okay,
scope
discussion,
so
the
obvious
things
to
do
is
applying
errata
updating
references,
resolving
the
or
trying
to
resolve
the
issues
that
have
been
collected
on
github,
and
the
number
of
these
issues
went
up
from
30
to
50
between
IDF
99.
Until
today,
which
is
a
good
sign
and
minimally,
we
need
to
say
that
there's
also
HTTP
and
where
to
find
it.
So
that's
something
that
absolutely
needs
to
be
done.
G
I had a spec describing how a server would tell a client which content codings it would accept, and we recently had another request, for servers to be able to specify which media types in a request they would accept, so that could also go there. I also mentioned random access, because at some point we thought that this was just a clarification, or slight extension, of the range model; I'm not sure where we stand with respect to that right now.
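For readers following along: the server-advertised content-coding mechanism mentioned here is in the spirit of RFC 7694, where a server sends an Accept-Encoding header in a response to say which codings it would accept in requests. As a rough, non-normative sketch of how a client might consume such a header (the function names are mine, not from any spec):

```python
# Sketch: interpreting a server-advertised "Accept-Encoding" response header
# (RFC 7694 style) to decide whether a later request body may be compressed.
# Parsing is simplified relative to the full ABNF.

def parse_accept_encoding(header: str) -> dict:
    """Parse 'gzip, br;q=0.8, identity;q=0' into {coding: qvalue}."""
    codings = {}
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        coding = parts[0].lower()
        q = 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        codings[coding] = q
    return codings

def may_send_coding(header: str, coding: str) -> bool:
    """True if the server has said it accepts this request content coding."""
    codings = parse_accept_encoding(header)
    q = codings.get(coding.lower(), codings.get("*", 0.0))
    return q > 0.0

# may_send_coding("gzip, br;q=0.8, identity;q=0", "br") -> True
# may_send_coding("gzip, br;q=0.8, identity;q=0", "identity") -> False
```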
R
Yeah, I mean, certainly my intention, my goal, would be to not make any technical changes that would cause it not to be able to go to full standard, because there's really no reason after this to revise it again, and if we don't have a full standard for HTTP it's kind of stupid, because there have been many times when people have asked us to move it to full standard even with known problems, whereas we can actually fix all the known problems and just move it at that point. Martin, yeah.
C
These are at least, sorry, candidates for inclusion in the main documents; that doesn't necessarily conflict with your goal of going to full standard. Personally, I don't think the value of full standard merits special attention from us. So if we decide that something in here needs to go in there, it should just go in there and we proceed. Otherwise, it is our responsibility to maintain documents, not necessarily strive for perfection, because we know how well that works in practice. I've written a few things on this topic.
R
I'd agree with that, with a slight extension: we do have a responsibility to communities outside of just the IETF, in that they rely on us to provide something that we consider to be a standard, so that what they consider to be a standard can reference it at the same level.
B
Yeah, and technical bits, so thank you for that. I would actually like to clarify mostly what you've put up on these slides, which is great. So if people have comments, specifically about the timeline, the relation to the QUIC work and the QUIC working group, and whether or not we think that would impact this document, which it is not obvious that it would; and also, can you go back to the previous slide? I do want to get to Mike's comments there.
B
There's also the bullet here about splitting information specific exclusively to HTTP/1.1 into a separate document and relabeling everything else as just HTTP. I call this the mysterious semantic layer, the calling of it out, and I will make a chair comment that I think our ecosystem badly needs that contribution, so I would want that to be in scope. Personally, I have individual contributor comments on some of the other things, but I'm going to open the mics on these particular questions.
H
Yes, please. It would make referencing HTTP, separate from 1.1 and separate from HTTP/2, so much easier. And then there's the timeline: if we actually want the QUIC docs to be done by the end of this year, this isn't going to be done by the end of this year, and so we're going to have to sort that out. I assume we don't want to hold HTTP-over-QUIC for this, but then that would mean you're going to immediately replace what I'm referencing with something better, right? So.
A
Would
agree
so
justjust
I'll
respond
to
that
I.
Think,
first
of
all,
you
know
with
my
quick
head
on
the
intention
is
to
have
the
technical
work
done
at
the
end
of
the
year.
There
may
still
be
some
baking
after
that.
Having
said
that,
I
intend
to
put
a
fair
amount
of
energy
into
this
and
I'd
really
like
to
get
it
done
quickly.
I
do
not
want
to
spend
five
years
doing
this
I'd
like
to
spend
less
than
a
year
doing
it
I,
don't
know
how
you
guys
feel
right.
R
What we have right now: we already have the new text done, right, so it's squeezing it into the right space, which I could do this week, mm-hmm, and then going through the set of issues that have already been refined for the previous one, which, again, is not going to take the editor as much time. It may take the working group more time to figure out if we have working group consensus, but.
A
It took a long time because 2616 was in a pretty bad state, and we had to work through a lot of issues to clarify things and figure out where we stood; you know, the interop was really bad. That's not the case now: we're working with a much more mature document set, okay, if we do say so ourselves.
B
Done
before
2019
at
the
earliest
sure
that
meeting
said
schedule,
risk
is
something
you
know
everyone's
going
to
assess
for
themselves
and
part
of
the
schedule.
Risk
is
gonna,
be
the
engagement
of
the
working
group,
as
you,
you
know,
mentioned
whether
you
know
they're
gonna,
be
all
help,
progress
these
documents
and
get
the
reviews
done
so
before
we
get
to
the
adoption
questions
I'm
just
interested,
maybe
in
a
show
of
hands
of
people
who
feel
that
they
would
be
willing
to
be.
You
know,
involved
in
in
reading
these
documents
and
helping
push
the
process
forward.
R
Do I have time? Yeah, not much more. So I'll just go back to this: we were actually talking about splitting up the document, and we've talked about this various times. I mean, the history behind it: at the beginning of the previous revision process, I split up the document, 2616, into seven parts, mostly so that we could assign different editors to different parts and do things in parallel, but it turned out that we didn't actually go that way.
R
We
did
all
the
documents
in
sequence
together
and
there
wasn't
really
that
need
to
split
them
out.
It
was
useful,
however,
to
isolate
the
issues
in
this
case.
My
personal
feeling
right
now
is
that
we
could
move
all
the
semantics
documents
into
one
document
move
the
HCP
Architecture
from
what's
currently
part
one
of
the
messages
into
the
semantics
document,
and
so
have
one
HCP
semantics
document,
one
HTTP
cache
document
and
one
HTTP
messaging
document,
and
that
would
be
it
so
we'd
end
up
with
three
documents.
B
I would just note a certain tension there: when you have a smaller granularity, right, it allows us to revise documents. I know we don't want to do this again, but it's not implausible that we would rather revise them independently, and so, from a process point of view, I think that's really the strongest argument that resonates with me for multiple documents, you know, as an engineer.
A
I don't think it stands out as especially different. I think the one thing about it is that it's longish, and it is targeted at a separate implementer, right? That's the only thing, really. So that one works for me too. One document doesn't make sense to me, two makes sense to me, three makes sense to me. So.
B
That is appreciated, and I did actually want to talk about having a hum to adopt it. I wanted to get a sense of the scope of where we were going, but I do agree that, you know, the details are what the consensus of working groups are made of, and we'll make that happen after adoption; I think that's fine. So, are there any other comments that would like to be made before we hum? We've had a two-hour meeting about a hum.
B
You
will
hum
for
the
first
option
and
the
second
option
is
not
to
adopt
this
work
at
this
time
and
if
we
end
up
there-
and
there
is
an
alternative
I
guess,
we
will
explore
those
paths-
okay,
so
I'm
option,
one
which
is
for
adoption-
please
some
now
and
option
two
to
seek
another
path.
Now,
oh
that's
great,
very
clear.
I
R
God, a new meme has begun. One question, actually: do you want us to stay in the repo that I set up now, which is just in my private repo space?
H
There you go, all right. So we're talking about keeping the data leakage down, from putting the host that you're trying to talk to into your TLS handshake. I imagine if you've come to an IETF before, you've probably heard a session on this, because it's a topic that keeps coming up, and it's a hard problem. Next slide, yeah.
H
So,
in
a
lot
of
ways,
encrypted
S&I
has
been
kind
of
the
Holy
Grail.
It's
something
that
we'd
really
like
to
do
in
TLS,
and
there
have
been
some
interesting
proposals
of
way
to
do
ways
to
do
that.
I
think.
Is
it
Thursday
that
we're
going
to
talk
about
the
tunneling
TLS
in
TLS
proposal?
I,
remember
another
talk
where
we
could
use
the
hint
chickpeas
for
0tt
to
instead
put
the
s
and
I
and
the
one
RTT
flight,
and
there
are
lots
of
ways
to
approach
this
and
it's
because
whose
names
are
interesting.
H
They disclose some information about the person who's trying to visit the site, or about what their interests are, or what their plans are, and we would like ways to avoid leaking that data where we can. There are lots of places where that information leaks; another one is DNS, but SNI is an obvious place that people look, and it's a hole that we can plug. Next slide.
H
So we're trying to plug a lot of holes here, in getting this to be more private. DNS gives you the host name; we have DoH to help with that. SNI leaks it in the client hello. Well, we can use secondary certs to help with some of that, in that if you can open a connection, and then, once you have the encrypted context, say what host name you're actually interested in and get the cert under cover, that improves the privacy properties.
H
There,
but
that
requires
the
client
to
have
some
kind
of
innocuous
host
name
that
it
can
present
in
the
actual
handshake
that
will
still
get
at
a
valid
cert.
So
the
piece
that
we're
proposing
to
address
that
is
to
extend
all
service
to
allow
you
to
specify
here's.
The
alternative
I
want
you
to
use
and
when
you
connect
ask
for
this
host
name
in
the
TLS
handshake,
and
then
you
might
get
a
might
get
back
a
certificate
that
covers
both
host
names.
H
You might get back other certificates, just valid for the one you asked for, and you need to use secondary certs, but either way it gives you a path to actually approach an alternative with a more innocuous name that is visible on the wire. But, as Martin pointed out when I sent that draft out, there's a bootstrapping problem, in that you still have to talk to that host first to get the Alt-Svc record.
H
So we've got a couple of scenarios here that we'd like to make work. One of the easiest ones to deal with is if the host has many domains, only some of which are sensitive; they're on the same server, so you've got all the certs right there. It's even more interesting if you have a wildcard cert: if you have a certificate for *.example.com, then you can effectively conceal every subdomain that you have, just by telling clients that they should connect asking for www.example.com, or even the wildcard name itself.
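A minimal sketch of the wildcard-certificate idea just described. The name-matching below is deliberately simplified (single-label wildcard only), and the function names are illustrative, not from the draft:

```python
# Sketch: if the server holds a cert for *.example.com, a client can present
# an innocuous subdomain as the SNI and the same cert still validates,
# concealing the real destination on the wire.

def cert_covers(cert_pattern: str, host: str) -> bool:
    """Very rough wildcard-certificate name check (one label only)."""
    if cert_pattern.startswith("*."):
        suffix = cert_pattern[1:]             # ".example.com"
        return host.endswith(suffix) and "." not in host[: -len(suffix)]
    return cert_pattern == host

def pick_sni(real_host: str, cert_pattern: str, innocuous: str) -> str:
    """Use the innocuous name only if the same cert covers both names."""
    if cert_covers(cert_pattern, innocuous) and cert_covers(cert_pattern, real_host):
        return innocuous
    return real_host

# pick_sni("secret.example.com", "*.example.com", "www.example.com")
#   -> "www.example.com"
```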
H
Just
just
pick,
something
that
is
not
going
to
give
away
with
your
real
destination
is,
and
it's
important
to
note
that
you
still
need
the
clients
to
validate
the
first
version
of
the
straf.
T'
said
that
the
client
could
completely
ignore
what
was
in
the
TLS
name,
shake
and
just
ask
for
the
real
one,
but
the
failure
with
that
is
that
you,
then
you
have
an
active
attacker
problem
that
an
active
attacker
could
catch
you
to
in
TLS,
handshake
min
in
the
middle
and
then
see
what
secondary
cert
request
you
made.
L
Just trying to work through that, Mike, if you don't mind. So, can you scope out the various...
L
Sorry, so I do a lookup on, say, canopy.io, and it resolves to a CNAME at github.io, and then it says you could go to innocuous.github.io, and I already have all that cert in cache, so I just stay on that connection, right? Okay, thank you. I'm not entirely sure I love that, but I think it's okay.
H
So
the
the
SMI
extension
is
really
simple:
it's
just
one
one
more
parameter
on
the
old
service
record
that
says
what
s
and
I
you
suggest
the
client
should
use
and
sends
alt
service
extensions
are
optional
to
understand.
If
a
client
doesn't
understand
this,
then
they'll
just
go
use
the
alternative
like
normal.
They
don't
get
the
privacy
benefits,
but
it
should
still
be
a
valid
alternative.
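A sketch of what consuming such a parameter might look like. The "sni" parameter name follows the discussion here, but the parsing below is simplified and is not the draft's normative syntax:

```python
# Sketch: parsing an Alt-Svc field value carrying a hypothetical "sni"
# extension parameter. Alt-Svc extensions are optional to understand, so a
# client that ignores "sni" still gets a usable alternative (just without
# the privacy benefit).

def parse_alt_svc(value: str) -> dict:
    """Parse e.g. 'h2=":443"; ma=3600; sni="www.example.com"'."""
    parts = [p.strip() for p in value.split(";")]
    protocol, _, alt = parts[0].partition("=")
    out = {"protocol": protocol, "alt_authority": alt.strip('"')}
    for p in parts[1:]:
        k, _, v = p.partition("=")
        out[k.strip()] = v.strip().strip('"')
    return out

alt = parse_alt_svc('h2=":443"; ma=3600; sni="www.example.com"')
# alt["sni"] == "www.example.com"; a client that does not recognise "sni"
# would simply skip that key and use the alternative as normal.
```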
H
Ben's draft takes an Alt-Svc record and drops it in DNS, so that you can go do a lookup for it and then just have it, without ever having spoken to the host before. So that helps deal with the bootstrapping problem, where you have to connect in the clear at least once. It also gives you some nice collateral benefits, like being able to do HTTP over QUIC, or opportunistic encryption, without having to do the cleartext or TCP path first. And there's one more question.
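Illustratively, the shape of the idea is roughly this (not the draft's actual record syntax or RR type, just a sketch of the information carried):

```
; Hypothetical Alt-Svc-in-DNS record (illustrative only): a client resolving
; example.com learns the alternative, including a suggested SNI, before ever
; contacting the origin in the clear.
example.com.  3600  IN  TXT  "h2=\":443\"; ma=3600; sni=\"www.example.com\""
```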
B
A
question
so
I
I'm,
generally
sympathetic
and
I
think
this
is
this
is
good
work.
Alternate
services
has
a
couple
things
just
say
about
sni
and
I'm
wondering
if
we
we
think
this
dovetails
without
or
we
need
to
update
alternative
services.
It
says
that
a
client
must
not
use
a
TLS
based
alternative
service
unless
the
client
supports
s
and
I.
H
I think the Alt-Svc-in-DNS bit could actually be a performance win, and so I would expect that to be done very broadly, because you have good potential to step up to QUIC or HTTP/2 on an alternative, or an alternative that's closer to you. Hiding your SNI, you may or may not be able to do zero round trip, and so I think it's going to be an implementation-dependent choice whether you value more consistent 0-RTT usage versus sometimes paying the extra round trip to hide the SNI. So, the issue.
P
With
a
sni
value
in
DNS
that
you
send
didn't
get
the
proper
certificate
is
that
S
and
I
can
be
replayed
and
someone
else
gets
the
certificate
that
you
that
will
expose
CX
well
site
you're,
connecting
to
there's
ways
around
that.
If
you
can
give
Akey
that
you
mix
into
the
TLS
connection
as
well,
but
that's
more
more
modification
into
the
TLS
layer,
mm-hmm.
Z
Yes, so I think we cover both cases. If you can get the certificate you need without disclosing any information and without an additional round trip, then that is what will happen; and if that is not possible, then you will get the wrong certificate, which does not disclose what you're trying to hide, and the client will pay an extra round trip in order to get the certificate that it actually wants. We're going to have to cut the lines after the people who are in line.
V
Just to the Host header point: it has been a while since I looked at these deployments, but at least a few years back, there were deployments where the SNI check was used to make sure that they were in the correct set of cryptographic context, but the Host header was still being used to steer to one or more servers which were behind a load balancer.
V
So
I
would
imagine
that
we
would
have
to
look
at
that
to
make
sure
that
it
was
okay
for
us
to
to
make
sure
that
they
changed
in
lockstep,
which
I
think
totally
makes
sense.
Given
what
you're
doing
here
but
I
think
that
the
language
of
the
current
alt
service
draft
might
need
a
minor
tweak
to
make
that
no
longer
forbidden
bytes
back
so.
V
I was actually thinking about it in particular in the case of the Alt-Svc records in DNS, where you haven't gone and gotten the Alt-Svc from the thing directly yourself. But I'll have to think through it a little more; I do suspect that either modern practice has already moved away from this, or we're about to.
C
Mom
told
in
a
very
brief
comment:
acne
you're,
looking
at
using
s
and
I
for
validating
ownership
of
domains.
This
would
be
a
landmine
for
them,
so
it
had
to
be
a
little
bit
careful
with
that
and
the
other
one
was
what's
the
relationship
between
the
expiration
time
that
you
have
in
the
record
here
and
the
are
set
things
to
think
about
not
to
not
to
respond
to
just
just
consider
those
things.
L
Must match. So, just to stay with what Ted said: it's already the case that Host header and SNI can mismatch, because that's what connection coalescing requires, and, in fact, I haven't read the spec in a while, but is it the case that the first request has to match the SNI? Like, say I connect; say I'm trying to fetch...
L
You
know
from
a
column,
be
calm
and
I
erase
the
connections
and
then
I
and
I
connect
it,
but
the
Viacom
is
higher
priority
and
so
I
connect
to
a
comm
discover
those
are
the
videos
for
be
a
calm,
be
calm,
I,
think
I
can
just
shove
the
be
not
common
connection
right
away.
I
want
to
write
so
so
it's
already
the
place
that
they
don't
have
to
match.
I
think
would
be
really
good.
On
Christian
we
don't
did
a
nice
job
of
trying
to
like
map
out
the
entire
like
threat
model.
L
Space
and
I
have
to
have
more
this
drafts,
and
maybe
it's
like
totally
go
awesome
about
this,
but
a
really
good
to
define
what
the
assumptions
are
like
in
particular,
I
assume
you're,
not
trying
to
conceal
the
fact
that
it
did.
You
know
that
the
funding
server
is,
in
fact
a
freaking
server
for
these
guys,
because,
of
course,
that's
hopeless.
It
you're
right
so
be
good.
H
That draft does talk about one possibility of having an HTTP fronting server, and concludes by saying there are discoverability issues with it: how would a client know what fronting server to use? These drafts set out to solve that particular case, while Christian's draft goes on to outline a TLS-generic solution that requires a little more knowledge and cooperation from the server side.
A
Let's keep it simple, at least to start. We'll do hums: the first will be if you believe that these documents, as a pair, are ready for adoption, and if you do not, the second hum; and if we have the second hum prevail, perhaps we'll explore a bit further. So, if you think that both of these documents are ready for adoption, please hum now.
A
We have a toy implementation to prove out the spec algorithms and make sure that they actually work, and I believe that it's suitable for prototype implementation. Next slide. So this is an example, just to refresh your memory: the Variants header explains, or conveys, the request headers that are helping to form the secondary cache key, along with the possible values for each request header, and Variant-Key says which of those values was used to select this particular response. Next slide.
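As a rough illustration of the header mechanics just described. This is a simplification, not the draft's normative algorithm; real Variants processing also runs the per-header content-negotiation algorithms to pick the selected values:

```python
# Sketch: how a cache might read the Variants and Variant-Key response
# headers to build and match a secondary cache key. Header syntax here is
# simplified relative to the draft.

def parse_variants(value: str) -> dict:
    """Parse 'Accept-Encoding;gzip;br, Accept-Language;en;fr' into
    {header-name: [available values]}."""
    out = {}
    for item in value.split(","):
        fields = [f.strip() for f in item.split(";")]
        out[fields[0].lower()] = fields[1:]
    return out

def variant_key_matches(variant_key: str, selected: list) -> bool:
    """True if a stored response's Variant-Key matches the values that
    content negotiation selected for the current request."""
    stored = [f.strip() for f in variant_key.split(";")]
    return stored == selected

variants = parse_variants("Accept-Encoding;gzip;br, Accept-Language;en;fr")
# variants == {"accept-encoding": ["gzip", "br"],
#              "accept-language": ["en", "fr"]}
# A response stored with 'Variant-Key: gzip; en' is served when negotiation
# for the request selects ["gzip", "en"].
```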
A
The previous proposal in this space, which the working group adopted and then parked after we kind of stalled on it, was Key. Key is different in that you basically ship an algorithm in a response header that says how to determine a secondary cache key for a particular resource, whereas with Variants that algorithm is in the spec of the headers themselves, so it's fixed. This has a number of trade-offs: it's a little more rigid, but it's a lot more reliable for the server to produce the right response.
A
The big bit of feedback we got on Key, though, was that it seemed like a big old foot gun that people could shoot themselves with, and also there was a lot of implementer reluctance to adopt Key; they were concerned about the unbounded nature of it, whereas Variants is much more constrained, and from the implementers that I've talked to, there seems to be a lot more excitement about it. Next. So, should we adopt it? I think it's ready for adoption.
C
Martin Thomson. This is not me; I'm relaying something from the Jabber room, from Tom Peterson, who says it appears the spec either denormalizes or writes off the question of whether quality-value indicators, which are common in Accept, will be used. Is that specific to the Accept header? Correct, yes. And I imagine there are... is it used in other ones? No.
B
So the room that has read the draft can make that decision, or we need to go to the list. Can I have a show of hands if you've read the draft? Maybe ten; they went down almost as fast as they went up, but on the order of ten. Do you feel like that working set is enough to help you make this decision? So.
B
Well, that was, you know, a modest hum for the first and nothing for the second, so I think we'll confirm that on the list, but we have another document.
A
Maybe one question: we've parked Key. Should we abandon that, or keep it parked?
V
A minute left. No, it was actually a question to you. You spoke feelingly at the secdispatch meeting earlier today about possibly having people in this room look at some of the work that had been brought before it. Do you want to give pointers? Well, you have everybody with you. So.