From YouTube: IETF102-HTTPBIS-20180717-1550
Description
HTTPBIS meeting session at IETF102
2018/07/17 1550
https://datatracker.ietf.org/meeting/102/proceedings/
B
All right, welcome to session one of HTTPBIS, of two sessions we're doing this week. This is the Note Well; it is Tuesday afternoon, as you've probably noted well by now. These are the terms that govern your participation, IPR-wise, in the IETF. If you do not understand them, ask your chairs and your ADs for more.
B
My understanding is that someone would like to reorder two of the presentations, and that's probably something we can accommodate, but this is the agenda for today. We're going to start by going through each of our active drafts, those the working group has adopted and that aren't at least off to the IESG, and then we'll talk about some proposed work. We have a couple of informational presentations after that, regarding how Push, which was part of the HTTP/2 (RFC 7540) standardization, is playing out in the real world. Would anyone like to make any changes to the agenda? Yeah.
B
There's a session somewhere else, and this thing is split in two; okay, there's some overlap. So let's keep Alt-Svc and SNI, then HELIUM, then CDN loop prevention as the plan of record for now, but it's possible we'll be able to move through some of the active extensions quite quickly. Some of them have very short reports to go on, so I suggest we get started on that and see how we're doing. All right. Okay.
B
All right, so we're going to start with Variants; get your slides up here, floor's yours.
A
Yeah, something like that. It's been coming along pretty well. Next slide. I think it feels like it's getting close to done, or at least ready for some sort of feedback/implementation loop, and I think that's the big question in front of this draft: I've been talking to folks, and we have some implementer interest, but we don't have any actual implementation yet, and so the question at hand is: do we want to go ahead and publish this?
A
Yeah, so since I wrote these slides I've been talking to some folks. First of all, let me ask the question: does anyone in the room intend to implement this in the near future? Okay, so I have a hand up. I knew about that hand; that's why I asked the question. You never ask a question you don't know the answer to. So I think what we're going to do is work through the remaining issues.
A
Leif has said that he thinks this is something he can implement as a plugin in Traffic Server, with Brian's help perhaps, and over a very reasonable amount of time.
A
I'm not going to talk about the time scale that you gave, because I don't want to make a commitment for you, but a reasonable time scale. Then, based on that feedback, and maybe if other folks get interested, we'll take a look at where we're at after that, if that makes sense. Okay, great. And that's of course the cache side, a cache implementation; the other half of the implementation here is generating the headers so that the cache can take advantage of them.
A
So it's basically a set of alternative variant keys for the representation, for the response that the Variant-Key header occurs in. I think the feedback we got on the list just yesterday, from Yoav, was that he really liked that, because it's more expressive: it allows you to avoid situations where, say, there's a matrix of four different options that the Variants header builds and three of them can be satisfied by one response; this gives you the ability to say okay.
A
There are other values, so that you get better cache efficiency. Is that about right, Leif? Okay. And so I think this doesn't need much discussion; I just wanted to check to make sure that nobody else had heartburn with that. Sorry, it looks like what you see here on the bottom, Variant-Key: that says, basically, if we adopt this, the Variant-Key semantics would be that the particular representation these two headers occur in identifies the response with the values 2, 300 and off (for DPR, Viewport-Width and Save-Data respectively), but this response can also be used for the (1, 600, off) and (2, 600, on) combinations. The use case driving this (Yoav talks about this a lot) is when you have interdependent headers; DPR, Viewport-Width and Save-Data are good examples. Often you'll have different combinations of those that could be satisfied by the same low-quality image, for example, or high-quality image, and you don't want all those duplicates in your cache.
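The matching described above can be sketched in a few lines (the header values are the illustrative ones from the discussion; the parsing is a simplification, not the draft's actual algorithm): a cache checks whether the request's values for the varied headers match any combination listed in the stored response's Variant-Key.

```python
# Sketch: matching a request against a stored Variant-Key.
# A value like "2;300;off, 1;600;off, 2;600;on" lists several
# (DPR; Viewport-Width; Save-Data) combinations that one stored
# response can satisfy.

def parse_variant_key(value):
    """Parse a Variant-Key-style header into a list of value tuples."""
    return [tuple(part.strip() for part in item.split(";"))
            for item in value.split(",")]

def response_satisfies(variant_key, request_values):
    """True if any listed combination matches the request's values."""
    return tuple(request_values) in parse_variant_key(variant_key)

stored = "2;300;off, 1;600;off, 2;600;on"
assert response_satisfies(stored, ["1", "600", "off"])
assert not response_satisfies(stored, ["1", "600", "on"])
```

This is how one response avoids being duplicated in the cache: three of the four combinations resolve to the same entry.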
B
I'm out of slides, so I will go back to your first slide: publish now, or wait for more experience. Note that this is a standards-track document, and one cache vendor has provided a lot of cooperation, but it hasn't implemented it yet. Again, I think I can give you another cache in the not-horribly-distant future.
A
Excellent. I was going to point out that I've made some new friends from Comcast this week, running one of these caches, who are implementing; I'm wondering if they would be willing to suggest an intent to experiment with how this works.
A
That would be good. So, BCP56bis: we've been working on this for a while now; we've gone through whole cycles of editorial additions and reviews. I feel like we've gotten fairly broad review of it, both from folks in the working group as well as folks outside. I regularly (including yesterday) get emails from people in the IETF that I've never met before, as far as I'm aware, giving me very nice feedback on it, saying "we like this, this is very helpful; have you thought about this?"
A
Or "can you change this to make it more clear", or "what do you think about this". And so I feel like a lot of eyeballs have seen it, and the rate of change on the document has been slowing down, so I'm gaining confidence personally that it's getting to a stage where it's ready-ish to ship. I think the big open question on it right now is: I have an editorial issue open to wait for HTTP core, so we can reference those nice, well-structured core documents in this, rather than referencing the 723x series. We'll talk about core tomorrow, but I think the feeling is still that we're going to finish core around the end of the year, so I'm personally, as editor, happy to leave it open for a while, maybe incorporate some more feedback as it dribbles in, and get even wider review for this document, which is always a good thing. But I'd be curious to hear what other people think about whether there's an urgency in shipping this document or not.
F
Martin Thomson. I think having it is more important than publishing it at this point, and it would be really nice if this went out alongside those core specs, because I think that really sends a message that this is part of the core; in a way, the way you use the protocol is almost as important as the definition of the protocol itself, I think. And so my preference would be to try to hold it until then, as long…
B
So this is intended as a BCP, right, so we want wide review in the room; but as positive data points, there have been about 60 issues opened so far on GitHub, almost all of which have been resolved, from a pretty wide range of parties, including people that don't participate regularly in this group. So it's getting that kind of review. We recently added a privacy considerations section; I want to make sure people look at that.
B
In my experience with talking to people working in other groups: there was recently interest in this from the SAAG, and the DoH group was also very interested in this, right? It would be nice to have something not just to guide their definitions of their protocols, but actually to be able to normatively reference, to say, you know, "the security implications or the privacy implications of the HTTP portions of my spec are reflected in BCP56bis", right? So folks, you might want to read it with an eye towards: does it answer that question?
J
Paul Hoffman. So the SAAG message that you saw, you may have misread. Basically, he was told to put BCP 56 stuff into a document that is blatantly breaking BCP 56, and it says so. It says: screw how they did it, we just wanted to shove our data over HTTP; oh, but there's that document over there that tells us not to do it, so…
J
At least now, I mean, that document will go forward; it's been waiting for almost a decade. But basically it's specifying something that predates, possibly even predates, BCP 56, and it's a fuller specification of it, and it's by someone who couldn't care less. He's over in your continent area, so you could chat with him.
A
Speaking to the privacy considerations part, and to that general question: I think that writing that kind of text, to have it available to reference somewhere, is a great idea, and we should be doing that. Personally, my initial feeling is that it probably belongs in core. The idea behind BCP56bis is that it's a guide for people who are writing new specifications or applications using HTTP, and it's also a guide for people reviewing them, especially area directors. So, you know, when that "screw it" specification gets to the IESG, they can compare and make their own decision. I don't think there's currently anything in there that I could see an application referencing, unless it's "oh, we did do this and we're pointing to it informatively", maybe, or "we didn't do this and we're pointing to it informatively"; but it's not really a referenceable document in that sense.
A
Just like the original BCP 56; I don't know anybody really referencing that one either. But I'm not stuck on that; if we want to change the nature of the document, that's fine. It's just that, to me, it feels like it kind of goes in core, because you're going to be referencing HTTP anyway if you're using it.
K
I'm going to go with "no, it's dumb, it does not work". There you go. All right, so there's actually been a fairly broad set of changes since London. If you go up to the next one: for the first part, in draft -00 they were sending the frames that were related to the certificate, and to the requests, on the control stream, but the things that were describing, for this particular stream, "I need this cert" and "now I can proceed, because this request has been answered"…
K
…those were happening on-stream. There's a slight problem with that flow, though: it's not fatal in HTTP/2, but it will be fatal when we take the same thing to HTTP over QUIC, which is that the request with the headers likely closed that stream in that direction, so requiring that you then later send a frame on the closed stream is not so nice. So in draft -01, everything has been moved to the control stream.
K
But if you have multiple certificates, you can pick which certificate to associate with each request, and now it's also not confusing if the server turns around and asks you for a different certificate after it's seen your unsolicited one. Next slide. Also in the camp of trying to be more explicit: instead of having clients pick an unused stream and say "I can't use this stream until you send me the certificate that I want", which is kind of hokey, I have to admit.
K
Most of the big changes in the doc, though, have been through integration with the exported authenticators draft in the TLS working group. The original version of the draft that we adopted, and that we still had back in London, I think, was that everything was being done in the HTTP layer: the CERTIFICATE frame carrying actual certificates, and then you had a certificate proof that contained a signature of an exporter by that certificate.
K
The cert request had OID filters, and having all of this at the HTTP layer was kind of ugly. So, moving to draft -01: there's no more certificate proof frame; the CERTIFICATE frame carries an exported authenticator, and TLS deals with all of that. CERTIFICATE_REQUEST carries an exported authenticator request; all of that is encapsulated at the TLS layer, keeping HTTP out of it. In draft -02, we got to take advantage of a new feature in exported authenticators that was added during their first working group last call, which is an authenticated refusal.
K
That is, the ability to send "I do not wish to provide you the certificate you've asked for". Previously, a use-certificate just didn't include a certificate ID; now you actually send along a cryptographically signed "I am refusing to send you that which you've requested". So we're shifting more stuff into the TLS layer and doing less at the HTTP layer, and this is goodness for separation of concerns.
K
The last major change that we've had so far is a way to detect a man in the middle sooner, because if you do have a TLS man in the middle on your connection (unfortunately they do still exist), under the old model you wouldn't discover that until you'd done all the work of generating an exported authenticator and then it didn't validate. Now we actually stick an exporter value in the SETTINGS, instead of just saying "1" to turn it on, and if you see that the exporter doesn't match the one from your setting, you know it's not going to work.
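As a minimal sketch of that early check (the function and variable names are illustrative, not from the draft): each side advertises an exporter value in its SETTINGS, and if the peer's advertised value differs from the one derived from the local TLS connection, there are two distinct TLS sessions on the path, i.e. a man in the middle.

```python
import hmac

def mitm_suspected(local_exporter: bytes, peer_setting: bytes) -> bool:
    # If the exporter the peer advertised in SETTINGS differs from the
    # one our own TLS stack derives, the two endpoints are not on the
    # same TLS session: something is terminating TLS in the middle.
    # compare_digest avoids leaking where the values diverge.
    return not hmac.compare_digest(local_exporter, peer_setting)

assert not mitm_suspected(b"\x01\x02\x03", b"\x01\x02\x03")
assert mitm_suspected(b"\x01\x02\x03", b"\xff\x02\x03")
```

The payoff is exactly what the speaker describes: on mismatch you skip the exported-authenticator crypto entirely.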
K
Don't bother attempting to do the crypto to generate an exported authenticator. Next slide. So, open issues that we want to talk about: first off, there's the question of how all these frame types get bound together. The draft has improved at explaining this, but it's still kind of a tangle here. Next slide.
K
The CERTIFICATE_REQUEST has a request ID. CERTIFICATE_NEEDED, because there can be multiple of those for various streams, points to a certificate request by request ID. You answer CERTIFICATE_NEEDED with a USE_CERTIFICATE referring to the same stream ID, saying which certificate you're using to answer the request for that stream, and then that uses a cert ID to tie it to a CERTIFICATE frame that has an exported authenticator. Now, the interesting piece is inside the exported authenticator.
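The ID indirection just described can be sketched as follows (the dictionary names and string payloads are illustrative; the real frames carry binary exported authenticators): a stream is tied to a request ID on one side and a cert ID on the other, and each ID resolves through its own table.

```python
# Sketch of the frame-binding chain: CERTIFICATE_REQUEST carries a
# request ID; CERTIFICATE_NEEDED points a blocked stream at that
# request ID; USE_CERTIFICATE answers on the same stream with a cert
# ID that ties back to a CERTIFICATE frame carrying the exported
# authenticator.

requests = {7: "EA-request-7"}        # request ID -> exported authenticator request
certificates = {3: "EA-for-cert-3"}   # cert ID -> exported authenticator
needed = {5: 7}                       # stream 5 is blocked on request ID 7
used = {5: 3}                         # stream 5 is answered with cert ID 3

def resolve(stream_id):
    """Follow both chains for a stream: its request and its answering EA."""
    return requests[needed[stream_id]], certificates[used[stream_id]]

assert resolve(5) == ("EA-request-7", "EA-for-cert-3")
```

Note the gap the open issue is about: nothing in this chain makes the CERTIFICATE frame itself name the request ID; that link lives inside the opaque exported authenticator.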
K
Well, likely you have one cert that covers all of those names, and it would be nice not to have to generate three separate exported authenticators for the same certificate, all responding to the same request, or to different request IDs. So we allow you to say: I have already sent you this certificate; use it to satisfy that request.
K
Admittedly, that doesn't line up perfectly, but it's operationally a better choice. So there is kind of a question of: do we want to allow cross-responses like that? I mostly think that we do, but it's not entirely clean. And then there is an open issue suggesting that the CERTIFICATE frame should explicitly contain the request ID.
F
Wow,
okay,
Naughton
Thompson
I've
always
been
a
little
bit
sort
of
unhappy
with
the
situation
where
we
sort
of
ask
a
question
and
I
mean
indirectly.
Of
course,
we
we
have
to
go
asking
TLS
to
give
us
the
answer
to
the
question
that
we
should
be
able
to
answer
directly
ourselves.
If
that
means
that
you're
coding,
the
information
I
think
I
can
be
comfortable
with
that.
F
That's goodness, particularly when we're talking about something that's as grossly inefficient as certificates; but not being able to know which request was being answered without going all the way into the opaque blob that we just got out of TLS is quite annoying, so I'm inclined to say: just duplicate it.
K
Will
push
back
on
one
piece
that
you
said
there
at
the
end
yeah,
which
is
the
request
that
is
being
answered
for
in
terms
of
this
stream,
is
the
request
that
was
sent
for
the
stream,
but
that's
that's
not
necessarily
the
same
as
the
request
that
generated
this
export
of
authenticator
in
the
first
place.
That's
that's.
F
Don't
care
about
the
ones
that
it's
the
ones
at
the
top
I
mean
it's.
Yes,
it's
all
in
directed
throughout,
through
the
request
on
the
stream
and
all
that
business.
But
fundamentally
you
have
to
be
able
to
there's
some
magic
that
goes
on.
So
you
have
multiple
certificate
requests
outstanding.
At
the
same
time,
mm-hmm
the
HTTP
layer
doesn't
know
which
one
goes
with,
which
until
it's
gone
on
talk
to
the
tail
I
smile,
which
I
don't
like.
K
That's essentially what a USE_CERTIFICATE frame is: you send a CERTIFICATE frame that has the exported authenticator once, and then, on every stream on which you want to use it, you send USE_CERTIFICATE with just the cert ID. But then you can follow the cert ID to the exported authenticator and, if you want, you can find the request ID of the request that triggered that exported authenticator to be sent.
E
Sure. Could you go back a slide? Thanks. So USE_CERTIFICATE doesn't have a request ID in it? (Correct.) So I guess the question would be: could you have a frame that's like CERTIFICATE plus request ID, and also a pointer, like "here's the exported authenticator, over there"? So, effectively explicitly answering the request, to say: yes, I'm answering your request; the answer is over there. Okay.
K
Those can be separate certs; there's no Certificate Transparency link between the two. The attacker still has to get an induced navigation, and you can still revoke the cert when you see it in Certificate Transparency, but you don't have the breadcrumb to try and figure out who did that. Now, how strong a breadcrumb is that going to be, considering it was probably a throwaway domain anyway? Probably not very, but it's something where this gets a little bit…
K
…that says this is okay. Which kind of makes me sad from the deployment perspective, because it means all existing certs, and all existing ways of issuing a cert that you have set up, are not usable with this, and that's going to slow adoption. And also, in terms of an opt-in: we already have the quote-unquote owners of the primary connection, who want to be able to control whether secondary certs are used on their connections, and now we're also going to have to have opt-in from the ones being coalesced onto those connections.
K
We
know
that
opt-ins
are
hard
to
get
uptake
on
double
opt-ins
exponentially
more
so
and
just
from
the
optics
perspective.
If
we
have
an
oil
that
says
it's.
Okay,
if
you
hijacked
my
circuit
nobody's
going
to
do
that,
so
we
need
to
at
least
have
some
somewhat
useful,
looking
mechanism
associated
with
it.
If
we
do
this
I,
don't
think
a
blanket
in
or
out
is
necessarily
the
best
choice.
K
So
next
slide
I
got
an
interesting
suggestion
during
a
side
conversation
on
this
that
instead
it
might
want
to
be
a
list
of
primary
domains
that
this
can
be
secondary
to.
So,
if
you
do
steal
the
search
from
images
macys.com
fine,
but
it
can
only
be
used
secondary
to
dub
dub
dub
dub
macys.com.
So
unless
you
also
have
that
cert,
who
cares-
or
at
least
you
know-
maybe
not
who
cares?
But
you
are
no
worse
because
you
can't
use
secondary
certificates
with
it
unless
you
have
that
primary
service.
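A minimal sketch of that acceptance rule (the function name and list representation are illustrative; the draft would encode this in a certificate extension): a client only accepts a cert as secondary if the connection's primary certificate covers one of the names in the secondary cert's "secondary to" list.

```python
# Sketch: a stolen images.macys.com cert is only useful to an attacker
# who also controls one of the primary names it is scoped to.

def secondary_allowed(secondary_to: list, primary_names: set) -> bool:
    """True if the primary cert on this connection covers an allowed name."""
    return any(domain in primary_names for domain in secondary_to)

stolen_cert_secondary_to = ["www.macys.com"]

# Legitimate deployment: primary connection is to www.macys.com.
assert secondary_allowed(stolen_cert_secondary_to, {"www.macys.com"})
# Attacker holding only the images.macys.com cert gains nothing.
assert not secondary_allowed(stolen_cert_secondary_to, {"evil.example"})
```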
K
Well,
honestly,
I'm,
not
wild
about
having
this
requirement
at
all,
but
I
can
understand
the
security
argument
that
we
need
to
do
something
to
mitigate
a
making
attacks
easier
and
if
we
need
to
do
something.
This
is
the
best
suggestion.
I've
heard
so
I
was
like
comments
on
whether
we
do
this
or
none
of
it.
K
And I think there is some value: if you are confident in the security of this cert, that it is probably not going to get stolen (maybe it's in a hardware security module and there really is no way to get the private key out), then fine, put a full wildcard in and go ahead and use it. If you think the certificate is at risk, you can tighten the scope and reduce your exposure.
O
Yeah, I was going to say basically the same thing; this is Richard Barnes. Tagging to the entry-point domain seems really awkward for things like CDNs, and something like this tagging thing seems way more sensible. I wonder, though: it seems like at that point you're kind of hinging some of your security on the difficulty of adding that tag inside the certificate. I wonder if there's something to hook it to, like something that's access-controlled…
O
…for that tag; like, say, an ACME account ID, or some other thing that's difficult to get, that might be external to this specification. It might be, you know, the rules that the CA/Browser Forum puts out for what you put in that field. But it might be worth thinking about what the security properties are, depending on how difficult it is to get that binding identifier. Yeah.
N
A different suggestion; Subodh Iyengar. So one of the options might actually be, as a straw man here, to hash the certificate itself, the contents of the primary certificate. That means it not only binds it to ownership of the domain itself, but also ownership of the private key associated with the domain, the public key of the domain. So that means you have to not only get a certificate associated with that hostname or tag…
N
A tighter restriction, yes. So if we're doing some restriction, and we're doing it pretty tight, then presumably the property we want here is that when you revoke the primary certificate that you have, or you destroy the key, or something, and that certificate is no longer usable, the authentication properties of the things that are secondary to it are also no longer usable as well.
K
I
think
the
property
that
Ben
was
suggesting
here
was
that
it's
difficult
to
know
which,
which
of
several
certificates
will
be
the
first
one
to
trigger
the
connection,
and
therefore
you
want
to
be
able
to
use
any
of
them
as
the
primary
and
the
others
are
secondary
and
if
you're
tying
it
to
a
particular
private
key.
We
don't
have
that
problem
right.
N
So
in
the
you
in
this
one,
you
have
an
Auror
case
right,
it's
a
list,
any
one
of
them
can
be
satisfied,
so
you
can
create
a
graph
of
authentication
with
an
aware
as
well
as
similar
to
how
do
you
do
it,
but
the
hash
you
do
the
list
as
well,
but
you
cannot
it's
kind
of
do
you
have
to
have
a
root,
and
after
that
you
can
have
leads,
but
it
does
restrict
you
in
that
way.
You
have
some.
O
It has to verify, in addition to whatever it would normally do, that the applicant controls the domain they're putting in the tag field, in the tag extension. So you'd be getting a certificate that says the server owns customer.com as well as cdn.com, and then you'd use the cdn.com one for everything else. I think that seems implementable, and I think it gets the right security properties, the security properties you're after here, I see.
A
So, structured headers: we've been working on this for a while now. We have, I think we talked about this last time, an implementation in JavaScript, and a partial implementation in Chrome, because web packaging has adopted structured headers for some of their stuff. We have a partial test suite going; contributions are very welcome. I think, overall, it feels like the spec is getting mature.
A
There are two things along those lines that I thought we ought to talk about here. One is that we've received some feedback from potential users of structured headers that it'd be useful to have an ordered dictionary: so "a=b, c=d", where you can access that as a dictionary, as a hash, but also retain the order. Whether that's important enough to get into the spec or not is a question of judgment.
A
I
do
notice
that
the
newest
version
of
Python
makes
order
dictionaries
the
default,
at
least
for
C
Python,
which
was
kind
of
interesting
coincidence.
So
you
know
we
could
we
could
take
a
couple
different
approaches
to
this.
We
could
require.
The
exist
exist
in
dictionary
object
to
the
order,
so
a
language
like
Python
would
be
able
to
just
use
its
normal
dictionary
data
structure,
our
language
that
doesn't
support
order.
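The Python point above can be shown concretely (this is a simplified parser, not the structured-headers algorithm): since CPython 3.7, the built-in dict preserves insertion order, so a dictionary member list like "a=b, c=d" keeps its order with no special data structure.

```python
# Sketch: parsing a dictionary-style header value while keeping member
# order. Python 3.7+ dicts preserve insertion order by specification.

def parse_dictionary(value):
    members = {}
    for item in value.split(","):
        key, _, val = item.strip().partition("=")
        members[key] = val
    return members

d = parse_dictionary("a=b, c=d")
assert d == {"a": "b", "c": "d"}   # usable as a hash
assert list(d) == ["a", "c"]       # order retained
```

A language whose native map type does not preserve order would need a different structure, which is what the next part of the discussion is about.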
A
For
example,
you
can
have
a
really
ugly
per
M
list
where
the
the
identifiers
you
know
like
if
the
premise
is
basically
like
a
list
of
mime
types,
you
have
a
thing
and
then
you
have
an
optional
set
of
parameters
on
it.
You
could
say:
okay
thing,
a
is
just
you
know
a
marker,
then
you
put
the
real
parameter
after
it.
Then
B
is
the
marker,
so
you
retain
your
ordering,
but
you
have
this
key
value
pairs,
it's
it's
kind
of
ugly,
but
it
could
work
with
the
existing
spec.
F
This is, I think, the sort of default that we want to have; but it would be fine for individual definitions of header fields to say they're using this structure but attribute no semantics to the order in which the items appear. I think that's probably the right way to do this. I tend…
A
I think that most languages would need to do that. And the driver for this is that we got some feedback from Anne van Kesteren saying that he'd really like to have identifiers as payload, so that he could potentially back-port structured headers onto existing syntax and make it work a little more elegantly.
Does anybody have any feelings on this one, either way? To be clear, I was the driver for taking them out, and I think this is reasonable feedback, to add them back in.
A
Right now an identifier, if I remember correctly, is a token plus a few characters. I think there's been a little discussion about adding a few more characters. We need to keep it safe and extensible, but also a bit flexible.
B
Excellent. I know who you are; thank you so much for your contributions. Thanks, Mark. Kazuho, we're doing cache digest next. So we've got, I think, three pretty quick updates coming here, and then we'll get into the proposed-work session. For folks to be prepared for the ordering there: I'm going to suggest that CDN loop prevention get the first slot, simply because that's the only one in which the presenter himself has the conflict, rather than folks who are interested in the work.
C
So this is about cache digests, and there haven't actually been new updates. We issued the new version, -04, which was to remove the support for stale digests, and we've done that; but there was push-back on the GitHub pull request, that for a REST API that serves a list of files it would be beneficial to have a stale digest, so that the server can push the stale ones when the index is being requested.
C
So we might reconsider this, but we don't need to hurry, because of the open issue that's pending, which is about changing to yet another digest algorithm; ultimately, we need something adopted by a browser that works well in the browser. So Yoav is continuing his research to find the best algorithm, and I think we would rather wait for him than try to figure it out without having any implementation. So that is the status of cache digest.
A
So we just opened a working group last call on Client Hints. Ilya submitted the new version of the draft; we'd held it open for a while because we wanted to make sure that it would coordinate with the Fetch specification at the WHATWG, as well as the HTML specification there, so that it was aligned with what we did, and I think the folks involved will agree it's settled down enough that we can go ahead and ship this. So we opened a three-week…
A
…last call. So please have a look at that specification; if you have any feedback, send it to the list. Especially: this is an experimental draft, and we called it experimental because we only had one browser who was committed to implementing it, and so we're looking for interest in implementing it from other folks, whether browsers or server-side. I know we've heard from some folks on the server side; and also, you know, support for publishing it, and interest in the general use case that it is designed for.
R
On the first try, amazing. So, RFC 6265bis: there's not a whole lot of progress since the last meeting; I have not had time to work on it. Happily, though, we have John Wilander from Apple, who's agreed to hop in as an editor to help me out, so I'm very hopeful that we'll have actual progress to discuss in a couple of months. There are a number of bugs that have been filed, a lot of them around the SameSite attribute, which is excellent, because people are actually looking at it.
R
We've
got
about
50
50
some-odd
of
those
tests
ported
over,
and
we
aim
to
get
the
rest
of
them
ported
over
now
that
what
platform
tests
has
added
support
for
multiple
registerable
domains,
which
took
quite
a
bit
of
time,
I'll
note
a
couple
of
metrics
just
because
I
think
they're
interesting
around
this
time.
Last
year,
the
same
site
attribute
was
present
on
0.01%
of
setcookie
operations.
That
chrome
user
saw
that's
up
about
5x
over
the
last
year,
so
we're
up
to
about
0.05
percent
of
secresy
operations
using
the
same
site
attribute.
R
Likewise,
the
double
underscore
host
prefix
was
at
0.005
percent
a
year
ago
and
it's
hovering
around
0.01
percent
right
now,
so
about
a
2x
increase.
The
double
underscore
secure
prefix
was
hovering
around
nothing
a
year
ago
and
is
up
to
about
0.003
percent
over
the
last
month,
so
we're
starting
to
see
more
usage
of
the
things
that
are
defined
in
this
new
specification.
K
So I'm sure we're all familiar with the various approaches to encrypting SNI. I know, for me, saying something is the easiest way to commit it to my mind; when I was a kid there was a game where they would tell you something and then do charades, and I kept saying the thing they would tell me; I just couldn't stop myself. Web browsers are doing the same thing: they say it even when they're supposed to keep it secret, and so we're trying to find ways to reduce that.
K
All
right,
so
the
proposal
in
a
nutshell
for
is
to
add
an
S
and
a
parameter
to
an
old
service
record
to
say
for
a
given
alternative.
You
can
also
suggest
what
s
and
I
value
you
ought
to
use
when
connecting
to
that
onto
that
surface,
and
so
you
can
pick
something
innocuous
that
you
know
is
going
to
be
in
the
same
certificate
or
you
can
pick
a
certificate.
K
The
primary
certificate
on
the
connection
has
to
be
somehow
related.
It
might
be
the
host
name
that
they
gave
you
the
s
and
I
for
so
then.
So
whatever
the
innocuous
host
name
is,
it
might
be
the
host
name
of
the
origin,
which
is
what
alt
service
currently
says,
or
it
might
be
the
the
host
name
of
the
alternative
that
you
listed
in
the
alt
service
parameter.
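As a hedged illustration of the proposal (the parameter name "sni" and the hostnames are assumptions for this sketch, not settled syntax): an Alt-Svc entry might carry the suggested innocuous SNI alongside the alternative, something like:

```
Alt-Svc: h2="alt.example.net:443"; sni="innocuous.example.com"
```

A client following this entry would connect to alt.example.net but put innocuous.example.com on the wire in the TLS ClientHello, then ask for the certificate it actually wants inside the connection.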
M
Where are you going with this, I think; having all three of those options seems like it's really hard to reason about. Like, if the claim is that this says "go to these guys", right, the Alt-Svc says "go over here", why am I retaining the original one? That's confusing semantics.
A
If I can insert myself behind you in the queue: my confusion is that Alt-Svc requires the original origin, you know, the origin hostname, to be covered by the cert, but it doesn't require the alternative to be covered by the cert. And I can understand the SNI being added; that seems like, okay, we're going to do something in this SNI's name, we should probably own it in some fashion. But why add the alternative to the mix?
E
So, Ben Schwartz. I agree with your analysis. Essentially, this draft introduces a variety of modes, or a variety of combinations where it can be used, and not all of those modes provide the full level of protection against a probing attacker, basically. So yes: if the threat model that you're proposing is the threat model of the site, then they probably shouldn't use this in that mode; they should probably make sure that they return certificates that cover the SNI.
N
Subodh Iyengar: I'm wondering why we're mucking around with the SNI itself. The SNI has several restrictions: it must be ASCII characters, since we dropped the other encodings, I forget, and it must be a domain name. So it seems like all this other stuff — these problems are being created by the fact that you're multiplexing the SNI. Would it be possible to just eliminate the SNI usage altogether and use another extension instead of the SNI to carry the replacement value?
K
Right, so the primary point of this design is that from the external wire image it looks like a connection to the innocuous thing overall, not obviously using Alt-Svc, but then, when you get inside the connection, the client knows it actually wants some other certificate, or some other hostname on that certificate. Eric.
U
Okay, and then the clarifying one: is there even just a simple use case where you could do this just as a wildcard? Yes — there are quite a few sites that have a big covering wildcard cert, something like star dot github.com, and just being able to put underscore-wildcard github.com in the Alt-Svc is a very simple use case that could add a lot of value without needing to get fancy.
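To make the wildcard idea above concrete, a header along these lines might be sent (purely illustrative: the `sni` parameter is a proposed extension being discussed here, not part of RFC 7838, and the names are made up):

```
Alt-Svc: h2="alt.example.net:443"; sni="_wildcard.github.com"; ma=3600
```

The client would then send an SNI matching the wildcard certificate rather than the origin's own name.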
E
Okay, so we're switching. There are two drafts here — that's one of them; the other one is DNS Alt-Svc. In principle this is separate: it's applicable, it's functional as a draft, whether or not you have any of this SNI-munging capability in Alt-Svc. It just doesn't necessarily cover all the same use cases. DNS Alt-Svc basically just gives you a way to distribute Alt-Svc through the DNS. It does the simplest thing you could imagine, but there are some interesting subtleties here.
E
One of them is that you can fire off a DNS query for the Alt-Svc record and not wait for the response. If you want, you can just race ahead, and essentially the draft says that it's up to client policy whether the client considers it mandatory to get that information before starting the connection.
E
Well, go back — I'm not done yet. A couple of new things have happened here. We switched the order of the prefixes in response to a request from Martin at the last meeting, and thanks to a very detailed review by Shumon Huque in DNSOP, there are now very clear semantics for what to do if you have multiple of these things in an RRset in the DNS, and that's set up so that you can use this for load balancing within the DNS.
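For orientation, a DNS Alt-Svc RRset of the kind being described might look roughly like this in zone-file form (a hedged sketch: the exact presentation format was still in flux in the draft at this time, and the names and values are invented):

```
; two alternatives published as one RRset, usable for DNS-side load balancing
example.com.  300  IN  ALTSVC  "h2=\"alt1.example.net:443\"; ma=300"
example.com.  300  IN  ALTSVC  "h2=\"alt2.example.net:443\"; ma=300"
```

Which record the resolver hands back (or which the client picks) selects the whole parameter set, including any SNI choice tied to that alternative.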
E
In case anybody thinks that we're being unfair: in my view these drafts are compatible, in the simple sense that a client could do both, or one, or the other, a server could do both, or one, or the other, and everything would work fine. But I'm really personally interested in ways that we could potentially combine them and basically get the best of both worlds. Yeah?
M
So this may be unfair, and perhaps inappropriate or irrelevant, but we're not in competition here — unless you're planning to send a letter to the TLS working group's chairs saying "don't do ESNI" — and I'm not sure we need this comparison. I mean, the relevant question is: I don't think it's bad to have DNS Alt-Svc; I'm not sure it's good; I don't know; I'm not sure it's bad either.
M
I think I'm less persuaded that the Alt-Svc SNI is on a par with that. If you're in the environment where what you're doing is trying to go to server A and you do a DNS resolution, it's not clear to me that having an SNI punt followed by a second connection is superior to the encrypted SNI: it's clearly slower and it requires an extra round trip.
M
There's going to be, like, one cover domain, and so there's going to be one domain you're going to be showing, so it's going to have relatively similar masking properties. And also — you're totally right, by the way, that it's obvious what you're doing — but, like...
M
Basically, when that one domain is the cover domain, it's kind of obvious that almost everybody there is behind the cover domain. And also, frankly, one thing that made people sad about the various domain-fronting debacles over the past, like, three months was that someone's name got used as the cover name, and those guys got put at risk of being selectively blocked. And so, I mean...
T
Daniel Kahn Gillmor: if you could put back up your comparison — I wanted to point out there was a line missing there, I think. Yeah. So in particular, ESNI seems to be designed to permit a client-facing server to be distinct from the actual origin, and I'm not sure that's the case for Alt-Svc SNI or DNS Alt-Svc. Maybe you could comment on that.
E
Sure, so I'll mention two things here. One of them is that, because DNS Alt-Svc ties the Alt-Svc parameters, including the choice of SNI, to the load-balancing selection — that is, the choice of RR — it makes it possible to load-balance or switch across destinations that behave differently.
E
You can't do this with an A record: there's no way otherwise in the DNS to say "if you find yourself load-balanced onto this particular destination IP address, then use this set of Alt-Svc parameters, but if you find yourself load-balanced onto a different IP address, then use a different set of Alt-Svc parameters."
U
Erik Nygren, Akamai. I'm decoupling these — three things, I think. I think Alt-Svc DNS is extremely valuable, there are a lot of really good use cases for it, and I'm very positive on it. In fact, I think that most of my deployability concerns with ESNI go away — or many of my deployability concerns with ESNI go away —
U
— if we use the DNS Alt-Svc record as the thing to hold either a reference to the ESNI keys or the ESNI keys themselves. Like the multi-CDN case: it solves that too, and it also means that you can keep some of those TTLs potentially independent of the Alt-Svc SNI one. It may be that ESNI makes a lot of the use cases for the Alt-Svc SNI parameter go away; it may be that the wildcard-label one in particular is one that remains.
E
Let me just respond to that briefly and say I think there could be a possibility there where we say that, basically, the secondary-certificate case is covered by ESNI — so we use ESNI for those cases — and then we only need SNI replacement for, essentially, the cases that don't add a round trip.
B
So that was just a short co-chair comment, which is: remember that we can consider these things independently, but we chose to present them together because they obviously have implications for each other if you choose to adopt them both. Then, as an individual: a lot of plus-ones to what Erik said. I think DNS Alt-Svc itself is very interesting; I'm interested in being able to know what protocol negotiation is going to look like before the first round trip.
B
That has a lot of really nice latency optimizations for the browser use case, so I'm interested in those. The SNI stuff — I'm very uncomfortable with how it impacts the core rule of Alt-Svc, which is that Alt-Svc has no impact on how you interpret the origin, and there are some interesting complexities there around how the certificates are handled. I understand the actual rule isn't violated, but it muddies the waters a bit — it muddies the discussion about the certificates — so I'm less enthusiastic about the working group taking that part on.
W
We have an internet draft in, and this would be very interesting for an origin to redirect authenticated traffic around a mitigation point that may be congested. We met last week to discuss what we were considering, and this would actually be an interesting way for an origin to redirect a client to an edge network that can, using some lightweight mechanisms, validate the client before it's forwarded on to the origin. So we're very interested in this and in seeing it progress.
V
All right, so let's get back in the queue, please. The critical question here really is: if you think that the DNS human readability here and the less frequent maintenance will interact with people's attempts to use the DNS for redirection, then I think this turns into something where it's at best a wash. And if you believe that that's because everybody's going to use, you know, IP anycast behind the specific names you're going to put in here, I kind of get that that might be the case.
M
I guess now I'm clear on what those points were supposed to mean. I think the assertion you're making is: if you want PFS for the ESNI, then you have to regularly change the keys — correct? — and if you don't care about PFS for the SNI, then you don't need to rotate the keys. Right now PFS is desirable, but it's not doing the whole job. Okay, like I said, I'm just trying to figure out how you envisioned this looking as a privacy measure.
M
I guess, you know, I'm trying to figure out what your threat model of how this works is. Is your model the same one we're floating: that basically everybody on the CDN is going to use Alt-Svc to redirect to the cover name?
M
One RTT turns into two RTT, yes. I don't see how you get away from having those names; I don't see how you get away from having the names outside the cert decision. Wildcards, like — so, you know, take Cloudflare, where you have n names in the cert, right? Under what conditions is it safe to put some name in the cert, like, you know, to pretend — what do you do?
E
If this is true — so, first of all, ESNI is strictly superior in this case; I agree with that assessment. The thing I would say is that this is not as impractical as you might think, because there's a small number of sites here, and the choice is: either it's disclosed that the visitor is viewing my site, which is potentially dangerous in different ways, or accept an extra round trip.
M
Again, I'm not trying to draw comparisons between mechanisms; I'm trying to analyze whether this mechanism is actually a plausible mechanism. And I mean, it seems like, yes, as you say, you could make that work, but I don't see any scenario which doesn't involve a nasty two round trips. I mean — sorry — I see one, which is some enormous domain which nobody is willing to censor, combined with...
E
Yes, but there are other threat models to consider. One of them is: I am the only domain on my IP address. There is, in fact, no way for me to be confidential, but I can at least avoid explicitly leaking my identity on the wire; instead I can zero my SNI and then try to basically hop IP addresses, so that the IP address is all that's left to identify me. Right? These are not cryptographic threat models, but they are real threat models.
A
I'm going to interject here: we're not going to make a decision about this today, and we're over time. Okay, so I think this is going to be a great hallway conversation — okay, good, an endless conversation. So we wanted to just get a sense of the room. We want to get a sense of the group: who is interested in continuing this discussion? We're not talking about adopting documents, we're not talking about any decision — just who is interested in this discussion continuing. I'm assuming that that's a yes, right?
B
Because this is the second time we've worked through this material, we want to see if there's interest from the working group to continue, or if we just kind of need to move on in a different direction. So we're going to do two hums, right? One, presumably, on the simpler one, DNS Alt-Svc — see how much interest there is in that — and then, you know, separately, and I guess gated on the first one, interest in the Alt-Svc SNI.
A
No, because we're just gauging interest. Yeah, the interest is everywhere. Okay, thank you very much. So next, CDN-Loop. Yes, yep — sorry, we'd closed the queues. Thank you.
E
Content delivery networks are organized as a reverse proxy for websites, often TLS-terminating. Because customers can configure a CDN to point to an origin, and this origin can be anything, you can end up using CDNs in layers — one in front of the other in front of the origin — or you can have them pointed at basically any IP address. In order to prevent a customer from configuring a CDN with a reverse proxy that points back to itself through another service, they often implement specialized headers.
E
Basically, the majority of the top-20 CDNs are affected in some configuration or another. The loop can be two or three or four CDNs long: as long as you can configure one of them to strip out the dedicated loop-prevention header, you can force them into an HTTP loop, and this paper has some very interesting graphs and experimental results on that. So how are you supposed to solve this? Well, in HTTP there's a header called Via, which is meant to indicate that a proxy has forwarded an HTTP request.
E
This Via header — there's an example here — lists the HTTP version and then a canonical name of what the proxy is. You can coalesce if you see the same proxy multiple times, and you basically just append what you see on top of this; you shouldn't combine entries that have different protocols. But in any case, this is supposed to be the solution. Next slide, please.
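A minimal sketch of the append-and-coalesce behavior just described (the proxy names are hypothetical, and this simplifies the RFC 7230 rule, which only permits combining consecutive entries that share the same received-protocol):

```python
def append_via(via: str, protocol: str, pseudonym: str) -> str:
    """Append this proxy's entry to a Via header value, coalescing an
    immediately repeated (protocol, pseudonym) pair into one entry."""
    entry = f"{protocol} {pseudonym}"
    if not via:
        return entry
    entries = [e.strip() for e in via.split(",")]
    if entries[-1] == entry:   # same proxy seen again back-to-back
        return via             # coalesce: don't duplicate the entry
    return f"{via}, {entry}"

# A request passing through two proxies, the second one twice in a row:
h = append_via("", "1.1", "proxy-a.example")
h = append_via(h, "1.1", "proxy-b.example")
h = append_via(h, "1.1", "proxy-b.example")
print(h)  # 1.1 proxy-a.example, 1.1 proxy-b.example
```

The point the speaker is making follows directly: anything a customer configuration strips from this header defeats the mechanism.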
E
So in practice Via is overloaded. In various web servers — IIS 6, IIS 7, nginx and Apache, which by the published counts makes up somewhere near the majority of all web servers, and of web servers behind a CDN (it's very hard to measure that number, but it correlates reasonably well with the public-facing numbers) — there's an assumption that a Via header indicates a proxy that does not support compression, i.e. an older HTTP proxy. So, basically, all the compression-related fields are ignored.
E
So if you, as the CDN, send a request to an origin with the Via header, it will reply back with a response that is not compressed, unless it's explicitly configured to turn on compression. Next slide, please. So the proposal for this is a very narrow and dedicated request header called CDN-Loop. It has very similar semantics to Via and is meant as a replacement for Via that is actually practically deployable and can be used to prevent this. Next slide, please. All right.
E
So the requirements here are that conforming CDNs should add a value if they are reverse-proxying data, and they must not remove this header. You add to it, you don't ever delete it, and as long as everyone agrees, then this header can be used as a common loop-prevention mechanism. Next slide, please.
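The add-but-never-remove rule above amounts to a simple check at each CDN's ingress. A minimal sketch (the CDN tokens are hypothetical, and the header handling is simplified relative to the eventual specification):

```python
LOOP_HEADER = "CDN-Loop"
MY_TOKEN = "examplecdn"   # hypothetical identifier for this CDN

def process_request(headers: dict) -> dict:
    """Reject a request that has already passed through this CDN;
    otherwise append our token and forward. Existing entries are
    never removed, only appended to."""
    existing = headers.get(LOOP_HEADER, "")
    tokens = [t.strip() for t in existing.split(",") if t.strip()]
    if MY_TOKEN in tokens:
        raise RuntimeError("loop detected: refusing to forward")
    headers[LOOP_HEADER] = ", ".join(tokens + [MY_TOKEN])
    return headers

h = process_request({})                        # first pass through us
print(h[LOOP_HEADER])                          # examplecdn
h = process_request({LOOP_HEADER: "othercdn"}) # arrived via another CDN
print(h[LOOP_HEADER])                          # othercdn, examplecdn
```

The scheme only works if every participating CDN honors "must not remove," which is exactly the point debated in the following exchanges.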
E
There's some wording in the Forwarded spec that may be a little bit ambiguous — I wasn't able to interpret it fully — but it basically says, in Section 4, that you can remove previous Forwarded headers. That breaks the required semantics of what we need for preventing loops. The other proposal is the Max-Forwards header field from HTTP/1.1: you could set a Max-Forwards field and decrement it every time you go through a hop, so eventually you would decrement down to zero. But this field is mostly oriented towards TRACE and OPTIONS, and its use in GET is not something that is widely agreed upon or implemented. And that's basically it. So this is the proposal: it's a custom header, it has a dedicated use case, but it is one that is useful against a very practical attack. So, ready for questions.
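The Max-Forwards behavior just described can be sketched as follows (a simplified model, not tied to any particular proxy implementation; per RFC 7231 the field is only defined for TRACE and OPTIONS, which is the speaker's objection):

```python
def forward_hop(headers: dict) -> bool:
    """Return True if this proxy may forward the request onward,
    decrementing Max-Forwards on the way; False once it reaches zero,
    at which point the proxy must respond itself."""
    raw = headers.get("Max-Forwards")
    if raw is None:
        return True                 # no limit set: always forward
    remaining = int(raw)
    if remaining <= 0:
        return False                # hop budget exhausted
    headers["Max-Forwards"] = str(remaining - 1)
    return True

h = {"Max-Forwards": "2"}
hops = 0
while forward_hop(h):
    hops += 1
print(hops)  # 2
```

Note the weakness relative to CDN-Loop: with no header present there is no limit at all, and intermediaries that don't implement the field simply pass it through.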
F
Martin Thomson. I made some comments earlier about the privacy aspect of this, but looking at the alternatives here, the alternatives are far worse, because Forwarded says "yeah, I forwarded it for that guy" and specifically identifies them, and Via is pretty similar on that front as well. So I think, as long as you're using some sort of generic opaque identifier that only the CDN itself is going to consume, then this seems about right. It's unfortunate that we have to do basically the same mechanism as another one, but, you know, welcome to the Internet.
Y
It's much more generic than that. It could also be used inside a single CDN, right? Say you do cache hierarchies, child-parent proxies, that sort of stuff — and your example kind of shows that, where you put the host field in as a parameter to one of those entries — but I really think it would be better if the spec were generalized such that it can do both cross-CDN and intra-CDN detection.
E
I think we already have mechanisms for intra-CDN — I mean, this is why there are loop-detection headers used inside of CDNs — and generally, if you understand your own infrastructure, you will not get into this situation. This is really about multiple independent configurations from customers that can strip headers. If you're inside a single CDN and you have a complicated set of proxies and some of them are configured to strip the looping headers, then you really have a misconfiguration in your CDN.
B
I will note from the chair seat that some flavor of this comment was pretty common in the thread in which you introduced this work. I will also note that it was kind of nice to see, you know, 25 entries about someone proposing new work, so there is some interest in the space, which is great. My question to the authors would be: if the working group were to adopt this...
A
Mark Nottingham, as author. I'll say what I had to say before I answer that question specifically, because I think it will inform it. This is a very specific problem. I know we all, as engineers, have this urge to generalize and to make things more broadly applicable, and that's admirable, but, you know, as Nick said, we have CDNs where customers can configure them to strip or add headers very flexibly.
A
That is a feature that we all like to support and that we need to support, and if we reuse Forwarded, or we reuse Via, or we reuse something that can be used by other intermediaries, then it becomes not a specific, targeted mechanism for "how do we avoid loops between us and Cloudflare and Akamai and everyone else," but something where there's ambiguity about whether or not it's going to work. We really need this to work; the attacks here are serious.
A
So, to answer your question — without having talked to the other authors beforehand — my sense would be that if we can't get to consensus on a targeted mechanism here, I'd probably want to take this somewhere else, because it's more important to me that this works, that it's simple for us to implement, and that we don't have any special handling around a header that a customer might want to touch for other reasons, than it is to have a general mechanism. Kazuho.
C
The must-not-modify requirement is a very good thing, because most popular web-application-programming interfaces are designed like WSGI — they all work in that way — and that's what was causing the issue of the header getting dropped. So it's very good to say that there is a specific header that must not be exposed to a web-application programming interface where it could be modified, but rather is handled by the server itself. So I think having this header is a very good programming interface.
U
Erik Nygren. A big +1 to what Mark was saying about the value of having a specific header for these semantics. On the question of how much to generalize it — whether to also cover the intra-CDN case — some of that gets proprietary enough that it could bog us down forever. I think the value of having something standardized is to really work through the issues of how different CDNs interoperate, and if a CDN wants to go and include extra hops within this...
U
That's up to them, but I think the key thing is getting that between-parties piece worked out.
K
Mike Bishop. I will echo both of the previous comments: I think the real novel and useful piece here is that it's a header that must not be removed. What is causing all the other mechanisms to fall down is that they get removed, and the customers want to remove them. I hear the concern that if you use it for in-domain things that customers might also care about, then customers might start demanding the ability to remove it.
K
I'm not as worried about that if it's just defined up front as must-not-remove. And I don't think there's actually anything that breaks if you allow a CDN to kind of internally say: "oh, we have effectively three sub-CDNs within how we handle a request, so we're going to add three tags." Okay, that doesn't break anything. But the important thing is that you don't use it in any way that might incent a customer to challenge your "must not."
I
Evans from Comcast. Internal to the CDN, we can use whatever we want — we have lots of options, and in fact the existing solutions work quite well — but again, we need something that must not be removed, and making it simple, making it straightforward, making it not useful for a whole bunch of other things does in fact incent people not to remove it, and should help with compliance. So I am very much in favor of looking at this work.
X
I don't understand your proposal — I would propose to use the Forwarded header instead of CDN-Loop, right? Like, saying in an additional draft — a sort of RFC — that you must not remove this header doesn't mean that everybody will respect that, right? If you want to be realistic about this.
A
Mark Nottingham again. I think the real value here is that it's not only must-not-remove, it's that it's for an incredibly specific purpose. So if you're not interested in that particular purpose, then you don't really have an incentive to remove it. If it's used for other things, you might have another reason — one we don't know about right now — to remove it, and that's why reusing Forwarded is not a great idea, in my opinion: people are already using it for other things, which we don't even know about.
A
If it's carved out for the most specific thing possible, then there's less likelihood of accidental reuse or diverging use cases, and that's why this is so incredibly specific. And so, you know, if you want the properties of Forwarded, use Forwarded — that's great, it's already there — but those aren't the properties we're looking for here. We're looking for a different set of properties, which are a little bit subtle, and that's probably why this is a bit of a back-and-forth.
A
Stepping back, I would ask the working group to consider: we have on this draft three CDN vendors, who are all pretty highly engaged in this process now. I think this is the first time we've done something specific to CDNs — usually our engagement is about more generic things. I would love to get a sense — maybe not here and now, but for the working group to start thinking about it.
B
Now you can judge a hum for your own document. Okay, so we will do, I think, one hum here about whether or not there's interest in us issuing a call for adoption. That's something we're going to have to do on the list no matter what, but if we get a strong indication here that we're interested in working on this document — which is a proposed standard, so it's an obligation of the working group to spend their time and energy on —
B
— you know, advancing it quickly and correctly, we'll issue that call for adoption. Okay, so the caveat I would have here is that, if we choose to adopt this document, we're going to adopt it specifically within the scope of the CDN question, and if that is not an outcome that's acceptable to you, you should hum against it at this point. So, those in the room who are in favor of issuing a call for adoption and working on the CDN-Loop draft, please hum now.
F
So, Julian says: "I don't get why it's less likely that somebody strips the new header field as opposed to Via," and Roy says: "Via already works; all you need to do is define a CDN-specific pseudonym and allow those to be removed by CDNs." I think these are points we'll address during the...
G
I had a lot of slides that I tried to compress down, and so I need to compress time further — so forgive me if I rush through this a little bit. I'm here today to talk about network tunneling: whether there's space, effectively, for solving the problem of UDP tunneling, mainly for HTTP over QUIC, and, if we're going to go to that length, whether we want to expand the problem and solution into something more generic like IP.
G
So there are two drafts here that I'll cover. One is called HiNT, which is something I prepared to look at the general problem and solution space, and the other is called HELIUM, which is a draft by Ben Schwartz that was presented at DISPATCH earlier in the week. I'll explain these in a few slides, but first of all I just wanted to frame the discussion and kind of baseline on how tunneling works today. So, next slide please.
G
So what we have here is HTTP/1.1 — don't pay too much attention to the version, just imagine it's okay, and it's not h2, which I'll come on to soon. This is not transparent proxying: this is an HTTP/1.1 client on the left trying to issue a request to the server on the right, and being configured to go via a proxy. So you can see here —
G
— there are two TCP connections formed, and the proxy takes on the role, or the responsibility, of forwarding things on, and it may be able to filter requests or enforce some policy there. So that's okay, but we don't live in a plaintext world now. So, if you go on to the next slide: we've got here HTTP/1.1 over TLS over a proxy. What we need to be able to do is create an end-to-end TLS tunnel, and, you know, this has been specified.
G
We have a method called CONNECT that we pass a name to — in this example, the example.com server — and that controls the creation of the TCP connection from the proxy to the server, and the client can then operate the end-to-end TLS context and issue its request there. This is typically configured with something like an HTTP proxy variable or similar. Just to highlight, on the right: this is kind of the protocol stack from the client's perspective of things. So if we've gone to the next slide —
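The CONNECT exchange just described looks roughly like this on the wire (hostnames illustrative; after the 2xx response, the proxy blindly relays bytes, so the client and server can run TLS end to end):

```
CONNECT example.com:443 HTTP/1.1
Host: example.com:443

HTTP/1.1 200 Connection Established

[... client <-> server TLS handshake and application data flow through the proxy opaquely ...]
```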
G
— this is HTTP/2 over TLS. You'll notice it's not too dissimilar: we still have a client that's able to issue a CONNECT request, here over TCP. You'll notice that it's still an HTTP/1 proxy, so actually we were able to bridge versions here and still use this HTTP-based initiation mechanism to create an end-to-end TLS context that we can then upgrade to HTTP/2 — we negotiate that using ALPN or whatever. You'll also notice the addition of a yellow box, which here is an HTTP/2 stream.
G
So this indicates that for a request-response exchange we are consuming a single stream. Next slide please. This one gets pretty complicated, so I appreciate people maybe aren't so familiar with QUIC, but QUIC effectively inherits the HTTP/2 definition of how CONNECT works. So in this case, on the left-hand side, we've got a UDP association between client and proxy. This is theoretically possible.
G
I'd love to know how many actual deployments of this there are. How you discover that proxy is interesting — do you use something like Alt-Svc, or do you set it up using some proxy PAC file or something like that? But regardless, you would issue a CONNECT request on a QUIC stream, and that would reserve that stream to then carry all messages from the client to the proxy, which would then get unbundled from the stream and forwarded on via a TCP connection.
G
So what we have here is a single QUIC context plus a TLS context in the same UDP association, and we have streams within streams. That QUIC stream is a reliable byte stream; it can be affected by head-of-line blocking, so any multiplexing of the HTTP/2 streams within that TLS session would be affected by the QUIC stream's head-of-line blocking. So you don't necessarily get some of the benefits of multiplexing, but you do get the ability to connect out to the Internet, which is possibly more valuable. So, next slide please.
G
So this got me thinking: how can we do the same for QUIC from the client to the server? Can we create a UDP association from the proxy to a server? Looking around and doing some research, there was no kind of standardized way to do that via an HTTP proxy. There could be some options here in terms of, say, TURN, or SOCKS5 UDP mode — I'm sure those are used in some cases.
G
But hypothetically, what might be neater or nicer — if you've gone to the next slide — is to have something very similar to the TLS handling case that would allow us to do end-to-end QUIC tunneling. You can see there are red question marks there: is that a CONNECT method, is that some other new HTTP-over-QUIC extension? And this is where I began thinking about the problem space. So if we go on to the next slide: the draft, HTTP-initiated network tunneling (HiNT), is a generalization of CONNECT-based tunneling.
G
It's this concept of converting either an entire HTTP connection — effectively stealing the TCP connection out from underneath the feet of HTTP — or some part of it, i.e. the streams, into something that can effectively be a TCP, UDP or, if there's interest, an IP tunnel. That document presents the concept and some design considerations in this space. So, if we want to design a solution: does it need to cover multiple HTTP versions, or is this something that could just be for HTTP over QUIC?
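As one illustration of this design space, a UDP variant of CONNECT might look something like the following (purely a hypothetical sketch of the kind of initiation being discussed, not syntax from any adopted draft; names are invented):

```
CONNECT-UDP target.example.com:443 HTTP/1.1
Host: target.example.com:443

HTTP/1.1 200 OK

[... the stream now carries framed UDP datagrams to/from the target,
     e.g. a client <-> server QUIC handshake ...]
```

Unlike TCP CONNECT, the tunnel body here cannot just be a raw byte stream: datagram boundaries have to be preserved by some framing, which is one of the open questions the draft raises.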
G
We need to consider things like proxy discovery, and the ability to chain — that's quite a powerful capability here — and whether that's required for the kinds of interactions we might want to do for UDP. Another thing, which I kind of glossed over earlier, is the ability to have agile origins. The CONNECT tunnel focuses on one server; there may be lower-level balancing underneath, but from the application-layer perspective we have one tunnel and it goes to one place. So would there be interest in an ability to target different origins — more agility?
G
So we need more input before investing any more time in any one particular option. Next slide, please. So, yes, I kind of broke things into two areas: the initiation — whether we use a request method or some new h2 or HTTP-over-QUIC mechanism — and then the transfer, the steady-state framing of messages: is it that we reserve a particular stream, or is this not even stream-level, needing some additional capability in something like QUIC?
G
So there's a lot of variability there — lots of permutations, lots of different ways to skin this cat — so, just to help direct some discussion, there's a spectrum of proposals in that document, in kind of extremes. One is: can we just take CONNECT and augment it in some way? Or create a new method, like CONNECT but clearly separate from TCP — for, in this case, UDP — with some kind of new framing? Or can we use something called HELIUM, which I'll explain on the next slide, and carry that over WebSockets?
G
Or could we go to the next level and have some native framing in h2 or QUIC that helps realize added benefits? Next slide, please.
So this is the HELIUM draft; this is Ben Schwartz's document. He went into a lot more detail in DISPATCH than he can give in a single slide: a lightweight, flexible proxy protocol based on IP, designed for many use cases. UDP for QUIC is what I've mentioned here, but you could do things like WebRTC, e.g., with ICMP support, or go the whole hog towards a VPN.
G
The concept here is kind of abstract message types, and then a concrete realization of that; the document contains one which uses CBOR and runs over WebSockets, but we could also possibly natively frame that, and that's captured in my document. Next slide, please. So, in closing, we need to be fully acknowledging of the fact that there are many ways.
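The split described here (abstract message types, with one concrete CBOR realization) can be sketched with a toy encoder. This is not HELIUM's actual wire format; the message type and the restriction to tiny values are illustrative:

```python
# Illustrative message type; HELIUM's actual type registry may differ.
MSG_DATAGRAM = 1

def encode_msg(msg_type: int, payload: bytes) -> bytes:
    """Encode [msg_type, payload] as a 2-element CBOR array.
    Handles only small ints (< 24) and short byte strings (< 24 bytes)."""
    assert msg_type < 24 and len(payload) < 24
    return bytes([0x82, msg_type, 0x40 | len(payload)]) + payload

def decode_msg(data: bytes):
    """Inverse of encode_msg for the same restricted subset."""
    assert data[0] == 0x82       # major type 4, array of length 2
    msg_type = data[1]           # small unsigned int
    length = data[2] & 0x1F      # byte-string length from the header byte
    return msg_type, data[3:3 + length]

frame = encode_msg(MSG_DATAGRAM, b"hello")
```

The resulting frame could be carried as a binary WebSocket message, or, as suggested above, given native framing in h2 or QUIC instead.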
G
There are many ways to do UDP- and IP-based network tunneling, but HTTP-based or HTTP-initiated tunneling has some unique benefits in comparison. From some of the discussions we've had leading up to this, there seems to be interest, but is there enough interest that warrants time and effort? And if so, some input and guidance would be required for us: can we actually drive towards one of those solutions, or a permutation of those options?
G
K
That was a short last call, so I will comment. I like the architecture and the problem statement here: being able to use TCP over QUIC to talk to a proxy has its advantages, and I believe that at one of the previous sessions Google had mentioned they had, at least for some purposes, UDP over QUIC proxies. Yeah, with Google QUIC, and we're seeing benefits from that.
K
But the fact that you can't then do QUIC encapsulated across it (yeah, I agree) is a problem. I like the division here of having something that is effectively the HTTP layer, "I'm now going to send some other protocol," and then that protocol being a transport-y thing that's really an encapsulation. It's like an evolution of GRE, maybe. So I like that piece. I don't know that they belong in the same working group; if we do the HTTP piece here, I don't think we want the transport-y piece.
K
U
Briefly, the one thing that I didn't see called out here that worries me a lot is how, when you start doing this sort of thing, congestion controllers start interacting. It seems like there's a lot of transport stuff under the hood here that gets very messy quickly, like running QUIC in QUIC.
G
U
My comment on the meta is: sure, I think it'll be worth a thread on mail. Also, feel free to skip it; it was covered heavily when HELIUM was in DISPATCH. But we should be really clear on why we need something new here, like, it seems that IPsec over UDP covers a lot of the use cases here without requiring inventing something new.
L
So it's definitely useful. I will echo what Eric said, and there's one particular design assumption in QUIC right now which is different from TCP and TLS, in that the transport and the crypto context are not separable. And right now CONNECT assumes that they are separable, right? We have TCP connections that can become tunnels, and a TLS thing that can be layered on top of them. That is no longer true in QUIC; we really can't get that kind of composition, so you would have to do TLS in TLS, effectively.
L
E
B
There could be multiple, you know, DISPATCH versus the HTTPBIS chairs, we're gonna kick their ass. There's gonna be, or there could be, multiple discussions. You know, have it your own way; I'll take you in a fight with Murray any day. All right, so we will take further input on the HTTP aspects of this, and I do want to reiterate.
B
One thing I said on the list: if you operate systems that use the CONNECT method, implement systems that use the CONNECT method, that kind of thing, you know, put it in a forward proxy scenario, this would be a great time to speak up, because I think that's sort of underrepresented in this discussion. Or perhaps it's not underrepresented in this discussion, and that itself is, you know, a point of input.
B
Thanks, Lucas. We're gonna move on; we're going to talk a little bit about h2 push data. Yoav has done some work that he wants to share with us. This is a bit of an initiative to find out how some of the things we have standardized are playing in the world and how that works out. You are not Yoav.
AA
Yes, I'm not as tall. I'm here to share some results about h2 server push, over a period of 11 days; we collected some measurements and did some analysis on them, from June 14th to June 25th. At Akamai, h2 server push is primarily provided through a product called Adaptive Acceleration, and what this product does is analyze RUM data.
AA
So this is real user monitoring data: data that is based on nav timing and resource timing, beaconing back into data warehouses, where we do some analysis and determine what the critical resources are that are necessary for rendering of a given page. And so the idea is that, once we've identified these critical resources, we push them during the HTML generation think time.
AA
So, looking at a typical case where we're not applying push: the HTML request is generated, the request goes to the edge in a content delivery network, and then goes forward to origin to be fetched; this is where we have the idle network time. Eventually the response comes back, and you can see TCP slow start in effect, so chunks start coming back and getting bigger.
AA
And finally, the browser's making requests for page sub-resources, CSS and JavaScript, for example, and then those get fetched down. So the way that we utilize push effectively is to take advantage of that idle network time. We want to push down CSS and JavaScript that's critical to rendering of the page during the time that the page is being fetched from origin, so we're putting that essentially dead time to good use. And you can see, theoretically, TCP slow start comes into effect earlier, and by the time that the HTML page comes down to the browser, hopefully we've moved past that phase. So Jake Archibald wrote a great blog post about HTTP/2 push, saying it's tougher than he thought; he has a quite long post that goes into the nuances of HTTP/2 push, scenarios where it's more effective and scenarios where it's less effective.
AA
AA
So, diving into the results now. Just to give a preamble before I show the graphs on how to interpret them: we're measuring DOM complete time in this case, and when you look at these graphs, negative is better. This is a relative difference between h2 push on and h2 push off, so the further to the left you see the bar, the better it is.
AA
This is Chrome only, and this is first view only, so this analysis is based on measurements only on first view; it excludes repeat view on purpose, and that's for a couple of reasons. One is that the repeat-view case is not that effective with push right now, because of the absence of cache digests, for example, and with Adaptive Acceleration we try to avoid pushes on repeat views. For that reason, the difference that you're gonna see is going to be a bar with min and max values, and essentially this means that we are 95% confident that the performance difference will fall within this range between the min and max. So, essentially, if you see green, that means it's statistically faster; red would be slower; and blue would mean there's no statistical difference. Okay, so what we're using here is mobile results only; hoping your eyes are not bleeding looking at this, the bars are kind of thin, but in this case there are 11 results shown.
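The bar classification just described (green, red, or blue from a 95% confidence interval on the difference of means) can be sketched as follows. The function name and sample data are illustrative, and this is a plain two-sample interval, not Akamai's actual regression methodology:

```python
import math
import statistics

def classify(push_on, push_off, z=1.96):
    """95% confidence interval on the difference of mean DOM-complete
    times (push on minus push off); negative is better, as in the talk.
    Returns 'faster', 'slower', or 'no difference', mirroring the
    green/red/blue bars."""
    diff = statistics.mean(push_on) - statistics.mean(push_off)
    se = math.sqrt(statistics.variance(push_on) / len(push_on)
                   + statistics.variance(push_off) / len(push_off))
    lo, hi = diff - z * se, diff + z * se
    if hi < 0:
        return "faster", (lo, hi)
    if lo > 0:
        return "slower", (lo, hi)
    return "no difference", (lo, hi)   # the bar straddles zero
```

This also shows why a blue bar permits no conclusion: as long as the interval straddles zero, the bar leaning to one side means nothing.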
AA
So what we're seeing here is actually the intersection of some websites that are using the Adaptive Acceleration product and also using a new RUM engine that Akamai provides, which is based on the product mPulse; it's the intersection between those products. We are just transitioning to this new RUM engine, so the initial customer base is small, but it's growing, and we hope to continue measuring based on this engine. So we use the nav timing metrics to fetch the DOM complete time in this case.
AA
So, really, in this case there are 11 results here: for four we can prove they're statistically significantly faster, and for seven we can't say at all. I have the raw measurements on the left that provide the confidence interval and the mean, but for the blue ones we're not supposed to derive any kind of conclusion.
AA
You know, you might see the bar more on one side versus the other, but talking to our statisticians: no, you can't say that; it has to be completely on one side or the other to derive any kind of conclusion. So here we have four that are better and seven we can't say, and you can see that the confidence intervals are fairly large in many cases. For desktop, we have 13 results, and we have six that are better and seven that we can't say.
AA
Now the scale is kind of shifted linearly a bit, and maybe slightly in terms of magnitude, but you can see that the bars nevertheless are much shorter in the desktop case than the mobile. And why is this the case? Just because we see more variability and noise in mobile performance data. There's more variability, presumably due to more variance in last-mile networks, and that's reflected in the measurements, so it's harder to get statistical significance.
AA
How did we measure? Statistical methodology: we're using a linear regression methodology for the statistical calculations, and it's based on the following dimensions: geographic location, client OS, user agent, hour of day, day of week, ISP, URL. This is all very important, because if you don't properly control for these variables you get too much noise, and there are several dimensions that can introduce variability into the results. So A/B measurements are actually quite complex, especially using real user monitoring data; that's part of what we're finding as we go through this exercise.
AA
B
Circumstances. So, thank you very much. I've asked people to hold their questions because we have a similar presentation from a different point of view; we wanted to make sure both of you could present before we run out of time, and then you can maybe address questions together, with whatever time we have.
AA
AB
So we ran an A/B/C experiment. We compared disabling push, by sending the SETTINGS_ENABLE_PUSH setting set to zero (we would still process the push frames if they came; this was purely just sending the setting), to no treatment, a regular control. But we also compared that to sending an unrelated settings change, to see if servers were mishandling HTTP/2 settings. The canary data was too noisy to draw any conclusions about push, but we did satisfy ourselves that the servers were handling these settings frames properly.
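The "disable push" treatment described here is a single HTTP/2 SETTINGS frame on stream 0; a minimal sketch of its serialization per RFC 7540, Section 6.5, with an illustrative helper name:

```python
import struct

SETTINGS_TYPE = 0x04
SETTINGS_ENABLE_PUSH = 0x02  # RFC 7540, Section 6.5.2

def settings_frame(settings):
    """Serialize an HTTP/2 SETTINGS frame: a 9-byte frame header on
    stream 0, then 6 bytes (16-bit id + 32-bit value) per setting."""
    payload = b"".join(struct.pack(">HI", ident, value)
                       for ident, value in settings.items())
    header = struct.pack(">BHBBI",
                         len(payload) >> 16, len(payload) & 0xFFFF,
                         SETTINGS_TYPE,  # frame type
                         0,              # flags
                         0)              # stream identifier 0
    return header + payload

# The experiment's treatment: ENABLE_PUSH set to zero.
frame = settings_frame({SETTINGS_ENABLE_PUSH: 0})
```

Sending this is cheap and reversible per connection, which is what made the A/B/C design practical.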
AB
So we didn't need to include that control when we propagated this to beta. Within the beta results, for the entire population (sorry, the entire test population), we got a slightly negative, non-statistically-significant result. So the best way to say this is that, for the entire population, push makes no difference in performance. That's rather unsatisfying in either direction.
L
AB
Lower is better. So this is basically trying to describe that, absent server think time, the maximum benefit of push is the amount of data you can send in one round trip, which is calculated either from your bandwidth and your round-trip time, or from your congestion window.
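That upper bound can be written down directly; a small sketch with illustrative numbers:

```python
def max_push_benefit_bytes(bandwidth_bps, rtt_s, cwnd_bytes):
    """Upper bound on what push can deliver during one idle round trip,
    absent server think time: no more than the bandwidth-delay product
    or the current congestion window, whichever is smaller."""
    bdp = bandwidth_bps / 8 * rtt_s   # bandwidth-delay product, bytes
    return min(bdp, cwnd_bytes)

# e.g. 10 Mbit/s, 100 ms RTT, initial cwnd of 10 * 1460-byte segments:
bound = max_push_benefit_bytes(10e6, 0.100, 10 * 1460)  # 14600 bytes
```

With a typical initial congestion window, the window, not the link, is usually the binding constraint on a fresh connection.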
AB
So one of the questions that we have is: if we were to turn off push for everyone, would anyone really care? Currently it's, as I said, only four one-hundredths of a percent of all the sessions we see. It seems to be a wash performance-wise; looking at some of the individual domains that are pushing, some really, really get it wrong, but some do get a good benefit. So it's all over the place.
L
AA
Yeah, just to add on that, since you've invoked QUIC: there have been some experiences with head-of-line blocking with server push, and we've made efforts on improving that. In the QUIC world that problem would also go away, which would be an improvement for push over QUIC. So there's that.
AA
L
First, you had a slide that talked about having an initial window of 100: I'm trying to make that happen; it's not happened yet. That's actually from the paper. I'm familiar with the paper that you're citing there, and I have no idea where they get that number from. I do, but the number is incorrect. Sorry.
B
L
It was interesting to see the amount of variability that you had in your results, and I was going to ask: did you look at the population sizes of those different sites? It seems like you might have very low populations, possibly. Can you speak to the variability in the data?
AA
I mean, so there is variability, and it's not unique to anything with push. It's something that our service, mPulse, has been discovering as we do more analysis on RUM-based data. You know, when you measure using synthetic environments, you're much more constrained in terms of variability than with RUM data.
AA
We see a lot of variability, and there have been efforts at evolving the methodology to control for that by using different dimensions, as explained. So that's what we have now, and there may be further methodology to constrain it further, but yeah, there is variability there. And so even where the results are statistically insignificant, as in this case, you know, we know that in synthetic tests we have seen an improvement, but you don't see it looking at the RUM data, for example.
A
We're actually out of time, but if folks are willing, since this is just an advisory thing, let's go ahead and drain the queues, but no more discussions, please, and please try to keep your comments brief.
AC
Fastly. Just really quick, a couple of things, since we're talking about numbers. Thank you, Chrome, for sharing those numbers; I just want to share ours really quick. We don't store end-user data; I just did a sample of 90,000 requests on a server, 90,000 streams of h2. There were 150 of them that were pushed: that's 0.15 percent. Depending on how much you believe in sampling like this, when I did this three weeks ago it was like 80 out of 90,000, so I'm not claiming growth.
AC
It's "growing" three times over three weeks; just information. I have a couple of clarifying questions for the presentation. Do you have any data volume numbers? For how many... oh, there was a minimum of a thousand, but how many samples per... I'm guessing each of those eleven were sites? Is that what they were? Those are sites. Yeah, do you have data volume numbers, and, maybe more interestingly, what percent of the page views got nuked because they were repeat views?
AA
AC
AA
AC
U
Alan Frindell, Facebook. I guess I just wanted to say we have run into the blog post's point about push being challenging, and there are different implementations in browsers. It makes it hard; it's not uniform amongst browsers how they implement it, and that makes push challenging for us. Particularly the way Chrome works, how it does its push cache, is incompatible with the way we want to operate things, and we're not able to really push effectively to Chrome.
U
Different implementations of push. Erik Nygren; a question for Brad. I think one thing that might be interesting to do is also look at whether there's a way to filter that by cases where the push is happening in the server think time. So, for example, cases where all the pushes that are coming in are coming in before you actually start receiving the response for the base object.
U
That was requested. Because some of this might be that one of the side effects here is: if you have other stuff you can send, you should never push, and maybe that's useful guidance. And it might be interesting to see if we can decouple the cases where push is useful, when you have nothing else to send yet because you're still doing something on the server side, from the people really badly misusing it by pushing stuff when they really should be sending something else instead.
K
Mike Bishop. I will also observe that both of you are collecting data off of Chrome and off of particular sites, so that's a good place to try and figure out where your methodology is diverging, because you have divergent results. And I'm wondering, well, one thing I observe is you did all navigations and you have only first navigations.
A
Well, thank you both; that was really interesting. I think we want to encourage more data like that as we go on; that's really good, thank you. Right, before we go, two things real quick. One, we're having a session, a bar BoF, next door in about five minutes for SRV and HTTP, if you're interested in that. And two, where are the blue sheets? Does anyone know where the blue sheets are? There's one, and there's another one floating around there somewhere. Okay, well, hopefully we'll find it. Oh, thank you. Great, all right.