From YouTube: IETF97-HTTPBIS-20161117-1110
Description: HTTPBIS meeting session at IETF97, 2016/11/17 11:10
B
Welcome to the second session of the HTTPBIS working group for the week. We've got just one hour, unfortunately, and we have a number of presentations, so we're going to walk through those as quickly as we can. These are going to be the beginning of a number of discussions, not the end. So please, if you do have comments, keep in mind that we have very little time for each presentation.
B
This is the Note Well, which you should be familiar with by now. If you're not, then spend some time with it away from the working group and come back when you're comfortable. And this is our agenda for the day: we've got, what is it now, six presentations, hopefully in 60 minutes.
F
Hello, I'm Emily Stark; I'm here from the Chrome team. This is an old version of the slides, so this is going to be a little exciting; we'll see how it goes. I'm here to discuss a proposal called Expect-CT, which can be approximately thought of as HSTS for Certificate Transparency. I suspect that many of you are very familiar with Certificate Transparency; if you're not, I'll give a little bit of background. CT is a system that allows public logging of certificates.
F
Concretely, in the context of this presentation, you can think of a TLS server that has a certificate. That certificate gets submitted to one or more Certificate Transparency logs, and those logs provide a verifiable promise that they plan to incorporate that certificate in a way that can be publicly audited.
F
This verifiable promise is called a signed certificate timestamp, or SCT, and the server will provide these SCTs on the connection to the client, which can verify in the TLS handshake that a log that the client trusts plans to publicly incorporate this certificate. Ideally, in the future, we would like to imagine that all TLS clients would verify Certificate Transparency information on all connections that they make, and doing so would have a number of benefits.
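As a rough sketch of what "verifying CT information" might look like on the client side (the log identifiers and the two-SCT threshold below are illustrative assumptions, not taken from the draft; real browsers each define their own CT policy):

```python
# Hypothetical client-side CT compliance check. The trusted-log set and
# the minimum SCT count are made-up policy values for illustration only.

TRUSTED_LOG_IDS = {"log-a", "log-b", "log-c"}  # placeholder log identifiers
MIN_SCT_COUNT = 2                               # assumed policy threshold

def connection_is_ct_compliant(scts):
    """Return True if enough SCTs come from distinct trusted logs."""
    trusted = {sct["log_id"] for sct in scts if sct["log_id"] in TRUSTED_LOG_IDS}
    return len(trusted) >= MIN_SCT_COUNT

# SCTs from two distinct trusted logs satisfy this hypothetical policy:
print(connection_is_ct_compliant(
    [{"log_id": "log-a"}, {"log_id": "log-b"}]))  # True
```

A real verifier would also check each SCT's signature against the log's public key; this sketch only shows the policy-counting step.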
F
Domain owners could monitor the logs for certificates that they don't expect to appear there, and they would know that any certificate that is accepted by a client would appear in a Certificate Transparency log. It also provides value for other parties who might want to, say, monitor for CA misbehavior.
F
So that's an ideal future, but today site owners don't get the full security benefits of Certificate Transparency, because there's no way to guarantee that any certificate that is accepted by a web browser actually appears in the CT logs. So site owners today don't get protection against a misissued certificate that just doesn't appear in the CT logs.
F
So Expect-CT is basically an opt-in, kind of like HSTS: an HTTP response header that a site can send to ask the browser to kind of hold them to CT, to require that any connection to this site in the future, for some period of time, comes along with valid Certificate Transparency information.
F
So this is kind of a strawman syntax from the draft, probably very familiar from HSTS. A site sends this response header and asks the browser to remember that the site should be connected to with CT compliance.
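For reference, the strawman header looks roughly like the following (the directive values here are illustrative; `enforce` and `report-uri` appeared in versions of the draft, but check the current draft text for the exact syntax):

```
Expect-CT: max-age=86400; enforce; report-uri="https://example.com/ct-report"
```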
F
So, let's see, a couple of high points I want to pull out here. One of the main goals is to do something that is easy to deploy, familiar to site operators, and especially easy to deploy soon, and I'll describe that a little bit more. But also, the overall goal here is basically to give site owners a way to ensure that all the certificates that the UA accepts for their site appear in the public CT logs.
F
So I'm missing a slide after this one, but basically, why I say that it's especially important that this is something site owners can deploy easily and soon is because of what might be a little bit of an elephant in the room, which is that Chrome recently announced plans to require CT for all new certificates, starting in October 2017.
F
So you might be wondering how these things interact, how the Expect-CT proposal interacts with this Chrome CT requirement date, and I'll say a few things about that. First of all, we think there is a good chunk of value to be had in Expect-CT in kind of transitioning the ecosystem to this Chrome CT requirement date, and any other CT requirement dates that might be announced in the future by other browsers.
F
So, for example, if a site is getting certificates from a CA and the CA is embedding the SCTs into the certificate, the site might want to use Expect-CT before the Chrome CT requirement date to make sure that their CA is doing things properly, so that they can find out now if not, instead of in October, when their site breaks and it's out of their control. Let's see, so.
F
So Expect-CT is sort of a step on a gradual adoption of CT for Chrome, and we think it can also potentially be a step on a gradual adoption of CT in other browsers. And then, of course, there's also the consideration that the Chrome CT requirement date is for new certificates, so any certificate issued in October 2017 or later will be required to have CT information, but that doesn't protect site owners from old certificates that were issued before October, or backdated certificates.
F
So if a misbehaving CA hypothetically decides to backdate a certificate and not put that certificate into the CT logs, that certificate wouldn't be subject to Chrome's CT requirement. Okay, ignore this, go back. Thank you, okay. So my actual next steps here were to pull out a few questions and issues that came up on the mailing list in the past few days.
F
So, let's see if I can remember them all off the top of my head. One of them that came up on the mailing list was, syntactically, whether this should be a set of new HSTS directives or a separate header, as the draft currently proposes. Another one was the question of how exactly the report-only mode should behave. The issue there is that a site owner might expect, and might want, the report-only mode to be remembered or cached the same way that the enforcement mode is, but that's not how HPKP report-only works, or CSP report-only.
F
And finally, there is the question of what it actually means to enforce CT, which I haven't talked about in this presentation, and that's because browsers kind of have their own idea of what it means to enforce CT, and that may vary, from the list of logs that they trust to the number of SCTs that they require, etc. So Expect-CT, as defined in the draft, is the site asking the UA to hold them to kind of whatever CT policy the browser has.
F
And so the question is whether that is useful to site owners, whether that is what site owners want, or if there is a desire for some kind of more flexibility, or for the Expect-CT draft itself to define a policy. So I'm looking for feedback from anyone on anything, but I'm specifically interested in feedback from site owners on what they'd find valuable and useful. Thanks.
G
The sites that put this in their header fields will have to expect a certain degree of service when they put this in a header field. I mean, if your policy is, as I believe some browsers currently have, to ignore CT entirely, then that may not jibe very well with the opposite policy, which is the most strict possible policy that you can imagine for CT.
F
If I'm understanding correctly, you're saying something like there's sort of a minimum bar that an Expect-CT site must meet, but a UA is free to enforce a stricter policy if they'd like to, I think.
H
So, Ryan Sleevi; I'm the one who made that announcement about October, so it's all my fault, blame me. So, you know, I think one of the things that we've seen, just sharing an implementation experience that guided and shaped some of this and the feedback heard, was extended validation. Chrome had a policy that extended validation certificates must meet a particular CT policy, and what we saw was in fact a number of CAs unable to meet that policy because they have trouble with reading comprehension.
H
We see this with enough CA incidents that reading comprehension is a very high bar for running a CA, but it was enough that there were a number of EV certificates out there that weren't even able to meet that policy. So I do take your point, which is there's a need to specify policy. But, on the other hand, a number of site operators weren't receiving feedback that their certificates weren't working, or understanding where their certificates weren't working, and so something like the report-only...
H
...mechanism gives the feedback, right, to sort of go through reporting. It didn't sound, from your discussion of policy constraints... I certainly can appreciate setting the maximum, right, stating the most restrictive policy but not expecting everyone to enforce it. But do you see that as being a normative requirement, that to go forward with Expect-CT you must specify the policy in the document as a normative requirement? So can I just do a quick interrupt and...
B
It's the chain of reactions. Can we just have a quick discussion of whether we think it's interesting to continue this, and perhaps start working on this area in this working group? I see a thumbs up from Martin. Anyone else have any strong opinions about that? Should we do a hum? Uh-huh.
B
To hum: who thinks that it's interesting to work in this area for this working group? I won't say "adopt this document," because we're not quite there yet; we've just started talking about it. But is this an area that's appropriate for this group to work on? Hum now. And who thinks that it's not? Hum now. Okay, so.
J
Good, hi. I was wondering whether there are plans to continue working on this for other protocols apart from HTTP as well. For example, when we add these new features like HSTS and pinning to CT logs, these things are nice to have in the TLS layer; we violate these abstractions all the time, and we put this into the HTTP layer.
layer.
J
I
totally
understand
why
you're
doing
this
for
ease
of
deployment
and
it's
much
easier
for
sites
to
add
HTTP
headers
right
now
than
to
wait
for
their
openness
so
update
to
get
this,
but
I
would
not
like
to
see
the
stop
here.
I
would
love
to
see
a
TLS
extension
which
has
a
CT
I
put
expect
CT
as
well,
so
that
we
have
a
good,
forward-thinking
solution
for
this
done,
not
just
a
stopgap
yeah.
F
I would like to do something that browsers can implement and sites can deploy very quickly, because for us on Chrome, a big chunk of the value comes from this period between now and when the CT requirement date hits, and also because Expect-CT kind of has a natural shelf life built in. But for something like OCSP stapling, something that I would expect to be longer term, I think some other mechanism might be better.
F
Okay, just one more note: I uploaded this as an experimental draft. I don't really know if that's the...
C
We'll endeavor to get us back on time. Okay. I spoke briefly about a Cache-Control response header, immutable, when we were last together in Berlin. Since that time, I've put forward a draft, and we'll talk about that really briefly today. So, the basic problem: Facebook reported that twenty percent of all of their HTTP responses were 304s, and most of these were for resources that had not existed for as long as the max-age on the resource. So it didn't really make sense that people were revalidating them, and it turns out...
C
...this is because people press reload a lot on social media. So we've got a Cache-Control extension, which is well defined by the caching RFC. It's a simple mechanism to assert that fresh responses would receive a 304 if they were to be revalidated; therefore, skip the revalidation and your reload gets faster. This meshes well with versioned-URL design patterns, where a particular resource only ever has one version.
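A minimal sketch of the client-side decision, assuming the header has already been parsed (the function and field names are made up for illustration; the precise freshness rules live in the caching RFC and the draft):

```python
# Hypothetical sketch of how a cache might use "immutable" on reload.

def should_revalidate_on_reload(age_seconds, max_age, immutable):
    """Decide whether a user-triggered reload needs a conditional request."""
    fresh = age_seconds < max_age
    if fresh and immutable:
        # The server asserted the fresh response will never change:
        # skip the 304 round trip entirely.
        return False
    return True

# Versioned asset, e.g. served with: Cache-Control: max-age=31536000, immutable
print(should_revalidate_on_reload(3600, 31536000, immutable=True))   # False
# Same asset without immutable: a reload revalidates even though fresh.
print(should_revalidate_on_reload(3600, 31536000, immutable=False))  # True
```

Note how immutable only changes behavior while the response is still fresh; once max-age is exceeded, normal revalidation applies either way.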
C
Next. Great, we've got running code. This has been out in the field in Firefox for about six weeks; Facebook has deployed it, the BBC has deployed it, and the InterPlanetary File System includes it. I don't really know what that is, but it makes me sound awful smart and future-looking. There are a lot of reports: page load times on reloads were as much as fifty percent improved, ninety percent fewer transactions. Obviously it's based on the content and whether people are pressing reload a lot, you know.
C
Do
they
like
me:
do
they
not
like
me
reload
next,
alright,
so
there's
an
attractive
there,
immutable
00
an
example
up
top
you
still
have
a
max
age,
because
I
mean
a
boat
only
applies
to
still
fresh
responses
and
I
got
a
bunch
of
feedback
on
it,
especially
with
respect
to
intermediaries.
All
which
makes
sense
is
pretty
easy
to
address,
and
it
requires
a
one
next
slide,
all
right.
B
This is the beautiful thing about having two chairs: now we can have a clean separation of concerns. I think there's been a lot of discussion of this, and from what I've seen, a lot of interest in it; I already saw a thumbs up in the audience, just after you asked that. So I think, unless there are any people that want to ask questions or put anything out, we can probably just do a hum.
K
Okay, so as you folks may remember, in Berlin we talked a little bit about shared dictionary compression; in particular, Charles said that Yandex is quite interested in getting an actual spec for it, since the spec that was available did not seem to actually reflect his expectations. So what is it? No, that's a different note, not that, okay. So this is it, on one slide; this is the entirety of the idea. It's an HTTP/1.1-compatible extension that supports inter-response data compression by sharing data, right.
K
It's a little dictionary you send across, and then, whenever you look at your CSS or footer or JavaScript or whatever, it references the dictionary rather than fetching it afresh. It's rocket science... anyone think that's right? No? Okay. So the draft is out there now, draft-lee-sdch-spec. Unfortunately, Wei-Hsin was not able to make it, so he sent me... I'm really not sure why he didn't send somebody else, since there are lots of other people he could have sent; look, Ian could have done this. Next slide.
K
The current state of play: there is some deployment, but no standard specification. There's been interest expressed in getting a specification done; I'm not sure whether it'd be standards track, informational, whatever. The current authors are not available to drive this, but Charles has said that he would be able to do it. He was unfortunately not able to be here, but I have his proxy: a little token that says one draft's worth of effort from Charles that I'm allowed to spend, if the working group is interested.
H
So, to this issue: part of what's happening, you know, like that slide said, is there's an implementation; there was an early draft that doesn't quite match the implementation; and there's an implementation that needs change because, by golly, there are security considerations, and not just security considerations but implementation considerations. But they want to go forward; the Yandex team is quite interested in this, and, like that said, he has a token to sort of go forward to see...
H
If
this
is
something
that
is
a
tractable,
you
know,
is
there
space
that
the
solution
could
emerge
from,
or
is
this
going
to
be
the
sort
of
side
channel
so
more
or
less
the
the
drafters
day
starting
conversation
I?
I
think
you
know
we're
very
concerned
about
not
just
that,
but
also
you
know,
reliance
on
open,
VC
diff,
a
variety
of
things
that
sort
of
come
into
play
here,
but
if
the
working
group
wants
to
go
forward
and
spend
the
energy
to
discuss,
that
would
be
fantastic
sure.
So,.
G
There are ways you can potentially deploy this that are safe, but there's a very narrow line to tread when you deal with these sorts of things; even if you think it's safe, it's probably not. And this is a discussion we didn't have. I really only got up here to talk about the sort of broader problem, which is that we now have two drafts in this area, potentially more, and an array of solutions.
G
We should first decide whether we can actually ever deploy something like this. I know people have, and seem to be happy with them... okay, right, Ryan's not particularly happy with them. Do we want to put something like this out there, and then, what is the right way to do this? I have a slight preference for approaches that use content coding over what we're going to hear about later, but that may change based on later presentations.
M
Yeah, on this area: we basically threw up our hands and gave up on this for headers, more or less, right? And that was something that was actually really super high priority, and we burned an enormous amount of time on trying to get security analysis of it. So, I guess, if we're going to be voting on whether we adopt this, I'm voting no until someone shows me a paper that demonstrates it's safe. So.
K
I'm not... I don't think we're going to ask for adoption today. We're mostly putting it forward to see if there are people in the room who'd be willing to work with Charles and others to look at the problems in it, right? So I certainly was not planning on asking the chairs for adoption, but to see whether there's energy in the room to work on this... this class of.
L
You can use data from one HTTP/2 stream as a dictionary for Brotli or deflate compression, or any other kind of compression, even VCDIFF compression, as a dictionary for the following streams, and that basically improves the compression by a significant amount. We can also maybe add static dictionaries to this side of the proposal. Basically, what you do is: when you want to set a stream as a dictionary, you send a special frame called a SET_DICTIONARY frame, with the ID you want to use for that dictionary, and the client knows...
L
Okay, I need to keep part of the data for later use. And after that, you just send USE_DICTIONARY, and you use that data as a dictionary. And the nice thing is, you can also append several streams into a larger dictionary.
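The compression effect can be sketched with zlib's preset-dictionary support, which is the same underlying idea: a second, similar payload compresses much better when an earlier payload is supplied as the dictionary. (This is only an analogy for the behavior; the draft's SET_DICTIONARY/USE_DICTIONARY frames define the HTTP/2 wire-level mechanics, not this API.)

```python
# Analogy for cross-stream dictionary compression using zlib's zdict.
import zlib

first = b'{"user": "alice", "items": [1, 2, 3], "status": "ok"}'   # earlier response
second = b'{"user": "bob", "items": [4, 5, 6], "status": "ok"}'    # later, similar response

cold = zlib.compress(second)                 # compressed with no shared context

comp = zlib.compressobj(zdict=first)         # earlier stream used as dictionary
warm = comp.compress(second) + comp.flush()

decomp = zlib.decompressobj(zdict=first)
assert decomp.decompress(warm) == second     # round-trips correctly

print(len(cold), len(warm))                  # warm is smaller than cold
```

The shared JSON skeleton (`{"user": "`, `", "items": [`, `], "status": "ok"}`) is emitted as back-references into the dictionary instead of literals, which is where the savings come from.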
In that way, you can look into several previous files or whatever. And so the rationale is, obviously... you know, in theory, HTTP/1 used large assets and HTTP/2 uses smaller assets, which I'm not sure is true; but, you know, CPU time is getting significantly cheaper.
L
Next. And here we can see, for example, the blue line: that's the average compression that we got over the 2,000 sites. So the blue line is when you just load the website for the first time; we can see there's over a four percent improvement, and that's just from using the static dictionaries that we created. But the orange line is when there's another click, so you load the next page on the same website, and then the compression improvement goes up to eighteen percent.
L
That's if you use large dictionaries, with the brotli-7 binary. And if you do another click, that goes up to twenty-five percent, and that's a really crazy compression improvement, when even five percent is something you can't get by just switching to any other algorithm. And the median numbers are similar. Next is the total; again, it's very similar: we see almost twenty-five percent saved for the total of those two thousand sites. Next, by percentile: we can see at the 90th percentile the savings are fifty percent, which means we actually save 2x.
L
It's half the size of the original data, just thanks to using the dictionaries. And although I report the brotli-7 numbers there, they are absolutely the same for brotli-5, brotli-6, whatever; the numbers are very similar. And then here, just an example: if you go from one deflate level to another, you get about a 1.04x improvement, and we do that because we think it's worth it; whereas if you use dictionaries, you get about 1.4x at basically the same performance as far as CPU is concerned. Next. So the main concern...
L
Obviously, the main concern here is security, like the previous presentation, and seriously, it's hard to really know if what we know today is even sufficient to estimate the security implications of this suggestion. But at least I think we could give some tools to the client to be able to disable it on a per-stream basis. Also, I heard a proposal to maybe use several compression contexts, defined for different streams, which can or cannot be used to compress in between, and let the client decide, because the client has more data on who...
L
...the origin of the request was. And basically, the compression would be disabled by default, so if you really want to use it, you have to understand what you're doing before you enable it; and if you're an intermediary, you also don't use it unless you're told you can. Just a reminder: today, most websites are non-HTTPS websites, and at least if you give them more incentive to be HTTPS, it's better than plain text, I guess. Maybe not everyone would agree; it's my opinion. And some websites...
L
...just really don't have data that would be compromised by such compression, and I think we should let them at least enjoy the benefits. Next. So another question was how it compares to the previous presentation. That's not a competition; those are completely different use cases. SDCH is great when you can download a big dictionary ahead of time or out of band, and there we also explore this use case for different things.
L
...the creation of the window. Also, we've seen a much better compression ratio in the use case where you do use cross-stream compression for Brotli than for SDCH plus Brotli, and the creators of Brotli also support this statement. So, yeah, you can append streams together, which means you have better efficiency; and also, because you're at the HTTP/2 layer, you know which data was sent, you know exactly what the client has.
M
Seriously, making it an opt-in is interesting. I mean, one way to tackle these side-channel attacks, when we do tackle them, would be to attempt to narrow the range of things that could be compressed to things that we thought were somehow safe. So I can imagine a couple of structures like that; one that came back was something like CORS prefetch, for instance, where it said: this is cross-origin, it's coming from this, you know, do you want to do it or not?
M
You can imagine only allowing cross-origin compression when you strip cookies. That probably would actually not work, but you might imagine... I mean, that would be one way. The underlying problem with all these side-channel attacks, right, is that the attacker manages to basically get you to make a request on his behalf with your context, not his, because of ambient authority, right? So stripping cookies is what might work for that.
M
I don't know what fraction of things that would effectively break, so I don't know. I guess the reason I got up was because I thought that everything else I've heard about defenses is largely about, oh, how do we handcuff the compression mechanism enough that it can't be used as a side channel. And so I thought it was interesting that you suggested, instead of handcuffing it, limiting the times you use it, if we're going to adopt something like this.
N
Now, I'd like to point out that this approach would also be a benefit to people using web APIs across datacenters, because it can be used to compress many smaller, I mean tiny, requests and responses, and in such cases it's often hard to build a shared dictionary ahead of time. So this kind of dynamic approach might be beneficial, I think, yeah.
J
So: at sites like CloudFlare, you have multiple different domains on the same connection, and you have a giant cert with all the domains in it. So are you not worried about the connection-coalescing effects of browsers and how this works with that?
B
I get to press a button. Craig, can you... is it Craig Pratt? Can you do whatever it is in Meetecho that you need to do to make...
A
...yourself known to us? Yes, I think it's "queue yourself."
O
Oh, hi, this is Craig Pratt. I'm presenting on the live random-access draft. But you'll notice that both Barbara... I'm not Barbara, by the way, first of all; Barbara and Darshak are both there. They must believe in my presentation skills so much that they're having me do this remotely, or they're smart enough to know that this is a topic that somebody else should be presenting who's not there. Anyway... I hope you have my slides handy. I did.
P
Yeah, I could just start by talking about the history. So, the WebSocket protocol shipped in 2010... sorry... it has been published and got very good adoption in the last five years, and also this working group has published HTTP/2, and in parallel the web standardization bodies such as W3C and WHATWG have... let's just go to the next slide, about the APIs related to WebSocket. So one thing I'd talk about...
P
So, we are kind of going to resolve all the issues with these APIs; this is kind of actively shipped and being evolved to solve these issues. And so we've been kind of sporadically having some discussion about what WebSocket for the HTTP/2 era should be like. So, in 2013, we tackled this problem by kind of trying to introduce some WebSocket mapping to HTTP/2 frames directly, or just layering WebSocket 1.0 onto the stream of HTTP/2, or the like.
P
So here's some data. We are still having this... based on the usage-metrics analysis of Chromium, about 0.7 percent of web pages visited are using WebSocket. So I think we should think about this as kind of high adoption, and we should take care of this for the HTTP/2 era. So, next slide, please. So in the last almost two years now, we've been discussing what the right thing to do is, and this is one solution that Wenbo and I are proposing.
P
It's this transform-stream concept. So the first step could be introducing that transform stream for the WiSH protocol and framing, and then, so as to buy compatibility with WebSocket, as we have a lot of already-deployed WebSocket stuff and assets, we could just proceed to bind the WebSocket API with WiSH and HTTP/2 together, with some fallback mechanism or some nice handshake or something, and then let all the existing WebSocket users and customers easily migrate to this stack: WebSocket, WiSH, over HTTP/2. So that's the story.
P
So, not only on this topic, but we've been discussing how we should guarantee kind of enough compatibility with WebSocket, and almost all the answers we've come up with have been summarized into the I-D we published for this meeting. So I'd love for you to take a look at the I-D and give us some feedback and your opinion about whether this makes sense as the bridge for WebSocket over HTTP/2, and possibly QUIC, or some more protocol layering. So that's it.
G
Okay, so from the past: Server-Sent Events looks very much like what you're describing; WebSockets was intended to replace Server-Sent Events, yeah, and now we're sort of coming full circle, so I...
B
I think that we're just about out of time for this one. I think that there are maybe two threads here. One is: what is the general future, if any, of the evolution of WebSockets? And it's not at all clear to us as chairs that this working group is the right place for that discussion to happen; I see good nodding. And then the other discussion is whether that future evolution of WebSockets has a relationship with HTTP in some fashion.
O
Hello... okay, wow, okay, I'll try to make this quick.
O
No, no problem. Okay, so go ahead and go forward a slide here. So: a pretty fair few people have been down this road, and no one has survived. Basically, it seems like a pretty simple problem that no one has quite figured out the right answer to: how do you have a live, or continuously aggregating, resource which you can also support random access on, using range requests? There are a number of examples of this; I come at this from a video angle, but there's lots of... there's more.
O
It seems like there are more examples now than there have been in the past. Anyway, next slide, please. Okay, so here's the basic problem: in the byte-range spec, the last-byte-pos is optional, but in a response you must provide a last-byte-pos, which implies that a partial content response has a bounded response body.
O
So here are some of the things that have been considered; this isn't even all of them. Just changing the ABNF for the bytes range unit, and hoping not to blow up too many things. A new range unit, for example "live": this is what the current draft, the one this time was allocated for, proposed; we've allowed that to expire, and I think it qualifies as too crusty to be usable. And Roger Combs also demonstrated there's a variety of software that can't handle Accept-Ranges with anything other than "bytes" and "none" in it.
O
So now we have another option, which is really the fourth option that's been proposed down this path, and which was a late submission; my apologies for that. Next slide, please. This is pretty simple. The concept here is that the client can pass... some of this formatting got messed up, my apologies for that. Basically, there are two important things that a client who's dealing with live content needs to know: it needs to know, first of all, that it's live; and secondly, it also needs to know...
O
...where the point is up to which I can still randomly access content without getting flow-controlled by the production of content on the server. Currently, the open-ended byte request serves that purpose perfectly: if I do bytes=0-, the server gleefully gives me back the current range, and the star is a good indicator that the representation length is unknown. But what we're proposing here is the possibility of using, instead of changing the ABNF...
O
Well, let's work inside the current ABNF and just say: what if the client uses a very large number, instead of a star or some other mechanism that requires new ABNF? In this case, the client would pass some very large number; here it's 2^64, or 2^63 minus 1, or something like that, something that's hopefully pretty easy to parse.
O
If the client sees in the response that same number back (see the "equals equals"; that's supposed to have a bubble around it), then it knows: hey, this is a server that understands how to return me a live response. That's the client's indicator that it can do random access. So at this point in time, at the end of this exchange, the client knows where the randomly accessible content ends (the example offset 94408383), and it also knows that it's live content that can provide unbounded range responses.
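The exchange above can be sketched as follows (header shapes follow the range-requests RFC; the "very large number" convention is the draft's proposal, and the byte offsets are made-up example values):

```python
# Sketch of the proposed live-range negotiation from the client's side.

VERY_LARGE = 2**63 - 1  # client-chosen sentinel meaning "unbounded / live"

def build_range_header(first_byte):
    """Client asks for an unbounded live range starting at first_byte."""
    return "Range: bytes=%d-%d" % (first_byte, VERY_LARGE)

def server_supports_live(content_range):
    """Check whether the server echoed the sentinel back, signalling that
    it understands unbounded (live) range responses."""
    # e.g. "Content-Range: bytes 94408000-9223372036854775807/*"
    span = content_range.split()[-1].split("/")[0]
    last_byte = int(span.split("-")[1])
    return last_byte == VERY_LARGE

live_resp = "Content-Range: bytes 94408000-%d/*" % VERY_LARGE
legacy_resp = "Content-Range: bytes 94408000-94408383/94408384"

print(server_supports_live(live_resp))    # True  -> live-aware server
print(server_supports_live(legacy_resp))  # False -> fall back to polling
```

A legacy server clamps the out-of-bounds number to the current end of the representation, which is exactly how the client detects it must fall back.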
O
So, in the response that you can see here, it's actually providing a response that starts earlier than the random-access point, for instance to establish the beginning of framing in the content, and it wants a response that's unbounded. Next slide, please. A current server that doesn't understand any of this, when you provide a number that's too large and out of bounds, will simply give you back, just like with the open-ended range request, some smaller number that represents the current bounds.
O
Now, it's important to notice, between these two, that with live content it's common for this number to be going up continuously: so here we were at 94408383, and here we're at 94410000. But the important thing to the client is: hey, wait a minute, this guy didn't give me back the same large number; therefore he doesn't understand anything about dealing with large unbounded range responses and live content; I'm going to have to do this some other way, polling or some other mechanism. Next slide.
O
So that's it; hope that was fast enough. Any thoughts on this? Nice and fast.
G
Martin Thomson. Understanding the constraints we're operating under, this doesn't seem at all crazy. That's awful high praise indeed, but this actually looks workable, which is the first time I've seen something like that in this area. So, congratulations; thank you for coming up with something clever. Yes, indeed. Well.
O
I thought about using a magic number, but right now the client can pick whatever it likes. I thought a magic number was just too gross, but I was thinking, like, 2^63, or 2^63 minus 42, or something like that would be great. But no, it's the client's choice; what's important is the equality test on the previous slide. So the server basically acknowledges its support by returning the same number back. We're going to kind of close the queues now... yeah, there's.
B
Right, my only question now is: you know, it sounds like people want this to be informational; it could be done outside the working group, but my gut feeling is that having it in the working group would get better oversight and a better result. Are people happy with that? I'm seeing a lot of nodding heads. Mike? Really?