From YouTube: IETF100-HTTPBIS-20171117-0930
Description
HTTPBIS meeting session at IETF100
2017/11/17 0930
https://datatracker.ietf.org/meeting/100/proceedings/
A: All right, administrivia. Blue sheets: Mark started those around. Yes, people coming in now, please make sure you sign them, because with the attendance we have today, your vote will count. Scribes: we have a scribe, [name unclear], and we have a Jabber relay, Barbara Stark. Then we're going to move on to the agenda and any bashing anyone may want to do, much as for the Monday session. Fundamentally, we're going to just work through our active drafts and find what work we have in front of the working group.

A: Since we last met, we've adopted BCP56bis, but you won't find that in the datatracker yet: while we actually finished that adoption, and I could have closed it, we've all been too busy to actually do the paperwork this week, but we're going to treat it as active. And the ORIGIN frame has been sent to the IESG, and they've begun their IETF Last Call on that topic; the first message from our area director on that one appeared on the mailing list this morning, so that's good. And, I believe, something else.
H: This is done. We've had a great lot of input from the working group; it's rare to see this sort of document have input from, I guess, ten people at least in a very short amount of time. We've now got, I think, three implementations that I'm aware of, and intent to implement by a couple more, so all signs are that it works, and we're about to see TLS 1.3 go out. So this is finally the time to turn the crank and spin out some sausage.
I: (Tony) Probably I will, yeah. I think the document looks good, just having read through it. There's one thing that's conspicuously not mentioned: other things that do early data other than TLS 1.3. Like, you could do this over just TFO (TCP Fast Open). Do we want even just a mention of that, to acknowledge that it exists? It just seemed a little bit conspicuous. I mean, everyone should be doing TLS, but.
A: We'll open the last call and keep that up on the list for, you know, a couple of weeks kind of thing, and see if any more ideas trickle in. But with that being said, I mean, I think it's been a great experience. Okay: a short, tight document, it got input from a wide variety of people, we've met our timelines on it, and we're going to get it out.
J: Yeah, this is a brief recap of what we had. Basically, the idea here, if you recall, was to use a very large number in [Range requests], basically to indicate an indeterminate-length, appending resource. If the server returns to you your large number, you can assume that it supports this functionality. And, if you flip to the next slide: usually, a server that doesn't support this functionality should basically give you the end of the content, instead of your very large number. (We're having some breakup here.)
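For concreteness, the probe being described (ask for a range ending at an implausibly large last-byte position, and see whether the 206 response's Content-Range echoes that position back) can be sketched as follows. The VERY_LARGE value and both helper names are assumptions made for this illustration, not values taken from the draft.

```python
# Minimal sketch of the very-large-value range probe described above.
# VERY_LARGE and the helper names are illustrative assumptions, not from
# the draft; the header syntax is ordinary HTTP range syntax.

VERY_LARGE = 2 ** 53  # assumption: far beyond any plausible resource size

def build_probe_range(first_byte: int) -> str:
    """Range header value asking from first_byte to an indeterminate end."""
    return f"bytes={first_byte}-{VERY_LARGE}"

def server_supports_indeterminate(content_range: str) -> bool:
    """True if a 206's Content-Range echoes our very large last-byte-pos,
    which, per the mechanism described, marks a still-appending resource."""
    # Content-Range: bytes <first>-<last>/<complete-length or *>
    unit, _, spec = content_range.partition(" ")
    byte_range, _, _length = spec.partition("/")
    _first, _, last = byte_range.partition("-")
    return unit == "bytes" and last == str(VERY_LARGE)
```

A server without this behavior would instead report the actual current end of the content in Content-Range, so the check above comes back false.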
J: Cool, either way. So, okay, here's some pointers. We have a live server people can try: basically, it's feeding a NASA TV stream that supports this live appending functionality, so you can go and actually try grabbing some of the data, as well as a couple of reverse proxies we've set up, and source code. So if you want to go to the next one.

J: Here's an example session with that server. This basically looks as you'd hope it looks; it should look exactly like what was, you know, prototyped and what's in the RFC. This is just using plain old curl, and plain old curl works with these streams just great: it'll just continue to download the content as it goes.
J: So this is the request/response. If you flip to the next slide, we can see the server and the client kind of doing their thing here. The hash marks represent data being transmitted, and the dots are pauses on the server, and you can kind of see here's curl, showing its bitrate output, and you can see it vacillating in bitrate as it sporadically gets live data. That long set of hash marks there at the top, that would be like your random access data.

J: Anyway, this would just keep running basically forever. If you were to go grab this, you would just have this growing file on your system that you could play in VLC or something like that, in this case because the NASA TV stream is basically one continuous transport stream. So, okay, go to the next one. Next slide.
J: Showing our basic results. The interesting thing is how well squid and varnish actually worked. I wasn't really expecting them to; I was expecting them to work the same way that varnish works in default mode, which is basically: if you do a request, it's just going to give you a static chunk of the content, representing where that is at that particular point in time.

J: But the important thing is, it's coherent: bytes of the origin data equal bytes of what you download, so there's no coherency problems or anything there that we've seen. And we're still working on CloudFlare, where we're testing, so I don't know. Real quick, I guess we can go through what those requests look like; it's kind of the same thing, if you go forward a slide, or we can skip this.
J: So this is a varnish reverse proxy configured with its range support, basically config file changes, and this worked remarkably well. Although, what I'm trying to do is resist looking into optimizing the caching, as interesting as that would be; really, we were just trying to check to make sure that, again, varnish doesn't break and we aren't returning bad data, and in that sense everything was cool. Basically, it acted as a proxy, and it looked exactly like the [direct session].

J: The only side effect of this, it seems, and like I say, I hadn't gotten too much into it, and it may have to do with the way that I've configured it, but it seems like varnish was just trying to get even: after I cancelled my session, it was just continuing to try to download live data, like it was going to do it indefinitely.
J: I didn't wait; I should have let it run a little longer, but [name unclear] is paying for the bandwidth, so I didn't want to break his bank account by running it for like a day or something. But anyway, that's what that looks like for varnish with range support, so go ahead to the next slide.

J: Like I say, the one thing I'd anticipated is, again, we weren't checking these to make sure that they were being very optimal about how they're buffering data, or that we aren't completely blowing their, you know, proxy cache storage or anything like that. This is just making sure we aren't destroying them and we're getting good data. Okay, thanks.
H: Thanks for doing this all, Craig. I'm a little nervous about the varnish thing, but I think that's just something that those guys will have to have a look at, and maybe make a few tweaks so that they don't open themselves up to denial of service. But maybe it's not a real problem, and maybe they'll keep trying for a little while and give up eventually. It would be interesting to know what's going on there.
H: But ultimately, the choice to avoid something like this with something like varnish is something that a server is going to make. So they'll be wanting to test that the code works; we'll have an RFC describing how this works, and they can test against that too. So I'm not particularly concerned about the few little oddities in the behavior there: things didn't break, the bytes didn't get truncated in weird ways or mashed together. I can imagine all sorts of funny failure modes for this, but this seems fine.
J: And regarding varnish, it's important that it wasn't the out-of-the-box configuration that had the issues. It was special changes that were intended to do range support, and there were lots of caveats on the description of that functionality, in terms of it not doing coalescing and things like that. And yeah, I really hope that what happens is people go forward and start doing those things: start doing coalescing, and supporting sub-ranges on cached content, and that sort of thing. Okay.
N: Sorry, oops. So, I think, yeah, our hope was to also finish that CloudFlare [testing], and hopefully we'll probably get some results from that as well, because that's kind of the more interesting, the more common, use case that may happen in a CDN kind of environment. Probably provisional results on the mailing list, but I'm hoping that there should be enough to at least issue a last call.
D: [We'll keep it] open until then. And I was thinking, yeah, I agree with Patrick. Thank you for introducing this work; this is absolutely what we asked you to do, and more, and that's great. I think going to last call now is probably appropriate, and that sends a signal that we're serious about this. And maybe, I was just thinking as well, keep the last call open for a bit longer, to give the different server vendors and so forth a chance to take it seriously.

D: Yeah, I'm less concerned about the status of the document; I think that's probably true, but like Patrick says, let's go and see how we got where we are. And I particularly want to make sure that we get folks like Apache and other CDNs and, you know, the varnish team themselves, just to have a look at this and make sure they're comfortable with it.
C: The aspect of this that deals with the representation where you have sort of the deletion at the beginning, right: so there's a description in section 3.2 of how this works when you have a resource where there's sort of a time window. You can go back in time a certain distance, but not all the way back, and so it presents this mechanism of saying, hey:
C: This is live, and there's a certain buffer at the beginning that moves over time. And I'm trying to figure out what that would look like when we're talking about what I put in the cache for the resource. When I'm saying, okay, I was told that it's available from this byte range to an indeterminate length, and I chose something later than that byte range as a start: how do I know the last possible moment?

C: The draft might want to say that out loud, just because otherwise, if you choose to pick someplace that's not at the beginning of the range that you're supplied, in order to get closer to the current place, you now know you can't go backward in time anymore, even if the server could have done it at the moment you started. So I'm trying to figure out, kind of concretely, what I would put in a [review comment] to suggest the language for that.

C: It's not really coming to me, but I think it might be useful to kind of work out what those semantics are, and have some language in the draft that just says, hey: once you've picked a range, if you want to go backward in time from that range, you've got to start over to figure out what that new range looks like.
C: I'm looking at section 3.2, at the example that's at the top of page 7. What I'm [pointing at] is that, the way the Content-Range is displayed, you have a range followed by the indeterminacy marker, and if you don't go to the beginning of the range to start, my belief is that you don't necessarily always have access to that same beginning of the range, because the description says this is probably a sliding window.
N: What would happen, just based on the range request semantics, is: if the server is able to satisfy a part of that range, it will send a 206 with the range that it sends back; if the server cannot satisfy anything, it's supposed to send, I think, a 416, whatever the Range Not Satisfiable [status code] is. I'm done. Okay.
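The 206 versus 416 behavior just summarized can be written down as a small decision function. This follows the generic range semantics of RFC 7233 that the speaker is referring to; the function shape and names are assumptions for illustration only.

```python
# Sketch of the 206 / 416 decision for a "bytes=<first>-" request against a
# resource that currently has available_len bytes. Names are illustrative.

def respond_to_range(first_byte: int, available_len: int):
    """Return (status, content_range_value)."""
    if first_byte < available_len:
        # Part of the range is satisfiable: 206 Partial Content, with the
        # actually satisfied range in Content-Range.
        return 206, f"bytes {first_byte}-{available_len - 1}/{available_len}"
    # Nothing satisfiable: 416 Range Not Satisfiable, carrying the current
    # complete length so the client can re-anchor its next request.
    return 416, f"bytes */{available_len}"
```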
H: Yeah, so, in sum, I think we're talking about things that are actually kind of unlikely in practice. The number of resources that have bits of them that go missing over time is relatively few, and the number of people actually doing things like coalescing of byte ranges and things like that is also relatively few. It's good to have it out in practice, but I think we have all the existing mechanics in 7233, I think it is, that cover most of this stuff.
A: Which is, you know, maybe a really good answer. Okay, seeing no other questions, what we'll do is we'll open the last call, then, in the working group, and maybe work through these last few niggles. Next on our agenda for this morning was supposed to be Expect-CT, but we've already talked to that, so we'll move on to header common structure, which has seen a fair amount of discussion since we last met, along with a proposal from Mark called structured headers, which may be a way forward on some of those issues.
D: And, you know, I personally felt like we needed to move forward on this. I had some ideas that I wrote down as Structured Headers for HTTP, which you see here, and then, once I had it sketched out to the level that I was comfortable with, I went to Poul-Henning and said, you know, what do you think: is this the next step forward?

D: He agreed to come on and co-author with me, and we've been refining this in the background, and we announced it to the working group, where we've got a really good amount of feedback. So, talking to Patrick, I think the question is: do we just make this version 02 of the structured-header draft? Do we want to actually swap it out, or whatever?

D: That's just mechanics, but I think the question to ask here, putting aside all of the discussion around the minute details of how we represent numbers, for example, which could easily consume this session and into next week, is: is this the general direction that we think we want to go in? And that's the feedback and the discussion that I think is probably appropriate at this point.
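To make the general direction concrete: the idea is a small set of typed items with one shared grammar, instead of per-header ad hoc syntax. The toy parser below illustrates only the concept; it is emphatically not the grammar from the structured headers draft, and every name in it is invented for this sketch.

```python
# Toy illustration of structured-header-style parsing: integers and quoted
# strings as typed items, combined into lists. NOT the draft's grammar.

def parse_item(text: str):
    text = text.strip()
    if len(text) >= 2 and text.startswith('"') and text.endswith('"'):
        return text[1:-1]        # string item
    if text.lstrip("-").isdigit():
        return int(text)         # integer item
    raise ValueError(f"not a recognised item: {text!r}")

def parse_list(value: str):
    """Parse a comma-separated list of items, e.g. '1, "two", 3'."""
    return [parse_item(part) for part in value.split(",")]
```

The details under debate (for example, how wide integers may be) would live in rules like the digit check here.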
A: I did wonder, you know, as chair, about opening the floor for comments on the draft in particular. We do have enough time this morning, and if anyone has strong opinions on any of these matters, it's not a bad moment to hear it, but this will be time-bounded. I don't see people running to the microphones. I'm not all that worried about how big an integer ought to be, but if you have feelings, I'm interested in maybe a census of the room, just a show of hands.
H: Yeah, we may need more comments from the list. [Martin] Thomson. To be fair, there are a lot more people who aren't here who've read the drafts, because a lot of the discussion on the list is from folks who are not in this room, apart from Kazuho here, who's been involved in deciding how long the numbers should be.

H: And personally, I think this is much more promising than what was written in the other draft, but I think it probably needs a good deal of work, particularly in areas like the numbers, so that we can, I guess, narrow it down to the set of things that we're comfortable with. I think there's a little bit of work left to do there.

H: That said, in terms of what could happen in the working group, I would rather see this be adopted as a replacement for the other draft, Poul-Henning's [common structure] draft, rather than just leave that hanging, because we'd then have this hanging common-structure thing that I just don't think is particularly useful.
D: My only comment is, I'm really happy with the level of engagement on the list about the draft. That's what's giving me hope here. That would need to continue, and it would need to broaden beyond just number formats.
A: Yeah. So, and I hate to put people on the spot, but, you know, Julian, you have a ton of expertise in this, and I know you're sitting there listening to us on the other side of the planet, and I'm wondering if this would be a good time for you to chime in, if you'd like to, perhaps [via the remote participation system].
H: So, I think the comment here is that if we were to retroactively apply these constraints to existing headers, we would find ourselves in some sort of pain. But I don't think that's the intention of the draft. The intention of the draft is to say: if you're going to define a new header field, maybe you want to use these recipes. And it doesn't say that you have to use these recipes either; it just gives you, you know: this is good practice.

D: It would be wonderful if we could clean up the mess that is header parsing for all of HTTP, but that is a much larger task that we don't feel capable of doing right now. So this is really just to stop the pain, at least for newer headers.
A: I will come up with a plan for how we move forward, just from a, you know, procedural point of view, and get that out to the list for comment. Next on our docket this morning: Cache Digest for HTTP/2, and Kazuho has got a presentation. While he comes up: this is one that's been with us for a while, and we seem to be nearing the finish line now. We've made some changes, so hopefully this will be enough to get us over the endpoint.
L: [The first change] is the cache-digest SETTINGS [parameter]. So it's pretty simple: it's a way for the client to say, hey, I'm going to send a cache digest, so, server, please wait, before deciding whether to push, until you see my cache status. And the client's strategy would be to send a digest frame for every request to a new origin, or, if it doesn't want to send any cache digest, to send an empty CACHE_DIGEST frame with the RESET flag, indicating: I'm not going to give you any indication.

L: And the next one is more debatable: it tries to change the digest algorithm from the Golomb-coded sets that we use now to cuckoo filters. The motivation behind that is to [maintain the] structure on the [client] side and send it when necessary, rather than iterating through the cache to build the digest when sending it. So, on the browser side, you essentially have a cuckoo filter, a hash table, that implements event handlers for [cache events], and one is the insert event, when a new object is added to the browser cache.

L: [The key] consists of the URL and the ETag; the two properties are needed to check [freshness], and that could have implications on certain caches, because instead of just looking at the URL, you need to actually fetch the object and check the ETag to see if you should push it. Next, please.
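The event-driven maintenance described here, keeping a compact digest up to date from cache insert and evict events, keyed on URL plus ETag, can be sketched as below. A real implementation would use a Golomb-coded set or a cuckoo filter; this plain set of truncated hashes is only a conceptual stand-in, and all names are invented for the sketch.

```python
# Conceptual sketch of a client-side cache digest maintained by cache
# events. A plain set of truncated hashes stands in for the GCS / cuckoo
# filter encodings discussed; keys combine URL and ETag, as noted above.

import hashlib

class CacheDigest:
    def __init__(self, bits: int = 32):
        self.bits = bits
        self.fingerprints = set()

    def _fp(self, url: str, etag: str) -> int:
        digest = hashlib.sha256(f"{url}\0{etag}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % (1 << self.bits)

    def on_insert(self, url: str, etag: str):   # object added to the cache
        self.fingerprints.add(self._fp(url, etag))

    def on_evict(self, url: str, etag: str):    # object removed / expired
        self.fingerprints.discard(self._fp(url, etag))

    def probably_cached(self, url: str, etag: str) -> bool:
        """May rarely report false positives, never false negatives."""
        return self._fp(url, etag) in self.fingerprints
```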
O: Mike Bishop, Akamai. I think the replacement with cuckoo filters makes a lot more sense, because you can just have one object and keep it up to date, rather than having to crawl through your cache and generate it, or even to [keep a separate] table. I really don't like the idea of having two different algorithms, where clients now have to signal which one they support, which means servers have to support both, or they risk having a mismatch with clients. I don't think there's enough win from one over the other, at least in my understanding, to support both.
D: From my perspective, personally, you know, the problem that we've had with this draft, as [Kazuho] points out, is that we have had trouble getting clients, especially browsers, interested in implementing this. So if there's a good chance of getting a browser implementation, or at least some good experimentation going on, with cuckoo filters, then that makes me think maybe that's a good idea. And if the working group decides to do that, I'm happy to take my name off the draft and have you all go on as authors, because, obviously, [he] is going to be generating a fair amount of text in the draft.
Q: Although, I do think that I sort of misunderstood the intent: I did not intend it to replace the thinking time of the browser on opening the connection, but I used it to sort of replace webpack bundling of assets, drastically reducing the reload time of a front-end application, basically. So I think there are experiments that prove that this is very meaningful work.
A: So we've had a couple of comments about that. I would also note it's experimental, so we don't have to... let's not let the perfect be the enemy of the good here. So I would encourage you to, you know, have the courage of your convictions and make a decision, in that experimental manner.
D: Just thinking out loud: we haven't heard of any interest from anyone in generating the frame with GCS, although [some] seem to be using the header. Would it make sense to publish the document we have without the frame, just describing a way to do it with headers and service workers, maybe as experimental, and then see how cuckoo filters go for the, you know, more integrated, browser-integrated flow? Are you still alive?
A: Okay, if there are no further comments, I think we'll take that to the list, and the authors can decide how they want to update the document. Thank you. Let me make one last call for blue sheets; there's one still out there somewhere. Mr. Barnes, I'm looking at you. Okay, after cache digest we have Client Hints; it's on the big board up there. Maybe Mike wants to talk to this. So, we got this update from Ilya.
D: Client Hints is still progressing; I think Ilya is still on it, in a couple of different directions. I mean, this is another document we've been working on for a while, and we've been trying to get it right. Most of the discussion recently has been around Accept-CH-Lifetime and Accept-CH, to figure out how the mechanism for advertising this works, and especially around privacy considerations.
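As a rough sketch of that opt-in mechanism: the server advertises the hints it wants via Accept-CH, and the client then attaches only those hints on later requests to that origin. The header names come from the draft; the persistence model and all function names below are simplifying assumptions.

```python
# Simplified sketch of the Accept-CH opt-in flow discussed above.
# Real lifetime/persistence handling (Accept-CH-Lifetime) is omitted.

def parse_accept_ch(header_value):
    """Parse 'Accept-CH: DPR, Width' into a set of requested hint names."""
    return {token.strip() for token in header_value.split(",") if token.strip()}

def hints_for_request(opted_in, available):
    """Attach only the hints this origin opted into."""
    return {name: value for name, value in available.items() if name in opted_in}
```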
D: There have been some concerns brought up, and hopefully dealt with. If you're not familiar with this, I'd encourage you to look at the most recent draft; the discussion has mostly been happening on GitHub. I think that we've probably got another cycle or so of work to do on this document before it's really ready to go, so maybe early-ish next year, and it's also depending on the adoption of Variants. Right now this document refers to Key; I think Ilya has said that if we adopt Variants, he'd swap it out for Variants.
A: I'm encouraged that the movement on this document is now around fundamental issues of negotiating client hints and their lifetime, and that kind of thing, and not around the details of the set of hints that are provided back and forth, because that would just be [an unbounded] document; it would never converge. We seem to have exhausted most of those issues and gotten down to the fundamental ones, so I think there is actually hope now that we'll get this out, and maybe Variants is not a blocker, but the contemporaneous document that needs to happen to make it go forward.
H: Unfortunately, I think geolocation would be an awesome example to test this out, and a terrible thing to add, so I don't know where we want to go on that. But I agree that working through the fundamentals here is where we're at, and it's not quite there, but I can sort of see an end in sight. So it's not [far off].
E: Well, yes, just regarding Martin's comment about geolocation headers and, I think, privacy in general: I think we will need, at some point, to distinguish capabilities that are available for active content versus passive content, and we already started to address that in the recent [draft].

E: You know, providing access to that [appeals] to all, because, on the one hand, providing that information to third parties is a very important use case, and, on the other hand, it's very easy to abuse it. So this is work in progress, and I think that, for capabilities like geolocation, even though I'm not necessarily convinced about that particular use case, we should address something like that.
B: And we've seen, you know, a number of cases where servers interrogate stuff from the browser that they have no actual intention of using, and they use it for fingerprinting purposes, whatever it is; [that's] the classic example here. The only reason we know that's being misused is because you can observe servers interrogating for it and then just discarding it.

B: So you find yourself in a position where, basically, the server says: please send me a pile of stuff that's sensitive fingerprinting data, which I might happen to use, or might not use, to condition my site, and you cannot tell what I'm doing. That essentially removes the ability of researchers to find out when this is being misused. So I'm not trying to stop people from doing things that are shitty, because that never works, but I don't understand what the, like... what.
D: So, when we adopted the document, I think there was pushback, not on this aspect, but just around general implementer interest, and we adopted it as experimental, as we have with other documents in the past, because we weren't sure if it was going to really get broad adoption. I think that's probably not a good use of the experimental state. Separate from that, the privacy and security concerns:

D: We've had a fairly involved discussion around that. I'd encourage you to look at the document and look at the current approach, and if you want to raise issues, let's work through that. I have seen interest, you know, on the browser side; yes, we have one browser who's interested, that's primarily [it], but other folks are interested from other perspectives, especially people who want to do content negotiation and not touch the content. Sure.
B: I'm not denying that; I mean, lots of things would be convenient. It would be really convenient, like I said, [to have] any kind of access to my hard drive. You can write the standard whatever way you want. As far as I know, there have been discussions about this, but my [concern], especially, is people saying these aren't [privacy-sensitive] when they obviously are.
E: Yes, so, regarding Eric's points: there is an opt-in, so this is not about sending users' private and identifiable info to servers everywhere. The servers have to opt in, and clients can, for example, refuse to respect that opt-in, or check whether that opt-in also correlates with a Vary header or a Variants header, once that's a thing, and, you know, note that this server is asking for data that it's not using, and therefore it is suspicious and can, for example, go on some privacy list.
B: My objection is that there's a difference between active and passive fingerprinting, and for you to say that the standard we're going to apply is that, with permission, we're going to have passive fingerprinting for anything which you could have gotten via JavaScript: that's not an appropriate standard. That's exactly what's wrong [here].
H: And I'll answer that: because you give the site permission once, and that persists, now you have active use of that permission on every single request, as opposed to just at the point where the activation of the request was made. So the material difference here is that when the site asks for geolocation, it asks for the location and gets it; that's an action that can be tracked. But when you have this sort of capability, you have a one-off opt-in, and then you have geolocation being provided on every single request.
A: [...] covers this issue, and the working group will have to come to consensus on whether or not the security considerations can be, you know, amended to actually deal with it, or whether the issue is, you know, just fatal. I think, for the moment, we should track this with an issue, to make sure consensus is reached. I thought there were no open issues in the tracker; well, we'll double-check and make sure that's the state. Anyone else? We have time in our favor.
D: So, I know Mike has made some updates to this document. His task is to integrate the resolution of the issues that we discussed, which is mostly things like [errata], as well as the major document proposals that we accepted earlier on in this process, and I think he has integrated almost all of those. I'm not sure, but I think his intention is that he will have a document ready for working group last call in the near [future].

D: So, hopefully, we'll either be talking about this, and then heading towards working group last call, in London, or before that. Well, we'll wait for Mike to give us an update on the list, since he's not here. That's a good question, actually: we're going to need people to review that document once it's ready. Who's intending, or willing, to review the cookies draft once we have it in a good state? I see a smattering of hands. We need to increase that number; a very light smattering of hands.
H: We've been having a discussion internally about expiration times on cookies and how cookies are expired, and I'm wondering whether or not there's going to be another thing to add to the pile in a very short amount of time.

H: It sort of plays into that. There are some really awful side effects when you start expiring cookies that aren't expired, and having some sort of common strategy across user agents would be kind of nice. So, nothing yet at the moment; we have someone looking into it, but we may want to talk about that at some point. Yeah, definitely.
D: So, when we discussed cookie priorities, my perception at the time was we were very close to adopting it, but we just didn't quite make the line. Are you saying that maybe we should reconsider that?
D: So, BCP 56 came about around, what was it, 2000-ish, 2001, somewhere in that time frame, and it embodied the best thinking at the time about how to use HTTP as a substrate. How people use HTTP has moved on considerably since then, and so we adopted, very recently now, this document. I think this is really important, because now we have a large number of IETF working groups creating new protocols that use HTTP, for some value of "use", and I don't know how deep we want to go into this today.

D: I really just want to get people to start to look at this document. I know that I have a lot more work to do on it; it's very sketchy and very bare-bones now, and I think it needs a lot more examples and a lot more text explaining why things are the way they are. But I want people to start looking at the principles in the document, to make sure we have good agreement on those principles.
D: So, the first thing here, in section 2, is, you know, we need to decide when this document applies. At a high level, the approach that I took was: if you're using port 80 or 443, or you're using URLs with an http(s) scheme, or you identify the protocol as HTTP using one of our ALPN identifiers, or if you're using the message formats we describe, along with the registries that kind of fill those out, then you're using HTTP, and this document applies.
D: [Then there's guidance on] why you don't want to just use HTTP as an RPC [mechanism], because, you know, hopefully you're using HTTP to get some of the value out of it, rather than just to tunnel through, you know, firewalls, which I think is maybe another discussion, for another venue, as to whether that's good practice or not; that seems to be coming up a lot too. And then, down in section 4, are the more specific recommendations: how you specify that you're using HTTP.

D: This is all fairly straightforward if you've been on HTTP for a while: header fields and their definition, referring into the stuff we already wrote in 7231. And then there's some empty stuff here about payloads and interoperating with browsers, because that's one of the big benefits of using HTTP, and I think I want to talk there about things like CORS, access control, authentication, and application state, and then the boilerplate after that. And, again, lots of references.
D
So many references. Okay, so that's kind of a high-level tour of the parts of the document. As I said, I think it needs a lot of filling out, and it's likely going to get a fair amount longer. If folks want to help out with that, that's great, but right now I just want to make sure that we validate the kind of core principles it's talking about, and I'd love any feedback, now or on-list or privately.
A
So one of the reasons I was interested in seeing this document adopted now is, you know, there is contemporaneous work; having seen JMAP go through a couple of iterations now, there's a group that could use a document like this to refer to as a check on their design, and they're obviously not the only ones this time around.
G
D
They have requirements that are not practically met by HTTP today, and so that tells me that we need to make sure it's clear that there are cases where, you know, it's still okay to use HTTP in perhaps non-traditional patterns, and there are reasons for that. But then you need to make an informed decision when you do that.
D
A
This is not the world's most exciting document, but I would like to put out a plea to the working group: you know, the amount of accumulated knowledge about these topics is greater amongst this set of practitioners than really any other set in the world. And so reading this, even sharing anecdotes about where suggestions have worked and haven't worked in the past, can highly inform this and make this a more practical document.
A
Perhaps also read its predecessor at this point, the document it's updating. So, you know, I think Mark's signing up to do all the sort of heavy lifting here, but if you can make sure you read it and just have even general commentary on what it's suggesting, I think that will be very helpful and it can be successful. You know, as the shepherd of this document, what I need to hear is that diversity of input, so I can say that it works.
D
I don't think that's an explicit goal. I think people could use it as such in many ways, but that's a much larger task, and I think we need to assemble more of the components to be able to do that. I have another draft or two that kind of push us in that direction, but they're not for this working group.
D
F
C
Speaking for the wider development community, I would thoroughly agree with the need to deprecate BCP 56 as it currently exists, as a result of the mismatch between its aims and how the ecosystem evolved over time. But I might ask you to consider whether this would be better off as an informational or standards-track document rather than a BCP, if it also obsoleted BCP 56, because I think there are some places in which the exploration of what the options are does not require them to be "best".
C
A
H
A
Stream compression. You can put yourself in the queue if you want to get in behind the present speaker. This is work that has been presented in an earlier form at least once, and has been discussed more than one time. The merit here is fairly obvious, especially as you talk about very small resources, and the trend is towards smaller.
A
That's actually a useful thing for web architecture: the ability to have compression apply across streams gives you substantially better compression-ratio results, and the authors have shown that in past experience. On the other hand, there is no document author or working group chair or area director who wants to be the one that says "oh, compression and encryption, that's okay to put together this time", right? So everyone remains very nervous about it.
A
You know, the prospect of that, and the security analysis of that combination. So, you know, the chairs have actually sought out reviews outside of this working group since we last met, asking some experts in the field to do some analysis on the mitigations presented in this work and other similar work, and we have been unsuccessful so far in getting anyone to sign up to take on that work; maybe because they also don't want to be the person to say compression and encryption
A
is okay this time. I'm not sure; they haven't given other reasons, but I somewhat suspect that's the case. So if you, gentle members of the working group, have contacts or suggestions of experts in the field that might be reasonable to reach out to, who've done this kind of web security work in the past and who would be well-regarded in their conclusions, we would love to have a conversation with them.
E
Yes. So I ran some analysis of the various risks and potential mitigations (and I'm not a security person per se), and that analysis is now part of the document. But I think the highlights there would be that mitigating attacks that are cross-origin is most probably simple, because the client can just say that there is no compression-context sharing between different origins and be done with it. It won't hurt the use cases all that much, and it will significantly increase our ability to protect users there.
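The mitigation described here, no compression-context sharing across origins, amounts to keying the shared compression state by origin. A minimal sketch, using zlib purely as a stand-in codec and with invented names, not anything specified in the draft:

```python
import zlib
from urllib.parse import urlsplit

class PerOriginCompressor:
    """Sketch of the cross-origin mitigation: compression state is shared
    across messages from the same origin (so cross-stream compression
    still helps), but never across origins, so one origin's data can
    never land in another origin's dictionary window."""

    def __init__(self):
        self._contexts = {}  # (scheme, host, port) -> zlib compressor

    @staticmethod
    def _origin(url):
        parts = urlsplit(url)
        return (parts.scheme, parts.hostname, parts.port)

    def compress(self, url, data):
        ctx = self._contexts.setdefault(self._origin(url), zlib.compressobj())
        # Z_SYNC_FLUSH emits a complete chunk while keeping the window alive
        # for the next message from the same origin.
        return ctx.compress(data) + ctx.flush(zlib.Z_SYNC_FLUSH)
```

Because the window survives between calls, a second message repeating earlier same-origin content compresses to almost nothing, while a different origin starts cold and gains no information about the first origin's state.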
E
It's very hard to actually protect against same-origin secret leaks even today, because in most cases an attacker page can fetch the secret content and examine it that way. But right now a page can protect itself against such an attack by various means.
E
E
So if that is indeed the case, then a compression dictionary attack can reveal such secrets. But, as I've stated, I've put together in the document a few ways that would enable us to potentially mitigate that. One of them is padding of transfer sizes in certain cases. Another is to limit this compression to non-credentialed fetches, which would, at least today, significantly restrict its benefits, but it's significantly better than nothing.
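Padding of transfer sizes is typically done by rounding the on-the-wire length up to a bucket boundary, so small attacker-induced changes in compressed size are hidden. A minimal sketch; the bucket size and function name here are illustrative, not from the draft:

```python
def padded_length(compressed_len, bucket=128):
    """Round a compressed transfer size up to the next multiple of
    `bucket`, so all sizes within a bucket look identical on the wire.
    (Sketch of the 'padding of transfer sizes' idea; real schemes may
    size buckets adaptively rather than using a fixed 128 bytes.)"""
    return -(-compressed_len // bucket) * bucket  # ceiling division
```

The trade-off is bandwidth: on average half a bucket of padding per response, in exchange for a coarser size side-channel.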
E
And one more thing related to this space: we have Lode from Google's compression team on the line, who's been working on Shared Brotli, which shares some of these aspects; it can potentially, if you look at it the right way, address some of the same use cases while also addressing different use cases.
A
But, that being said, are there any comments in this general space? Yeah, as I said, I think this is a problem that HTTP/2 would like to solve. You know, we have not adopted this draft on security concerns, but we keep talking about the topic because I think it is meaningful, and it is, in a way, a meaningful criticism of HTTP/2, so it's something we should continue to talk about.
A
R
H
Martin Thomson. I think Patrick basically summarized it right from the outset. Yeah, there are various things that we know we can do, but there is also a large tract of things that we just have too much uncertainty over. And it's funny, because to some extent we have this problem within resources that intermix secrets and attacker-controlled information. But once you start crossing resource boundaries, now we're talking; it's a very different game at that point, and opening that potential up is somewhat worrying, particularly when things can be applied generically.
C
Ian Swett, Google. I did have a question. It sounds like there are two kinds of mostly unrelated solutions to potentially the same problem space. One is the draft that is written up here, and another is this kind of custom Brotli dictionary, or other custom dictionary, approach.
C
Do we have an idea of the relative compression efficiency of the two approaches compared to one another? Because I think it seems like it's easier to analyze the publicly shared dictionary option than the cross-stream compression option. And so, if they're similar from an overall performance perspective, I mean, I like the cross-stream one; it also...
M
C
E
The cross-stream compression in h2 proposal is mostly addressing the bundling use case. Right now people send many small files over h2, and these are significantly better compressed as one bundle. At the same time, bundling runs at a custom granularity, and it would be great if we could tell developers to stop bundling and rely on the transport to solve that problem. The use cases are different, though: out-of-band shared dictionaries can potentially give you better performance, but they have other issues and are not necessarily compatible with all use cases.
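The bundling point is easy to demonstrate: compressing many small, similar resources with one shared deflate context beats compressing each one in isolation, because later files can back-reference earlier ones. zlib here is just a stand-in for whichever codec a real proposal would use:

```python
import zlib

# Fifty small, mutually similar resources, as an app might ship as
# individual JS modules over h2 instead of as one bundle.
files = [b"export function f%d() { return %d; }\n" % (i, i) for i in range(50)]

# Per-message compression: every file gets a fresh compression context,
# paying header overhead and losing all cross-file redundancy.
per_message = sum(len(zlib.compress(f)) for f in files)

# Cross-stream compression: one context shared across all files, which is
# effectively what a single bundle gets.
shared_ctx = zlib.compressobj()
cross_stream = sum(len(shared_ctx.compress(f)) for f in files)
cross_stream += len(shared_ctx.flush())

print(per_message, cross_stream)  # the shared context wins by a wide margin
```

The gap grows with the number of resources, which matches the observation that the trend is towards many smaller resources.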
H
Yep. So, on top of basically what I was gonna say: Vlad did some analysis on this and found that the opportunity cost of sending a large blob of shared dictionary ahead of all of the small things we wanted to compress wasn't particularly good for performance when you didn't have some sort of prior loading of the dictionary. So that's something to keep in mind with all this. The compression performance here is actually pretty amazing; the problem is balancing that out with other things.
H
A
It's very interesting, yeah, and persuasive, as far as that goes. Yes, there's also a third entrant in this space that's been kicked around, and I'm not sure how public that is, but there's a growing set of interest in this problem, and none of the approaches have security solutions much different from what we're seeing.
H
S
Hello, can you hear me? Yes? Okay. I am Lode, from the compression team in Google Zürich, working on Shared Brotli. We are creating better dictionaries for Brotli compression and looking into shared dictionaries, and we have also made a spec, a draft of a specification, available, currently on GitHub under google/brotli, and we are interested in collaborating with these efforts.
S
P
H
So, Martin, just to feed back on that last comment: the spec, the proposal as written, is a patch on the Fetch spec, so think about that for a moment. Yeah, my little mind's blown; it's incomprehensible as a result. Anyone who's read Fetch will understand that it's great for some things, but not great for others.