From YouTube: IETF96-HTTPBIS-20160718-1400
Description
HTTPBIS meeting session at IETF96
2016/07/18 1400
A
Minutes are being taken on etherpad; it's linked from the agenda, and there's an audio feed as well, which is the main vehicle for remote participants. And — let me check — yes, I do, unless you're willing to, you know, take over. No? I think we're good. You can help — because it's an etherpad, you can help. Yeah, that'll be lovely. Okay, it's after two, so we should probably start.
A
No — okay, so we have a jabber scribe; it's a new jabber scribe, so please be gentle, Charles. Thank you very much. And Chris Newman is being our actual minutes scribe, so thank you. Did you find the etherpad? Great — so he's doing that in an etherpad; if people want to jump in and help him there, or correct what has been said, or whatever, that would be helpful too.
A
Awesome, you're doing well — a diverse crowd so far. Next one: so, is Jana here yet? Oh, that's a bit problematic. Sure, please do. That would be most appropriate.
B
Not with this gentleman sitting next to me, I'm not going to. So: many of you will have attended the QUIC bar BoF in Prague some time ago. This is the QUIC actual BoF — I'm Ted Hardie speaking. This is a working-group-forming BoF. The intent at this point is to take the formerly described monolith of QUIC and break it into appropriately sized building blocks, and associate those with work here in the IETF.
B
So some of the things that you may have seen in previous drafts of QUIC have been broken into individual drafts, so that it's reusing IETF technologies better and is more reusable by IETF technologies. Bullet two: it is over UDP. Oh look, I'm channeling this guy — come and stand next to me, and they can see the person on top. Weird. Bullet point two — shall we try and read it aloud together? Sure: "a secure transport, initially targeted at HTTPS."
A
Good. So I think most people in the group are aware of that work, hopefully. I think it's important for us because, as he said, you know, one of the core targets is HTTPS, and we want to make sure that there's a graceful transition to a new protocol — a new mapping of HTTP semantics to a wire protocol. We need, as a community here, to keep an eye on that.
A
So, moving on: specification status. We have two specs that are nearing the end of our work on them, hopefully. One is the HTTP encryption content-encoding. We had a working group last call on that, which ended just a little while ago, and we had some feedback on it in working group last call. I think, you know, with my chair hat on, I'd like to see a little more discussion of it. It seems like there's not yet been broad discussion, but the people who have reviewed it seemed fairly happy.
A
We did get some feedback from EKR more recently, on list, after the working group last call closed, and I think he wanted to open up some more discussion about that. From my standpoint, I'm happy to sit on this and let that discussion take place before we go to IETF last call; I'm also happy for it to happen in IETF last call. So, EKR, did you want to kind of expand upon the concerns that you had?
D
Sure — let's see if I can do it from memory. So, when this was initially designed, it was pitched as a simple mechanism, without all the clutter. As background: the IETF has made a number of attempts over the years at designing secure messaging protocols.
D
Protocols of this kind simply come with some form of key exchange or key establishment, and they come with a framing layer for the data, and they come with an integrity check. And so this was framed as being like: we're not going to take all that baggage of S/MIME or JOSE or whatever; we're just going to put it all in one nice little package — only the things we need. And over the — you know, as many standards do over their lifetime — this thing has ballooned, and it's now picked up a number of features which I'm not sure are really needed. And, conversely, it has this extremely odd attitude that the only symmetric algorithm it supports is AES-GCM, while simultaneously supporting, like, you know, both out-of-band authentication and integrity-and-confidentiality keys, plus both elliptic-curve and finite-field Diffie-Hellman. And it's an odd set of design choices.
D
Thank you — but no.
D
That's an odd set of design choices, and so I think it would be worth asking the question of what the right set of design choices is and what the actual use cases are, and whether this should be slimmed down or fattened up. And if we really need something this fat, maybe we actually don't need anything at all, but rather should just deal with the old things that were already nice and fat.
E
Yeah. So — Thomson — having read your email and had very little time to think about it: there is a possibility here of maybe refactoring these sorts of things. The primary complex parts of this are only really used by the web push stuff, and it may be that we can sort of take those pieces out, shove them over there, and deal with that, and keep this the relatively slimmed-down thing that it was originally — which I think might go a long way to addressing your points on the DH stuff.
E
Here, that removes all that as well, because they have a very narrow profile. Web push has a very narrow profile that we could then just use directly, and we could get rid of all the baggage that you have around Diffie-Hellman groups and analysis of other things. So let's talk about doing it. I can.
D
The thing is, it has now gotten to the point where it's not obvious to me that it is correct. Not that it's bad — I'm just saying there are enough pieces involved now that I'm not sure. I couldn't demonstrate this is correct without some more work. The original version, I thought, was relatively obviously correct — maybe it wasn't, but it seemed obviously okay.
E
I don't know what other people think about it. My view is that people are doing web push, and if I do this right, it won't actually change any of the bits on the wire for those guys. There's a large number of implementations of web push, and that's one thing that we've got to be a bit sensitive to.
A
Right. So, I suppose, working group: stay tuned, and when we get that next draft out, it would be great to get a few more eyeballs on it. The other one is opportunistic security, and I believe with that draft we're just waiting for a revised internet-draft before taking it to IETF last call. I think we've addressed all the issues that came up in last call, from memory.
A
So, let's move on. Again, the drafts that we currently have active are listed here. Let's start with RFC 5987bis, which Julian tells me we do have something to talk about — if I can get this to… let's see.
G
Next slide, please. So this is about a way to put non-ASCII characters into parameter values in HTTP header fields. That was invented a very long time ago for MIME header fields, and I profiled it for use in HTTP in 2010, because it was already implemented in some browsers; and after the work of writing down that standard, in the end the browsers that hadn't implemented it yet did so. So this is now widely deployed and interoperable.
G
There are a few changes I made compared to the Proposed Standard. One of which: in the original spec we required support for both UTF-8 and ISO-8859-1, because that's what existing implementations actually did. It turned out that Microsoft, when they implemented it, chose to support only UTF-8 — for very good reasons — and actually one encoding that works is sufficient. So we are removing the ISO-8859-1 requirement from the spec, and all the normative references to ancient RFC specs are gone.
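For reference, the surviving ext-value syntax being discussed (UTF-8 only) can be sketched in a few lines — a minimal illustration, not a full RFC 5987bis implementation:

```python
from urllib.parse import quote

def ext_value(value):
    # Encode a parameter value as an ext-value, using UTF-8 only
    # (the one required charset that survives in the -bis draft).
    # Everything outside the attr-char set gets percent-encoded.
    return "UTF-8''" + quote(value, safe="!#$&+-.^_`|~")

# e.g. Content-Disposition: attachment; filename*=UTF-8''%E2%82%AC%20rates
header = "attachment; filename*=" + ext_value("€ rates")
```

A recipient only needs one decoder (percent-decode, then interpret the bytes as UTF-8), which is the interoperability point made above.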
G
So I think I have done all the changes I wanted to do. Some of the changes I did last Friday, a few days ago, in the evening, so I probably introduced some breakage, and I need to review what I submitted on that evening. Once I've done that, I think we're ready, technically, for working group last call.
G
We arranged that with the IESG back then, and the RFC Editor was okay with that. So this now works for this draft, but it's not totally clear whether we can leave things as they are. If we want to submit this for publication, we'll need to find out this week, I guess; otherwise we have to back out these changes.
A
You know, having a clean encoding — but once you jump through all the different hops of the different pieces of software that are going to touch a header, you can't make assumptions about encoding. You can't just, you know, put it in UTF-8 and know that all the frameworks and the client-side APIs are going to handle that properly. And so there may actually be a place for this encoding even going forward.
A
So that would be a change to the prose on the motivation. I'm starting to think that way, so I might try and write up a poll question to see if people agree with that or not — or at least have the discussion. This kind of touches on the discussion of JSON field values as well, but we'll get there; it might have a part to play.
A
So our next draft, if I can find this again, is Key. In Buenos Aires we talked about parking this spec. I still see a decent amount of interest in the capability that's provided by this spec, but I don't see as much interest in implementation. There have been people who have said they've started an implementation, but we haven't yet seen any interop or any full implementation in a cache, and I'm uncomfortable pushing this spec forward until we get some implementation experience — mostly because we kind of have one shot at it.
A
You know, if you want to change the format of how it works, or change the vocabulary that you have for talking about request headers, you'd have to introduce a new Key-like header, and that gets pretty unscalable pretty quickly. So if anybody wants to talk about Key, we can do that now; but for now it sits parked until we get more implementer experience or interest. And then, I guess, the follow-on question, Alexey: are you comfortable with having a document parked like that, for that purpose?
A
This is work that, as I understand it, is pretty much ready to go to working group last call on our side, but there's a corresponding integration question of how to handle client hint generation in the browser. That's happening in the Fetch specification in the WHATWG, and so we're waiting for that to be completed and solidified before we actually ship this spec. So for now it's on hold. If people want to start reviewing it, that's fine; I'm actually fine doing an extended working group last call on it.
A
And I think you opened an issue about that, and we discussed it a little bit. My personal feeling is that it is effectively an informative reference: you don't need to use Key to use Client Hints, it just makes it much more efficient.
A
Yes — back to Key for a moment? Oh right, that was just about the fact that we parked this until we get more experience with it. That's right; and that's a future feature, and these are all things that we thought we could sit on until we knew that we were actually going to move forward with the spec. Okay, good. And for Client Hints — yeah. So you opened the issue about the normative reference to the Key spec; I think that can be flipped into informative.
A
So, to channel Julian: he thinks it depends on the specific text, and it also does require some other changes as well. Okay, so let's talk that through. And this issue here, I believe we said, is really just a placeholder for discussion in the WHATWG — yeah, and with the HTML spec, actually, as well. Oh.
A
I'll assume he will — thanks. The fourth spec is the ORIGIN frame.
L
Yeah — nothing; I didn't have time to file an issue, but is it specified anywhere how ORIGIN frames interact with Alt-Svc? For example, what is the origin specified when you do an Alt-Svc to somewhere? It seems like we should use the original origin as the rule for ORIGIN frames.
M
Hi, I'm Ben Schwartz from Jigsaw, and you know, I've spoken with some HTTP experts who say that you really should never trust the content termination — if you have multiple responses in a stream, you shouldn't really trust the distinction between them. Essentially, you shouldn't concatenate responses from independent trust anchors, and so I'm wondering how that applies here. I don't see anything in the Security Considerations section indicating whether this is considered appropriate for use with a combination of origins that are not mutually trusting.
A
The context here is that this is a facility to help with HTTP/2 connection pooling. The advice that you are talking about, in my experience, is usually brought up for HTTP/1, and this is not an HTTP/1 mechanism. HTTP/2 already allows connection pooling, where you can have multiple origins on a single connection under certain circumstances, and this is about more carefully allowing the server to specify which of the origins that a certificate covers are to be used on that connection.
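The pooling rule being described can be sketched roughly as follows — a simplified illustration with assumed helper names, ignoring DNS checks and the details of real certificate validation:

```python
def san_matches(host, san):
    # Simplified certificate name check: exact match, or a one-level
    # wildcard like *.example.com covering a single left-most label.
    if san.startswith("*."):
        suffix = san[1:]                       # ".example.com"
        return host.endswith(suffix) and "." not in host[: -len(suffix)]
    return host == san

def can_coalesce(host, cert_sans, advertised_origins):
    # Reuse an existing HTTP/2 connection for `host` only if the
    # connection's certificate covers it AND the server listed the
    # origin (the stricter, ORIGIN-frame-style opt-in discussed here).
    return any(san_matches(host, s) for s in cert_sans) and host in advertised_origins
```

Without the `advertised_origins` check, this degenerates to plain HTTP/2 coalescing, where certificate coverage alone is enough.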
I
Okay — is that better? Mike Bishop. I think the other comment there is that with vanilla HTTP/2 connection coalescing, all the origins you deal with are in the same cert, so there is likely more of a trust relationship there. Now, if we go down the route of supporting multiple certs, which we're going to talk about Friday, that's when things start to get interesting: do those multiple origins and multiple certs have a trust relationship, or do they just happen to be on the same server? Sure.
E
Sometimes you're just staring at what's on the screen. Here you are talking about flags, and it turns out that we actually now have a pattern that's evolving, where we're using these flags to sort of build up an image of what a set of things is. Yes.
A
Yeah — and since Mike brought it up: we will talk on Friday about the server certificates thing. From what I've seen so far, there's fairly strong interest in having a facility to add new certificates to a connection somehow, and the question that's in the back of my mind now is: should we maybe hold this specific specification back a little bit until we know more about that? Because it seems like there's a level of coordination that might need to happen there. If people have thoughts about that, I'd like to hear them.
A
Consider it fixed. Oh — Martin, you beat me to it. So I think the outcome on this particular issue is what Martin was just saying, which is: let's figure out some generic patterns for how to handle working with sets of metadata in HTTP/2 frames, and then apply that here.
A
So really, these three issues are all of the same kind of ilk in that sense. So, what I just spoke about — holding this back a little bit, not stopping discussion but not pushing forward to working group last call quite yet: are people happy to wait around a little bit longer on it? I see nodding heads; I don't see any shaking heads. My co-author is nodding his head, which is good. Okay.
G
The issue we face is that in HTTP we have the possibility that a header field repeats within a message, and we can't change that — at least not in HTTP/1.1 or HTTP/2. And the question is: is the best way to define header fields in a way such that they can actually work with multiple values? Or should we encourage people to reject messages where a field value repeats, and concentrate on a syntax that does not allow this notation?
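For context, the HTTP/1.1 rule that repeated field values may be combined with commas — the constraint any new header syntax has to live with — looks roughly like this (a toy sketch, not a full field parser):

```python
def combine_fields(fields, name):
    # HTTP/1.1 lets a recipient join repeated header fields with ", ",
    # preserving order; a new header syntax has to survive this.
    values = [v for (n, v) in fields if n.lower() == name.lower()]
    return ", ".join(values)

fields = [("Accept", "text/html"),
          ("Host", "example.com"),
          ("Accept", "image/png")]
# combine_fields(fields, "accept") -> "text/html, image/png"
```

This is also why comma-containing values (like dates) and set-valued headers interact so badly with repetition, which is what the discussion above is circling around.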
G
Looking at JSON: independently of the actual JSON syntax on the wire, there are JSON data model constraints that don't make us very happy — like the fact that, although it allows members to repeat in the message format, the parsers don't actually tell you what happens. So we have a potential interop problem here.
G
There's also some ambiguity about the way numbers are transported in JSON, which makes JSON numbers a potentially bad idea to use as a protocol element, because JSON allows float notation and some kinds of imprecision and so on. And then, finally: is this just something that we want to use on the wire for HTTP/1.1 and 2, or is this something that should solve all the header field value problems in future HTTP specs? So — is there something we want to do for the short term or mid term?
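The number concern can be illustrated concretely — these are generic JSON behaviors, not claims about the draft itself:

```python
import json, math

# JSON's number grammar is unbounded, but implementations typically map
# numbers to IEEE-754 doubles, so values can change meaning in transit.
assert math.isinf(json.loads("1e999"))            # silently overflows
assert json.loads("0.1") + json.loads("0.2") != json.loads("0.3")

# Integers above 2**53 stay exact in Python but not in a double-based
# parser (e.g. JavaScript), so peers can disagree on the same document.
assert float(2**53 + 1) != 2**53 + 1
```

For protocol elements like byte counts or sequence numbers, that cross-implementation drift is exactly the interop hazard being raised.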
E
I think, to a large extent, the discussion that we've been having has sort of been hedging around the core problem, which is: how much of this is a schema-aware language that we're defining? Given that HTTP/1.1 header fields are essentially — I mean, you have to know what the header field means before you can parse it, so it's essentially schema-aware — but then each one of them has this informal… well, a lot of them have this informal schema-free component to their definition, where they have parameters or some other way of extending them.
E
So how much of this is a schema-free thing, versus a fully schema-defined syntax? And that sort of goes to the decision that you made in going with JSON, as people actually understand where the boundaries are in JSON with respect to extending things, and where not to go — such as: if this thing is defined as an integer, I can't jam a blob or an object in there and expect people to be happy. So I don't do that, even though it's possible in JSON.
E
I'd like us to sort of maybe step back a little bit at this point. I know that PHK talked about using CBOR in this context, and I think that really, for me, cemented the problem, which is: we don't actually understand what it is that we want out of this process. We don't understand where we need the sort of schema-free constructs versus the very tightly defined definitions — and obviously there are benefits at both ends of that scale.
E
Obviously, if we can manage to pack dates into the most efficient representation imaginable — we had a big discussion about this during the development of HPACK — you can get down to five bytes for pretty much every single date that's ever on the wire, versus the 13, 14, 18 — I can't remember exactly what the number is now. And there are some real efficiency gains there, not just in terms of saving bytes but in all sorts of other processing gains. So we need to walk that space.
E
I think, a little more carefully: I'd like to encourage us not to commit to the fact that this says JSON in the title. I think the discussion has revealed to me that JSON is probably a bad idea for this, so we need to be a little bit more open about whether this is something where we use something that exists, or whether we — yeah — take the bull by the horns and invent.
O
Erik Nygren. Another area that's directly related to this — and I think this has the possibility to either make things better or much worse, depending on some of these items — is the class of recent vulnerabilities around how different servers in a path process different header fields: whether they're repeated, line-wrapped — some of these things around how you deal with repeated values.
O
If we're not really clear about that, or if we pick things in slightly the wrong direction, we could make things a lot worse there. Or, if we are much more clear about some of this — and I think this goes to Martin's comment about figuring out what we are trying to solve — we could actually make things much better, if we could be much more clear about when things start being ambiguous; those are the cases we must start protecting against.
A
So, when we talked about adopting this, we were pretty clear that, as Julian said, it might not look much like what went into the process. And from the recent discussion on the list, it's become pretty clear there's a lot more skepticism about using JSON than there was even a week ago. So I'd really like to see that discussion continued. I'm happy to keep this document adopted as a placeholder, an indicator of the interest in the working group.
A
In this topic — and again, when we adopted the document, the primary interest that I heard was, you know: specifying new HTTP headers is really hard, and effectively Julian is a bottleneck in doing so — you know, if you want good review, you have to take it to Julian. We're now having other requirements being put upon what this work might be, and that might be a good thing, but I think Martin's absolutely right.
A
So I think there needs to be a lot more discussion around that, and it seems like the discussion on the list is becoming healthy, which is good. Hopefully we'll see some more drafts where people can flesh out some of these ideas. But I expect we're going to be talking about this one for a little while, I think. Yeah.
G
I mean, everybody has their own set of goals for this, at least among those involved in the discussion, and I think it's pretty clear that we are probably not going to be able to address all these goals with one format — or at least not within, like, 24 months. So I think we should have a meta-discussion about what this is supposed to do. How is this supposed to help? What is the target? Because that will influence what we can do — I mean, beyond discussing how header values could be represented.
A
Last but not least for today — and I see we're doing very well with time — we have the cache digest specification, which we recently adopted. And the internet is very slow all of a sudden.
A
And so we don't have any open issues on it, but there has been some discussion. I think Martin raised some issues around how complex the document has become since we had draft 00, which I think is fair feedback, as an author on that. So this is one of the documents where I'm an author; we have a separate document shepherd — that's Natasha Rooney, here in the front row. Wave, Natasha! Okay!
A
Personally, you know, one of the reasons we did that was because it was hard to anticipate all the use cases for cache digests, and you can imagine a number of different patterns that both sites and browsers might want to use them in. And so we went a little bit shotgun on it, to see if we could capture those. I am more than happy to have a discussion about what the high-order use cases are, and to pare it back down, if that's an appropriate thing to do.
A
I really should have — I actually meant to respond to that, and I didn't. We don't really have, you know, URI normalization for the cache, so why would we have it for a digest of the cache?
E
I think this is really good stuff. I think there's another basic fix that I need to provide around the probability stuff — which I was thinking about at four a.m. this morning — but this is probably something I'd like to see go out as Experimental, rather than Proposed Standard, and relatively soon, because I think once we sort through all of those minor issues, there's just not that much to it.
A
What I'd like to understand is, at least initially: you know, as a browser cache owner, what kind of frames would you want to send out? Would you send out both fresh and stale? Would you want to send a complete set, or would you want to clamp it down to a certain size? There are a lot of open questions there, especially on the browser side, but also for the consumers, of course, as to what they find most valuable. But to me that's more intuitive — I kind of know what I want to say.
E
Having it as an Experimental document that's published, you know, in six months' time, rather than in three years' time when we all agree it's perfect — because we're not going to work this out until people actually try to start using it for real. Stefan's done some implementation work on the server side, which was great, but without a—
A
Process-wise — right, yeah, because of the registry policy. So, one of the patterns that I started with in a different draft, but didn't carry on to this one, was: I burned an experimental frame ID, but I put magic at the beginning of the frame payload to disambiguate it from other experimental uses. That's something we could do.
A
Okay, I'll have a side discussion with you, and we'll work on some of the use cases. Is anyone intending to implement this, on the client side or on the cache side?
A
Okay. I know that on the list, and in some back-channels, I also got interest from people using this for non-browser-to-server cache digest communication — so cache-to-cache. For folks who remember: Squid has cache digests, and has had them for probably two decades or so, where it uses them for inter-cache communication. I'm not sure we should explicitly target that use case, but we shouldn't explicitly disable it either, I guess. Oh.
A
Okay, anything else about cache digest? Because — did you want to say anything more? No? Good for now. Okay, all right! Well, we have half an hour left, but I think it's probably a natural stopping point now. On Friday we have two hours, and we have a fairly substantial discussion of the cookie spec, and then a few different proposals that have been in flight for a little while — and I think we do have enough time for all that.
A
Okay, that's fine. And Gabriel, I'm guessing you're not ready for the Internet of Things discussion quite yet, are you? Friday? Okay — we're a working group that likes to get its slides in early; that's a good thing. Okay! Well, let's break early then, and give everybody some time back. Sorry.
R
Okay, so tomorrow morning, eight thirty — yes, in the morning — in the room over there, Charlottenburg 1, we're going to have a meeting talking about blind caching and different uses of it, such as secure content delegation. You're all welcome; if you're interested in this topic, please prepare — at least take some look at the drafts announced on the mailing list, etc. They're available. Okay, so—
N
So typically you would combine this with a really, really large max-age, which is what you see people doing today. This goes to a design pattern you typically see on the web, where resources — really sub-resources: images, JavaScript, that kind of thing — never change; when they do change, they just get renamed, and the references to them use the new names, right?
N
So Facebook made a public post on this topic, and they said some phenomenal amount of their traffic — and this is because I have not re-read my own blog post to prepare, from slides which don't exist — maybe twenty percent of their traffic was essentially 304s for resources that had not been alive as long as the max-age they were published with. Right: so they publish a max-age of a year, and people, you know, four minutes after the resource came into existence, are saying: hey.
N
Twenty percent is a huge number — a huge number of round trips from, you know, the latency-sensitive client point of view, and a huge amount of load on the server to look this up and generate a 304. So, if you look at somebody's wall page on a lot of social media sites — like, I don't know, if you're looking at a relationship status and it says, you know: single — reload — single — reload — single — reload — in a relationship.
N
What's behind this? So, just for fun, we sort of ran a mock-up with Facebook, and it performed extraordinarily well: taking what you would expect — the two hundred resources on that page, click and reload — and turning it into three or four transactions. Really nice stuff. I published this on my blog and started getting a bunch of tweets, five or six, from other very small little websites. They started publishing this because it was a really easy little attribute to do, and that seems to be working out well.
N
So this is currently, I believe, in the Firefox beta channel. The reason I wasn't going to bring it forward as an internet-draft to this meeting is that we haven't put it out on release yet and run the official experiment to see what the numbers look like. But that is the intention, and maybe in Seoul or something we'll bring a draft forward to make that happen. The most interesting thing people usually want to talk about is its interaction with max-age, and so we should clearly talk about that.
N
The way I have it defined, the semantics are: you know this will always return a 304, so don't bother asking. I do honor max-age, in case you're doing hit counters or something like that; but what happens at the end of max-age is that a non-conditional request is done for the object. So you'll never give me a 304 — I don't ever send If-Modified-Since — and that was the semantics of you putting immutable on.
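The semantics being described can be sketched as a decision function — an illustration of the behavior as explained here, not any browser's actual logic:

```python
def request_on_reload(cache_control, stored_at, now, hard=False):
    # What a client sends when the user reloads, per the semantics
    # described above for the immutable extension.
    directives = [d.strip() for d in cache_control.lower().split(",")]
    max_age = next((int(d.split("=")[1]) for d in directives
                    if d.startswith("max-age")), 0)
    if hard:
        return "unconditional"      # shift-reload ignores the cache entirely
    if "immutable" in directives:
        if now - stored_at < max_age:
            return "none"           # still fresh: never revalidate
        return "unconditional"      # past max-age: full refetch, no If-Modified-Since
    return "conditional"            # plain reload: If-Modified-Since / 304 dance
```

So `Cache-Control: max-age=31536000, immutable` turns the reload storm of conditional requests into no requests at all while the lease lasts.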
A
Right here — I could not hear you, which is really weird. Only— that's weird. The biggest change in semantics is that when this is present, the reload respects max-age, so you don't issue a request on reload. Oh really? That's correct, I guess — it is that you get an automatic…
N
You mean without the presence of immutable? Yes — so we interpret max-age as essentially a lease period for which you can reuse the response, but it does not make a guarantee about the content of the entity on the origin server. So we interpret someone pressing reload as looking for an update of the data on that page.
N
Yeah, yeah — so Ctrl-F5, or Shift-F5, depending on where you are and what browser you're in, has always meant hard-reload semantics. The normal reload sends off a bunch of conditional requests — 304s — on a page, and the shift-reload ignores the cache and loads everything from scratch. And that's still what happens, even in the presence of immutable. Okay.
N
We have deployed — and more recently Chrome has deployed — Brotli as a content-encoding option for the web. It's been really successful in terms of efficiency, so you should look into it. On the client side it's very similar in time and space to gzip — very, very similar. It's harder on the server side: if you're pre-computing, it's kind of a no-brainer, and if you're not pre-computing, it's a harder decision.
N
What I wanted to bring up here, though, was purely from an interop point of view. We've done it over HTTPS only, and that has largely been successful in terms of just learning what we're able to deploy. There was one problem, with one piece of antivirus software, due to its failure mode.
N
It passed through "Accept-Encoding: br" just fine, and when it got something that was content-encoded and it did not understand it, it decided that was actually okay: it gave the client the direct byte stream of the message body, but it stripped the Content-Encoding header — and that didn't go so hot. But that got bug-fixed out there, and that was the only big problem. So it's looking like, at least in HTTPS contexts, more of these encodings, negotiated in the usual way — no sniffing or anything like that — are looking more possible.
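"Negotiated in the usual way" means driven strictly by Accept-Encoding. A minimal server-side sketch (assumed preference order, no q-value handling):

```python
def pick_encoding(accept_encoding, supported=("br", "gzip")):
    # Choose a Content-Encoding strictly from what the client listed --
    # no sniffing, which is the property that made the HTTPS-only
    # Brotli rollout workable despite middleboxes.
    offered = {tok.split(";")[0].strip() for tok in accept_encoding.split(",")}
    for enc in supported:          # server preference: br first
        if enc in offered:
            return enc
    return "identity"

# pick_encoding("gzip, deflate, br") -> "br"
# pick_encoding("gzip, deflate")     -> "gzip"
```

The antivirus failure above is the inverse: the client honestly advertised `br`, but a middlebox removed the Content-Encoding header after the fact, breaking the negotiated contract.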
A
Thank you, Patrick — you did, it's awesome. Okay, so we don't have quite as much extra time left anymore. Does anybody have anything else they'd like to bring up? We can do any other business today, and then Friday is left for the rest of the agenda.
A
So there was a discussion — as I understand it, you need a slight tweak to the semantics of the bytes range unit, and the question was: can we modify the semantics in place for bytes, or is a new range unit required? This draft was proposing the latter part, which is a new range unit. And there was a discussion, as I recall — I think Roy was advocating reusing, or changing, the semantics of bytes if we got data — and I don't mean to put words in his mouth, I'm sure he will correct me.
A
If we get appropriate data that it's reasonably safe with implementations, that is, and I don't think we have that data yet. Because there's a risk to introducing a new range unit as well, not only in terms of, you know, it will break or won't give you the benefit of caching, for example with intermediaries that don't understand this new range unit, but also we do have some data that there are some extremely bad implementations out there that assume that, if Range is present, it's always bytes. So there's risk on both sides.
F
All right. So at least based on our own testing of the applications and the libraries we were using, trying to change the semantics of bytes itself was actually very risky, because we actually encountered libraries that just kind of flipped out. So that's why we are proposing, I think, that the safer option is to
F
go with defining a new unit, whatever you want to call it, and I wanted to understand if there's any objection to defining a new unit. So, granted, with a new unit there might be some caches or some processes that, just because they're expecting just bytes, do something weird. At the very least, the worst case ought to be that they should not cache it, because they don't understand it. Exactly.
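That worst-case-correct behavior can be sketched: a cache parses the Range unit and, when it doesn't recognize the unit, simply declines to store the response rather than guessing at its semantics. This is an illustrative sketch only; `exampleunit` is a stand-in for whatever new unit the draft would register.

```python
def parse_range(header: str):
    """Parse a Range header value into (unit, list of (first, last) pairs)."""
    unit, _, spec = header.partition("=")
    if not spec:
        raise ValueError("malformed Range header")
    ranges = []
    for part in spec.split(","):
        first, _, last = part.strip().partition("-")
        # An empty first or last byte position is left as None (open-ended).
        ranges.append((int(first) if first else None,
                       int(last) if last else None))
    return unit.strip().lower(), ranges

KNOWN_UNITS = {"bytes"}

def cache_may_store(range_unit: str) -> bool:
    # Worst-case-correct cache behavior: if the range unit is not
    # understood, decline to cache rather than assuming bytes semantics.
    return range_unit in KNOWN_UNITS

unit, ranges = parse_range("bytes=0-499,1000-")
assert unit == "bytes" and ranges == [(0, 499), (1000, None)]
assert cache_may_store("bytes")
# "exampleunit" is hypothetical, standing in for a newly registered unit.
assert not cache_may_store("exampleunit")
```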
F
If they do anything more than that, then they are really, really bad implementations, right? Yep. So should we actually try to not care about those really bad ones, or should we try to find out if they're really that bad? So I think what we're proposing is: is there any objection to kind of moving forward with proposing a new byte range unit?
A
It's certainly possible. I mean, in terms of adopting the new draft, we'd look at everything that we'd look at for the rest of what we adopt, which is: is there implementer interest in it? Does it address a broad set of use cases for HTTP? You know, we generally try not to adopt things that are just for a specific application, but rather things that are general facilities for the protocol, and I think that, certainly, this probably is okay.
A
By that latter metric, I'm less certain right now whether there's implementer interest. And of course, if we start a new range unit, there are potentially other things that might be interesting to do at the same time, other modifications that people might want to make to the semantics.
A
So, I forget who did it, but someone did the research on this, and the bugs that they found were in things like client libraries for media, for streaming video, and so this isn't necessarily an intermediary problem. We can certainly go and look at Squid and Traffic Server and a few other open-source implementations at least, but these were in, you know, open-source and closed-source media libraries consuming streaming video, which is, as I understand it, your core use case. So, so.
G
I mean, we haven't talked about that today, but we still plan in the midterm to actually open up the core specs and revise them for full standard. So the question is whether the change that Roy has thrown in is something that we could do when we move to full standard. And another thing that comes to mind is the somewhat shaky relationship between byte ranges and other content codings, and whether we would like to do something about that as well.
A
IETF review, yeah. So, because this working group exists, I think my intuition is there would be a substantial amount of pressure to do it here, and that's completely reasonable. I think we just need to have the discussion with implementers and make sure that it gets appropriate review. Yeah.
A
Talking to the right people. For now, what we can do is start a discussion on the mailing list and get people to spend a little time reviewing it, and see if we can come to some idea about whether we want to adopt it. Thank you.
O
Since we seem to have time, even though that seems to be shrinking: there was a topic that was raised on the list back in March about, once we start having early data in TLS 1.3, what is the interaction?
O
What's the interaction and binding into HTTP? And we haven't really talked about that much recently, that I saw, but it seems like something we're potentially going to need to raise to the TLS list, saying: hey, there's this special API here, and there's, I think, going to need to be some clarification of when it's okay for HTTP clients to use that API. Yes.
E
It's actually a really, really easy thing to answer. So we had a bunch of discussion about this one, and Patrick made some interesting discoveries about how people replay requests in the real world, and it turns out that you have to replay POSTs. Yes, the internet depends on it, the web depends on it, and if you don't, everything breaks. And there are some really funky examples Patrick can go into that are just mind-bogglingly terrible, but.
E
The fact of the matter is, the web has this vulnerability already. Even if we didn't necessarily realize it, it's been there for a long time, and we kind of depend on it. So the API that we ultimately settled on was: you form the socket, you write to it. You don't need other signals; there's only one signal that you need, which is the server told you that your 0-RTT data was not accepted, please replay, and then you go off and obediently replay all of that stuff again. So.
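The single-signal API described above can be sketched as a retry loop: the client writes its early data optimistically, and on the one signal that matters (the server rejected the 0-RTT data) it replays the same request over the completed handshake. All the names here are invented for illustration; no real TLS library API is implied.

```python
def send_request(transport, request: bytes) -> bytes:
    """Send a request as 0-RTT early data, replaying it once if rejected.

    `transport` is a stand-in with two hypothetical methods:
    send_early(data) returns None when the server rejects the early data,
    and send(data) sends over the fully established connection.
    """
    response = transport.send_early(request)
    if response is None:
        # The one signal that matters: early data was not accepted.
        # Obediently replay the same bytes after the handshake completes.
        response = transport.send(request)
    return response

class FakeTransport:
    """Simulates a server that may reject 0-RTT early data."""
    def __init__(self, accept_early: bool):
        self.accept_early = accept_early
    def send_early(self, data):
        return b"200 OK" if self.accept_early else None
    def send(self, data):
        return b"200 OK (replayed)"

assert send_request(FakeTransport(accept_early=True), b"GET /") == b"200 OK"
assert send_request(FakeTransport(accept_early=False), b"GET /") == b"200 OK (replayed)"
```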
N
Yeah, so Pat McManus. I'd like to flesh out that story. In Firefox 46, through a change I didn't fully understand the implications of, I disabled replay of POST on a broken persistent connection most of the time, and I can say without equivocation that the web does not work if you do that. I had dozens of blocking bugs, people wanting to release hotfix releases; it was not a good day. But I learned all kinds of curious ways in which the web does require replay of POST and everything else, some of which might amuse some of you.
N
There is a website out there that has a persistent-connection timeout on the server side of 250 milliseconds, which has got to be about the worst possible number one could choose, and which results in a collision, trying to use that socket, almost every single time. And yet the bug report says: works fine in IE, in Chrome (right, always, right) and did in Firefox until I updated this morning.
N
Depending on which URI you request from the site, it actually executes that under different permissions on the server side, and it doesn't know how to make that change on an existing persistent connection. So it just says, oh my god, I need to be somebody else, and closes the connection, assuming you will come back. Oh, and what's really, really fun about that is what happens then: you, of course, instead of making a new connection,
N
you're like, oh, I've got this other idle persistent connection, I'll use that one, and so you go right through your pool. And so unless you are willing to replay at least seven times, you cannot successfully connect with that server. So I think it was Adam Langley that said, you know: if TLS doesn't replay it, TCP will replay it; if TCP doesn't replay it, the application basically does the replay; and if the application doesn't replay it, the damn user is going to hit reload and replay it themselves. So it's going to get replayed.
L
So, to add to this point, I think there's one part of the component that's still missing. Basically, it comes down to this: the application is the only one who knows whether or not this data is idempotent. From TLS we can provide various defense-in-depth techniques to sort of limit the replayability of the initial data, but we need a mechanism for the application to tell the lower layers about it.
L
So at the socket layer it'll just be a write, but at the request layer, whether it's a mobile app or a browser, there'll be something on top of it which basically schedules the requests that get written out onto the socket, and that is what's aware of the application-level idempotency guarantees. So one of the proposals (hopefully we'll discuss it, at the HTTP Workshop or something) is adding a header field which says: this is retry-safe.
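A request scheduler acting on such a signal could look something like this sketch: methods HTTP defines as idempotent are retried by default, and anything else (notably POST) is replayed only when the application explicitly opts in. The header name "Retry-Safe" is hypothetical, since no such field has been standardized.

```python
# Methods RFC 7231 defines as idempotent (POST and PATCH are not).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def may_retry(method: str, headers: dict) -> bool:
    """Decide whether a scheduler may transparently replay a request.

    Idempotent methods are retried by default; anything else is retried
    only when the application explicitly marked it safe. "Retry-Safe"
    is a hypothetical header name used for illustration.
    """
    if method.upper() in IDEMPOTENT_METHODS:
        return True
    return headers.get("Retry-Safe", "").lower() in ("1", "true", "?1")

assert may_retry("GET", {})
assert not may_retry("POST", {})
# The application layer knows this particular POST is idempotent and says so.
assert may_retry("POST", {"Retry-Safe": "?1"})
```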
L
So if you have, like, a smaller data center or something like that, and you have a bigger data center (there was actually a Google paper about this a while ago), you could actually kind of improve your latency, because one of the data centers will have, like, not enough capacity. So if you retry at the other data center, you save the timeout from not getting the response back from the initial data center.
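That retry-across-data-centers idea can be sketched as a simple fallback: try the primary backend, and on timeout hedge to the secondary, which is only safe when the load balancer knows the request is retry-safe. The callables below are stand-ins; a real implementation would race the two requests rather than falling back sequentially.

```python
def fetch_with_hedge(primary, secondary, request: bytes, timeout_s: float) -> bytes:
    """Try the primary backend; on timeout, hedge to the secondary.

    `primary` and `secondary` are stand-in callables that either return a
    response or raise TimeoutError. Hedging like this is only safe for
    requests known to be retry-safe.
    """
    try:
        return primary(request, timeout_s)
    except TimeoutError:
        # The primary didn't answer in time; retry at the other data center.
        return secondary(request, timeout_s)

def slow_backend(request, timeout_s):
    # Simulates an overloaded data center that never answers in time.
    raise TimeoutError("primary overloaded")

def fast_backend(request, timeout_s):
    return b"200 OK from secondary"

assert fetch_with_hedge(slow_backend, fast_backend, b"GET /", 0.25) == b"200 OK from secondary"
assert fetch_with_hedge(fast_backend, slow_backend, b"GET /", 0.25) == b"200 OK from secondary"
```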
L
So if you retry it (and you can only do this if the load balancer knows it's safe to retry). What's that? Well, it also depends on your application semantics as well, yeah. So it's not really like our application semantics fit PUT; and also, really, with PUT in a multi-data-center distributed environment, you can't really guarantee the idempotency of PUT as well. So, I mean, from the HTTP-spec situation.
O
And given the somewhat surprising nature of POST, of the idempotent nature, or lack thereof, of POST: is that something that's worth the working group actually putting a document together on, to make it very clear that this problem has been punted up to the application layers, and that they should not assume that POST behaves the way it is defined, so that applications can plan appropriately?
E
The things that we're doing in browsers will be horrific from some perspectives, but we do all sorts of nasty things because we have to deal with the swamp that is the web. I think that there's probably value in having a document that says, you know, RFC 7230 says these things, and 7231 says these things about the idempotency of these methods.
L
I have, in the last four minutes, one more question. Another thing: for the idempotency properties and stuff we specify, would the working group be okay with specifying that this is only applicable for secure transports, and that you cannot use it for insecure transports and such?
A
There are a surprising number of protocol facilities where that decision has been made, so I certainly think it's something we could talk about. But yeah, I think we'd have to talk about the existing text in the HTTP RFCs to make sure that it's still correct, and, as you were saying, talk about potentially new protocol facilities to help applications get the desired properties more easily, and also to communicate that information down to the lower layers. So there's potentially a lot of work to happen here, but I think we still have to do a lot of discussion.