From YouTube: IETF99-HTTPBIS-20170719-1330
Description
HTTPBIS meeting session at IETF99
2017/07/19 1330
https://datatracker.ietf.org/meeting/99/proceedings/
A: Excuse me; so, some preliminaries. This is the Note Well statement. These are the intellectual property terms under which we participate in the IETF, and they are very important: one of the primary functions of a standards body is to give you well-known terms for intellectual property and antitrust. So please, if you're not familiar with this, you can use your favorite search engine for "IETF Note Well", but you do need to be aware of this. It affects your contributions here.
A: Another bit of policy which, unfortunately, doesn't get quite as much attention: we strive to have a harassment-free environment here. There is policy on this, in the referenced RFCs, and we have an Ombuds team. So if you feel you are being harassed, or you come across a situation that causes you concern, you can contact the kind people at the bottom, either in person (that's why we have them there) or using the email address, or you can talk to the chairs.
A: Can you handle the Jabber relay, or should we get someone else? You've got the Jabber? Okay, you've got the Jabber. Our agenda: we have two sessions. In this first session we're going to go over our active drafts, and then we are going to go over a few bits of proposed work that have been discussed.
A: If we have time at the end, we have a presentation on secondary certificates. Any discussion of this session's agenda? Just to give you a preview, the next session is exclusively about QUIC and HTTP, so we can hold all those discussions till then. Okay, all right, so let's go ahead and start. Our first discussion is of RFC 6265, the cookie spec, and we have the editor of that spec here: Mike.
C: Firefox shipped the same restrictions in Firefox 52, in March of this year. As far as I know, they're also planning on keeping this restriction in place. I haven't heard anything from either Apple or Microsoft about their implementations, but this is certainly something that is affecting large numbers of users and servers today, and I think it's something that we're actually going to be able to ship and keep in place. The second thing: cookie prefixes. There are two prefixes defined in the doc.
C: I think there was something like 16,000 Set-Cookie headers that we saw over the last month, whereas the __Host- prefix also doesn't see a lot of usage, but something like 0.005 percent of the Set-Cookie headers that we see carry a __Host- prefix, and I know of at least two session-fixation attacks inside of Google that this has actively prevented. So it is pretty useful; it's something that I'm happy we're shipping. Again, I haven't heard anything from either Apple or Microsoft, but Firefox and Chrome are both shipping it.
C: SameSite cookies shipped in Chrome 51 last year. Mozilla folks were also working on an implementation; it looks like that implementation has stalled, but there still seems to be interest in it from Mozilla. Again, I haven't heard from either Apple or Microsoft, but we're hopeful: this kind of thing seems to have a lot of popularity with developers. We see that something like 0.01% of Set-Cookie headers have already adopted this, which is actually larger than I would have expected, given that it's only implemented in one browser, and it provides enough protection that developers seem interested in defending themselves against CSRF using this attribute. So, with those implementations in mind...
Let's turn to the draft that is cleverly up on the screen in front of you right now. Long story short: I'm just slow; I missed the draft deadline for this IETF. I have a draft on my local computer that has all three of these specifications stuffed into it, which I'll upload as soon as those restrictions are lifted, so that you can actually take a look at it and give some feedback. One area in which comments would be particularly helpful regards the layering between the cookie specification and the Fetch and HTML specifications that are currently being worked on in the WHATWG. In particular, the SameSite portion requires more understanding of HTML and Fetch than I think it probably should have for this document.
C: How to layer against those other specifications is something that I'd love to get y'all's feedback on. I can imagine pulling things out of the SameSite portion of the spec and putting them into HTML or into Fetch, and then providing a set mechanism of some sort that would set the header based upon a set of attributes that are passed in. I can also imagine using the request concept from Fetch directly in the cookie specification, in order to read that kind of data ourselves. But getting feedback from y'all about what that layering should look like would be super helpful.
E: The draft is really there to establish a set of principles by which you might set policy, and so the policy that Chrome implements, if they start going down this path, will necessarily be a subset of the things that are described in the document. Over time we might see that change, and that's one of the risks that we have with writing this.
E: It may change to the point that we can actually make a more definitive statement about it, but for this intervening time we have to think about allowing user agents the ability to set policies, and providing some sort of guidelines and a framework for doing that. I think that's the intent of the draft, and I'm very happy to talk about what those policies might look like, or what guidance we might provide regarding that.
C: ...or whether you agree with that statement. I do agree with that statement. I think the direction of the document that you wrote is the right one, and I think what we need to do collectively is figure out ways to get there without serious negative side effects on the ecosystem, and I think we can do that.
C: That makes sense to me. I can basically see two paths. One is that we reference that document and use its concepts directly. The other is that we provide an entry point to that document, with a lot of booleans and parameters that get passed in, and allow the embedder of cookies (whether it be a user agent, curl, or something else) to decide what the meaning of those terms is.
F: Okay, hello. I'm gonna give a quick update on Expect-CT. The main thing is the next slide, which is that we're getting ready to ship an implementation in Chrome. This will be branching very soon; it's going out with Chrome 61, which goes to stable in September.
F: If you are a site operator who's interested in using Expect-CT, now would be a good time to start thinking about that. It would be great to have some sites using it as Chrome goes into beta, so that we can figure out if there are problems, figure out how it's working, and so on, before it hits stable.
F: Okay, so I wanted to give a quick update on some of the open issues, and hopefully neither of these is still open, but we'll see. One of them is fairly minor: I updated the reporting format to handle SCTs in both the RFC 6962 and 6962-bis formats, so it specifies how clients supporting either of these versions of CT can report their SCTs. The next open issue is CORS. We discussed this on the list a little bit.
F: The issue here is that Expect-CT reports, like many other types of reporting that browsers implement, arguably fall under CORS restrictions, which prevent web content from triggering arbitrary requests to servers that don't expect those requests.
F: We can take reports and stuff them into a request that does not need to be preflighted according to CORS, or we can just knowingly violate CORS; those are our options right now. What I have ended up doing in the draft is leaving it up to the client to decide what makes sense. Obviously: does it make sense for a non-browser client to send preflights? And browsers can also reasonably disagree on whether they think these requests are subject to preflight.
F: In Chrome, what we're doing right now is sending preflights, but my plan is to take this up with the Fetch spec and see if we can carve out some kind of exception that makes sense for reporting requests, because the fact is that there are a number of reporting requests, and other kinds of browser-generated requests, that violate CORS right now and will continue to do so for the foreseeable future.
E: So, Martin Thomson. I don't know that we ever really got to the end of this one; I think there was a lot of talking, but we never really came to any conclusion, and I would like to reach some sort of conclusion. Can you explain to me why it's not possible for the origin of the response that contained the Expect-CT header to be the origin of the pre-flight?
E: If they were the only ones that could receive it, that would not be a cross-origin request in that sense, because victim.com is receiving reports at victim.com; it's the one setting the header, and it's not a cross-origin request. I think your modeling of this, as though the HTTP request that causes the certificate validation to occur is the agent triggering the report, is incorrect.
F: And by the way, after thinking about it a little bit more, I don't actually think the null origin is that big of a hack. It's fine from a spec perspective as far as I can tell, and it is safe as far as I can tell, in the sense that it does require the server to opt in; so unless the report-collection server is an intranet server, I think it's fine. But I see what you're saying, yeah.
E: So the alternative here, which still ends up with two requests, is for the person setting Expect-CT to set up a resource for reporting on their own site; then you never have a cross-origin request from this perspective, and you don't have a pre-flight. If they do want to use an external service, they can use one of our wonderful redirects, and we'll POST across to the external service.
E: Yeah; Martin Thomson. I would like to see at least a plan for a solution, whether it be here or in some other document. If it is entirely appropriate for this document to say basically nothing about CORS, then that's fine, but I don't want to ship this thing and then find out that the conclusion of the discussion on the Fetch spec is that this document had to do something.
A: My response to that is that I would ask for a little bit of Julian's patience: if we can come to a solution as a working group, it would be nice to have one way to do this instead of two. But let's maybe give it a little more time. Michael, you're on the air, once you press the button.
J: Okay, so I would agree with the goals as you stated them. I would say that, from discussion with our dev team, we really, really don't want to be including a JSON parser that low in the stack. If we have to, we will, but I'd prefer we not go that route, and I like this draft a lot better; it's more in that direction.
C: One thing I would note about the JSON parser: I understand that putting a JSON parser into the net stack seems strange and difficult. One nice thing about JSON is that, because we have implementations, and because those implementations are exposed to the web, they've been very well fuzzed, and we have a very good understanding of the security properties associated with them. Given that, I don't have that much trepidation about using the existing JSON parsers in places where they aren't currently used, because, again, I think they've been very well fuzzed.
K: I am opposed to using JSON, since even in the most popular JSON parsers we found incompatibilities that result in different interpretations; and since this is about headers, a misinterpretation, or disagreement about how, for example, a floating-point number should be interpreted, might result in a security issue. So I'm strictly opposed to using JSON, and I favor using this HTTP header common structure as a basis.
E: So, Martin Thomson. One, I'd caution against getting too much into what we would prefer to do, A or B; I think we've actually heard a lot of those comments before, about Julian's draft and about this draft. I think part of the problem with the current draft is that it originally came from a place where it was being a little more aspirational in its goals, and a lot of that legacy remains in the document.
A: And I think one thing we can do: we started the discussion on the list about these goals; we probably need to finish that and get consensus on them. I don't know that we're in a place to get that consensus right here and now, because we need to articulate them a bit better, but let's go ahead and do that, and then see where we sit. Right; anything else on this one?
K: Hi. So let me explain the changes that have been made to the cache digest draft. There have been three changes. Well, let's talk about the last one first: the intended status has been changed to experimental, as we discussed in Chicago, since browsers are unlikely to have support for cache digests in the near future. Instead, we've added a definition for the cache digest header, considering it something to experiment with.
K: The cache digest header is an emulation of the cache digest HTTP/2 frame. It's not as optimal as using a frame, but it kind of works, and it's already implemented in Apache and H2O; there is also a Node module for it. There are some small-scale deployments now that already have this activated. The header looks like this: it's basically a base64 form of the digest value and some flags, so there's really a one-to-one match between the frame definition and the header.
K: Next, please. We also have a new feature, which is actually an HTTP/2 setting that allows the server to tell the client whether it supports cache digests. The issue we thought existed was that, since the client just needs to be sending the digest in 0-RTT, there was no way for the server to notify the client whether it wants the digests or not. However, we've noticed that since the TLS 1.3 full handshake lets the server speak first, it can use the 0.5-RTT data.
K: On the other hand, 0.5-RTT is only available in a full handshake, so for 0-RTT resumption the client needs to remember whether the server supports cache digests, and have that information associated with its TLS session cache. While that might sound complicated, my understanding is that TLS stacks would have that kind of support, I mean associating extra data with the session cache, since that is also a requirement in QUIC.
A: And so, I guess, the question is this: we've been letting this draft hang around for a little while to see if a browser would implement it, and we don't seem to have anyone prioritizing a native browser implementation, although, as Kazuho mentioned, we do have some header implementations. So if we don't get any information about that, I think it makes sense to take it to last call and go as experimental. DKG, hi.
E: This solves some of the problem with server push, but not all the problems, which we've discussed at some length in other venues; and to the extent that it fixes the problem, it is equally encumbered by the other problems that are attached to server push. Until we understand more, we can't really say whether or not this is the right solution, and that's why I think we should publish it as experimental. And though it's been sitting here, it hasn't changed in some ridiculously long time.
L: Brad Lassey, Google Chrome, just to answer the question of what our status is: we're interested in this, but it's not prioritized. The biggest problem we see is that it's quite hard to implement within our current cache implementation, and we're not sure that this is the right solution to server push's problems. So if this doesn't have a high probability of being required, we don't want to put in the investment to rewrite our whole cache to support it. That makes sense.
N: All right, so I'm here to give a quick update on the random access live draft. At the last working group we had actually requested comments or feedback from the working group. We haven't received any specific concerns or comments; I'm assuming people have probably read it and found no problems at all, or people haven't read it. But the other feedback that we had received was to actually just try out...
N: ...you know, the protocol, the idea, and make sure that it actually works with caches and intermediaries and doesn't have any problems there. So, next slide. On that note, we are actually working on building a test framework, so we have our client and server sort of ready. We were hoping to finish this before this IETF, but we are a little bit behind, so we'll still finish it and send out our observations and results. Our hope is that the answer is...
E: Oh wow, that's right. All right, so one of the things we've been doing in TLS is this 0-RTT thing that has everyone in a flap, and one of the things that that document requires of us is to explain how it is that you use your protocol with 0-RTT before you actually start using it. Full disclosure: we may have deployed this already without any of the measures that are in this document, but...
E: ...such is the nature of the pre-release channels that it has been deployed on. I think we've probably claimed that there's an experiment out there right now; it just happens to be rather large. Next slide, please. The primary risk here is that 0-RTT lets the client make requests while the TLS handshake isn't done. As a consequence, there's no fresh state from the server mixed in, and the request can effectively be replayed. Now, TLS mandates that you do some anti-replay stuff, and...
E: ...it's imperfect, what can I say. Next. So this is what the draft does and says. It says that the TLS connection is modeled as a single stream. There was a lot of debate about this in TLS as to whether you would have separate compartments for the early data and the other stuff, with some sort of clear delineation between the two of them. Practically speaking, that doesn't work for HTTP, and I'm not sure it works for many TCP-based protocols.
E: It may work for things that use datagram protocols, but we're not done with DTLS 1.3 just yet. The document then contains advice: some basic guidance on what to send in 0-RTT and, on the receiving end, what you might want to do with it and how you would deal with it. And then there's some discussion that we had in a workshop about intermediaries: we realized it would be really nice if we had a couple of mechanisms for intermediaries, and we defined some of those. Next, please.
E: The other thing is that we actually mandate an automatic retry of the request if the 0-RTT is rejected. We talk about the fact that you might decide not to send a request anyway, but in the general case, if you made the request in 0-RTT and the 0-RTT is rejected, then you will make the request again; that would be the default operation.
E: ...unless someone has used one of the new methods that we're providing to cancel the request in the meantime; I think that's probably the only out that we have. One thing it's important to recognize: this enables an attack, and the draft explains that, but it's a one-time-only, visible thing, and we are being very careful to distinguish the effects of retries from the effects of replays. The distinction here is that a retry is something that the client does, with full knowledge of...
E: ...what's going on; a replay is something that an attacker does with the packets on the network. This is language that's being used in TLS as well. Next: advice to servers. The first thing is: please consider whether you want to enable 0-RTT at all. This is something that TLS does not necessarily enable by default; you have to turn it on, and we have some advice on that. And then it says: whatever you do before the handshake completes...
E: ...that's the risky stuff. The assertion in the draft, and the model that we have sort of adopted for this (not formally verified, disclaimer), is that if servers always defer processing of a given request until after the handshake completes, then they will only ever see that request once. That's an assertion; like I said, I'm not sure about that. And so the recommendation following on from that is: if you're not sure whether something is safe, wait. This has some performance downsides.
E: But the consequence is that when you get these messages, you have some certainty. And the performance downsides don't completely eliminate the performance gains from 0-RTT; the opportunity cost of having this slot is not completely wasted. You can send all the bytes, and then the bytes are ready on the server, ready for processing, and you can even do some basic parsing, as long as that...
E: ...parsing is content-agnostic, side-effect-free, and all the sorts of things we've talked about. So there are still some benefits to having the data arrive without actually doing anything about it. We talked about that as well. Next, please. Intermediaries: this was the interesting discussion. An intermediary really can't decide what's safe based just on the knowledge it typically has.
E: Deferring processing at the origin is difficult, because it doesn't know when the handshake completes, and doesn't know if that signal never arrives. So the idea here is that the header field is only really used by intermediaries. If a request arrives in 0-RTT, you know that the person sending it would have included it; the header is basically implicitly present by default, and that allows us not to worry too much about annotating every single request that we ever send, and saves a little bit of space.
E: The reject mechanism allows the server to force requests out of 0-RTT in the unsafe cases. It means that if the server is unwilling to accept the risk of a replay, it can tell the client: hey, this is not cool, try again. There's a risk here, again, that if this processing is not consistent, you've created a side channel.
E: A back-end that doesn't implement this will not be able to reject things appropriately: the message arrives marked with Early-Data by the intermediary, who supports this draft, implements it, and deploys 0-RTT, but the back-end server doesn't even know to look at the Early-Data header, and so it processes the message without knowledge of the risk that has been taken. That could be exploited in some fairly nasty ways. We don't really know any way to avoid this particular problem; I mean, once the request is being forwarded...
E: ...then it's being forwarded, and there are some things we can do, but I think this is probably the best solution without making the request completely incomprehensible. We could ROT13 the entire request if it came in early data and put the Early-Data field up front, I suppose; we could do some serious damage to the protocol if we wanted to make this mandatory, but I think this is a reasonable compromise.
E: I specifically used the word "gateway" as the term there, because I don't see this being useful in the forward proxy case. The client connects with 0-RTT and sends a request to the forward proxy (you're going to do that anyway), and the forward proxy annotates it in this way, but it's got no idea whether the servers it talks to are going to be able to accept this and understand the new header field.
E: ...and can do whatever you want there; yeah, right, I don't think anyone uses forward proxies with this. So 425 is one of ours. We have this design pattern that we've started to use a lot more often, which is an explicit permission to retry something, and it improves reliability no end. Importantly, here it's an automatic retry, no matter what the request is. This means that clients can be more confident in attempting 0-RTT, because any server that accepts 0-RTT will understand...
E: ...this, and will reject something if the server believes it's unsafe. So if the client thinks it's safe, then it can attempt the 0-RTT. There was a risk, without these mechanisms, that clients would not do this, because they couldn't know that the servers were able to understand it, particularly in the intermediated cases where you have CDNs involved, you know, the reverse proxies, the gateways. And so it becomes a mutual thing.
E: Both sides have to agree that it's safe to do this before anything happens. Next: the example. You can see in the example: you make the 0-RTT request, it's annotated by the gateway with the new header field, the server goes "eh, not for me right now" and sends back a Too Early status code, and then the client waits for the handshake to complete and retries the request. Pretty straightforward.
O: A question: one thing which feels kind of weird about this design is that we don't actually have a standardized mechanism to communicate between gateway and server whether a connection was secure or not, as far as I'm aware. So the Early-Data header feels a little bit weird in this context, because without it you don't know whether the data was actually secure or not.
P: For example, as a browser, if I'm sending a request in early data, is it valid for me to actually include this Early-Data header myself? That would be really useful, since without it the terminating proxy has to have a separate API to receive early data, to know that this data came over early data, because it's a single stream, like you said.
P: The same argument can be made about receiving data as well, and about data that straddles the two; there's the two-tiered stuff. So I had assumed that a reasonable thing would be: if I have at least some data on this request serialized over 0-RTT, I would include this Early-Data header, and that would help terminating proxies not need the separate API for receiving 0-RTT data. If they want to do something more fancy, they can, but you don't need a separate API.
D: The decision on the server side of when to send the 425 is, I think, the least understood part of this; the part of this that I understand the least, at any rate. I think the semantics that we want for that are: you, as a server, must know the complete architecture of your entire operation, and you must be able to identify, as an origin server, the set of requests that are not permissible to be sent in early data, that would be damaging if they were redone. And so when it's sent, you're effectively saying a sort of "permission to retry".
D: What you're actually saying is that there's no way this thing could do damage if it was replayed somewhere else. It's tempting to read the semantics as "okay, I received this, but for whatever reason I'm not going to handle it; therefore it's safe to go ahead and retry", which is distinct from saying "the request that you sent is not something that should have been in 0-RTT in the first place". You see what I'm saying. What we don't want to do is...
D: ...we don't want the origin server to go "okay, I received this; I'm not going to process it because it could have been replayed; therefore it's okay to go ahead and retry". There are two slightly different meanings there, yeah, and it would be really bad if we gave origin servers the wrong guidance, yeah.
A: ...partly because that's active work already. So let's talk about ORIGIN. Do I need to update?
G: I don't believe there are any open errors, primarily. There is some discussion that has been going on regarding a few things that Lucas brought up, and if you want, Mike, time to address those, I think that's fine; but the primary issue in the document has been this clause. Other than that, we are, I think, fairly close to wrapping up the ORIGIN frame extension. The clause reads: clients MUST NOT consult the DNS to establish the connection's authority for new requests.
G: There are a couple of different schools of thought on this. So, if you want to move to the next slide: I think there is consensus that the existing DNS provision in RFC 7540 is a weak second factor involved in, you know, establishing a connection. There is, however, disagreement in the group about just how valuable a factor that is, and so the discussion has been leading the chairs to ask this question here: is the substitution of a different, more performance- and privacy-friendly second factor, or factors, appropriate in the ORIGIN extension...
A: ...document? Not that slide. So, from my perspective, I'm looking, as an editor (I'm an editor on that document as well), for a way forward on it. And as I commented on-list in the last day or two, historically we haven't specified the exact stack of specifications that you use when you're verifying a new certificate for HTTP that has resided elsewhere. The IETF defines a selection of those mechanisms, but we don't say in HTTP itself.
E: So, Martin Thomson. I think this falls into the territory of: what does a client do to decide that a given server is acceptable for a given name? There is some commonality there, but there's also quite a bit of wiggle room. Our certificate transparency policies diverge, the way that we validate certificates is in some cases a little bit different, and we have different trust anchors in some cases.
E: I do think that there is value in having the "MUST NOT consult the DNS" statement in here, and retaining it, because I don't want servers to be in a position where they can't rely on this mechanism for particular properties; and the property that I care about here is not making additional DNS queries. Certainly, in the current environment, making those DNS queries exposes certain information to others that I would rather not have exposed.
A: Just to respond to that: I get a little uncomfortable when you talk about servers relying on that property, because that's not a design property that we have. It's possible, you know, in error handling, in transient conditions, for a connection to close, or for an origin to be popped onto a new connection, and all of a sudden it is exposed to the network. And if you're planning on not being exposed to the network, that seems like a much higher bar than we've currently designed for.
Q: Right! So it's already the case that you cannot rely on this property, right? And similarly, any client which decided it didn't like the ORIGIN frame would then make these DNS requests. So, as far as I can tell, regrettably, I mean, regrettably, this is a best-effort kind of situation, unless we're going to create some indication for the client that says "I promise to speak ORIGIN and I promise not to do DNS cross-checks". Those are the options, as far as I can see.
Q
D
This is dkg. I also don't see how, without some additional mechanism, and I'm not even sure what the mechanism would be, we can make this a MUST, although I agree that it would be nice to make it a MUST. The statement of the problem that's on the screen right here I think is problematic, because it says "existing DNS provision," but it's really about the existing DNS-over-the-local-network provision. There are multiple ways to get DNS data.
D
I think if we're going to frame the problem like this, we should be clear that we're talking about the network path being the additional weak second factor, not the fact of DNS data, period. So, for example, I could have some additional out-of-band channel that gives me DNS information, or I could have an in-band channel that gives me DNS information. Not to bring that question before this working group, but it's not about DNS data as such; it's about what came over the local network under the provisions of 7540.
A
J
If we want to say that clients may do other things, may omit the check, and have some guidance as to what you might look for before you make that decision, I think that's probably a smarter path. I'll also repeat what I said in the jabber: later today we're doing an adoption call for a draft that says the exact opposite, that the client must still do DNS, so we need to reconcile this one way or the other in both drafts.
Q
Yeah, so I guess two things. First, I think that's only partially right. The second factor is being on the network path that is allegedly associated with the DNS, and that would be a weak second factor even if the DNS were entirely trustworthy, say you got it over DNSSEC.
Q
The issue is all the network between you and the website: whether the traffic it is attempting to give you, even if we trusted the DNS entirely, is being routed to some entirely third location which is not associated with the origin you care about. The second thing I was going to note is that we actually can take three postures. One posture is the one that's in this draft, which nobody likes very much.
Q
The other is to sort of encourage people to do some other second factor, without saying what that really is, but saying there ought to be one. And the third is to say we take no position on, or are even hostile to, a second factor. This is sort of very weaselly text; I think it might be better simply to document exactly what the assumptions are and let the implementations decide how to behave.
Q
My sense was that the people on the mailing list resisted the idea that you should not do a second check, and so for us to say "don't do a second check" would be problematic. On the other hand, there were people in the discussion who thought a second check was silly, and that makes me less enthusiastic about recommending a second check, especially when there appear to be…
O
Okay, Victor. There is one thing which my experience with browsers and socket pools has taught me: they cannot promise you that they will handle requests in a certain way. They will try to handle them in an optimal way, but who knows how they will actually be handled. So I do not believe that, from the perspective of user agents, this is actually a viable strategy.
O
The assumption of not having DNS queries is not viable, so I side with ekr: document that there might be a second factor required by the user agent, and document clearly what happens when the user agent declines that second factor, whether it declines explicitly or implicitly, but leave the specific policy up to browsers.
R
Thanks, Victor. [Name unclear], Google. I would rather see us decide what the second factor should be, especially since some of the considered options would require changes to the format of the ORIGIN frame. For example, if you want to staple a DNSSEC response and send it inside of the ORIGIN frame, that has to be part of the draft; you cannot just hand-wave it and say "try to do whatever."
S
G
T
U
So the most conservative thing to do would be for the ORIGIN frame to not relax that requirement, and then for us to look separately at building a better and more comprehensive story on how we might want to relax the DNS requirements, in a way that can build up a comprehensive privacy story.
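The postures being debated can be made concrete. The sketch below is purely illustrative and not taken from any draft: it shows a client deciding whether to route requests for a host over an existing HTTP/2 connection on the basis of an ORIGIN-frame advertisement, with the contested DNS confirmation as an optional second factor. All names here (`Connection`, `may_coalesce`, `cert_covers`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Connection:
    peer_address: str        # address this connection was made to
    cert_names: set          # DNS names the presented certificate covers
    advertised: set = field(default_factory=set)  # hosts from ORIGIN frames

def cert_covers(cert_names, host):
    # Simplified exact match; real clients apply RFC 6125 rules (wildcards etc.).
    return host in cert_names

def may_coalesce(conn, host, dns_lookup=None):
    """Decide whether requests for `host` may reuse `conn`.

    With dns_lookup=None, the client trusts the ORIGIN advertisement plus
    the certificate alone (the "MUST NOT do DNS" posture). Supplying a
    resolver adds the weak second factor discussed above, at the cost of
    exposing `host` to the resolver and the network path.
    """
    if host not in conn.advertised:
        return False
    if not cert_covers(conn.cert_names, host):
        return False
    if dns_lookup is not None:
        return conn.peer_address in dns_lookup(host)
    return True
```

Either way, the certificate check never goes away; the debate is only about the DNS confirmation on top of it.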
E
G
So I think the chairs will have to huddle and reread the minutes here, but thank you all for the comments. Adam Roach, are you still in the room? Okay, well, based on that, we're going to do the discussion of BCP56bis next, and hopefully we'll talk about [unclear] as time allows; in our next session we'll try and carve out a little space for that. We appreciate Adam making time to be here for this. Okay.
A
All right, so this is another proposal, for a document that I've been working on in the background for a little while, called BCP56bis. The original BCP 56 was on the use of HTTP as a substrate: basically, how do we use HTTP well when we use it with other protocols that are defined inside the IETF. As you can see from the datatracker, it was done in 2002.
A
It had just a couple of drafts from Keith Moore and was published way back then, and this is the abstract, which I find somewhat amusing now. It turns out we now have a lot of interest, quite widespread interest, in using HTTP as a substrate for other application-level protocols. I have a cron job on one of my boxes that emails me and Patrick every couple of weeks and lists all of the IETF documents in working groups that reference HTTP, and this is the current run of that tool.
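The core of a report like the one described can be approximated in a few lines. This sketch scans an in-memory set of draft texts rather than the datatracker itself, and the pattern it matches is only a rough heuristic (the function name and RFC list are illustrative assumptions):

```python
import re

# Rough heuristic for "references HTTP": the protocol name, or one of the
# core HTTP RFC numbers (7230-7235 for HTTP/1.1, 7540 for HTTP/2).
HTTP_REF = re.compile(r"\bHTTP(/1\.1|/2)?\b|\bRFC\s*(723[0-5]|7540)\b")

def drafts_referencing_http(drafts):
    """Given a mapping of draft name -> text, return the names that
    appear to reference HTTP, sorted for a stable report."""
    return sorted(name for name, text in drafts.items()
                  if HTTP_REF.search(text))
```

A real version of the cron job would fetch current working-group drafts (for example via the IETF Datatracker) before applying a filter like this.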
A
There are lots of people using HTTP in different ways, and this doesn't actually list them all. New working groups are popping up all the time saying, "oh yeah, we'll use HTTP for that," and I don't have time to review them all and give them advice. Julian, I'm sure, doesn't have time to review all the header fields that they create; he is our bottleneck for header field syntax, as well as other things, of course.
A
There's a lot going on. We're seeing an explosion of this, not just in the outside world but in the IETF specifically, and it's happening in the outside world as well. HTTP APIs are used for everything, including cat GIFs. And the problem that I see, or maybe "problem" is too strong...
A
One of the issues is that people design an API for their own server to deploy, to serve cat GIFs via API, and they don't think about the implications of taking that API design for one HTTP server and scaling it out to multiple HTTP servers, multiple implementations with different versions and different extensions, and the coordination problem that entails. It requires a different kind of protocol, but they're still using the techniques that they learned deploying their single-implementation, single-deployment API.
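One concrete flavor of that coordination problem is extensibility: once an API has many independently versioned deployments, clients must tolerate data they don't recognize. A minimal, hypothetical illustration of the "must-ignore" discipline that single-deployment APIs often skip (the field names are invented):

```python
def parse_resource(obj):
    """Consume only the fields this client version understands.

    Unknown fields (added by newer servers or extensions) are set aside
    rather than treated as errors, so independently upgraded deployments
    stay interoperable.
    """
    known = {"id", "url", "title"}
    resource = {k: v for k, v in obj.items() if k in known}
    ignored = {k: v for k, v in obj.items() if k not in known}
    return resource, ignored
```

A client built this way keeps working when one server in the pool starts emitting new fields before the others do.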
A
On the other hand, it's very easy for an effort to give this kind of guidance to turn into a bit of a crusade, to say "thou shalt be RESTful," and I really want to avoid that minefield. I want to have advice that is useful to people, based upon the experience we have as a community using HTTP as a substrate. That is the line I'm trying to walk in this document, and it's very embryonic right now.
A
It's very bare-bones right now, but the question I have for the working group, as the editor of that document, is: are we interested in starting work in this area again? The hallway chats I've had personally suggest that people recognize it's way past overdue to revise this advice, because the world has moved on considerably since 2000 and 2001 in terms of how we use HTTP. So I wanted some feedback as to whether people are interested in working on this and giving feedback.
A
M
Jonathan [surname unclear]. I think it makes complete sense to consider the things that you would need to do to make HTTP a good substrate, if that's part of the goal of this document, and I think it's absolutely appropriate. In particular, when something becomes a substrate, which HTTP has, it ends up baking things into it which people don't realize don't scale widely to large numbers of applications.
M
Latency is one of those things: when there's a tiny bit of latency baked into the substrate, things that build on top of it end up having deeper and deeper latency. That's not a good thing to have, so being able to document all of that stuff is very useful. At the same time, I wonder if this is the community that would be in a position to do that.
M
What I mean by that is that the people who are actually using HTTP as a substrate are likely to be folks who don't care about exactly what is going on underneath; they are more likely to simply grab a library which they can use to talk easily to the other side, use the surface of it, and use HTTP without necessarily caring about exactly what's going on underneath.
I
A
I think that's true for folks outside the IETF, but we still have folks come here who want to use HTTP as the protocol, and JMAP is an excellent example of that. We spent some time with JMAP this week talking about their use of the protocol, and for me that was an exciting thing to do, because we learned that they had requirements that we weren't meeting, and so now we can start to think about how we could possibly meet those requirements.
A
At the same time, we can give them some guidance on where they are able to use existing things in HTTP. We can guide them to make sure that they're not breaking other stuff, and that they're using it well. Certainly there are folks in this working group for whom this may be too high a layer, too much semantics, but there are also a lot of people in this working group, I believe, who have a lot to say about this and have the right experience to do it.
E
On the general goals, and particularly the way that you stated them, I'm going to push back a bit against what Jonathan said here. One of the core things that I think this document needs to say, if not directly then at least cover, is that HTTP is not a dumb transport protocol; it's an application protocol, and the consequences of that are what the document should explore in great depth, and that's valuable.
E
To the extent that HTTP serves the needs of people, we should make it better suit those needs, but I'm not interested in having the discussion about how you might use it as a multiplexing protocol, or as a substrate for tunneling and various other things over the top. I think that's just one of the ways in which it...
F
E
H
Adam Roach. I wanted to jump in here, mostly because the example of JMAP is interesting. They're kind of a motorcycle with a relatively small crowd following behind it. We have a freight train called 5G that has already said they're looking at HTTP for the communication between their services; they're basically breaking their monolithic service down into what looked to me like microservices, and they've already strongly signaled that they're likely to want some changes here.
G
V
Alexey. As an area director, I review quite a lot of documents that try to use HTTP. So having a single document, partially as a checklist, partially as somewhere to send people to say "go fix these things," would be very useful for me. Thank you.