From YouTube: IETF115-HTTPBIS-20221111-0930
Description
HTTPBIS meeting session at IETF115
2022/11/11 0930
https://datatracker.ietf.org/meeting/115/proceedings/
A
We'll get started in just a minute. We had the wrong room on the agenda for a little while, so we're just going to give people an extra minute or two to get over here, just in case they didn't realize.
A
This is the HTTP working group. I am one of your chairs, Mark Nottingham. Our other chair is remote. Tommy, are you with us?
A
You're doing well, you're doing well. Thank you. Thank you, happy Friday. So let's go ahead and get into it. I hope by this point in the week you're familiar with this: it is the Note Well. These are the terms and conditions under which we participate in IETF work, regarding things like intellectual property, code of conduct, privacy and so forth. It is important; we do take it seriously. So if you're not familiar with this, please do take a look at it.
A
Just a reminder again: I hope by this point of the week everybody's aware we do have a mask policy for this meeting. If you are not speaking into a microphone or eating or drinking, please keep a mask on, and please do those things minimally. You generally don't need to take your mask off to talk into the mic, for example. Can we have a volunteer for scribing?
D
A
Thank you very much. The link for the notes is at the top of the agenda; it's also on the agenda page on the Datatracker. You can go in there and edit those, and if folks would help out with the minuting, that would be much appreciated.
A
That was Monday; today we have done this bit. First up is resumable uploads, for about 25 minutes; it's a relatively new draft for us. Then the retrofit draft, and then the query draft, and then we'll go from our active drafts to the other topics. David Schinazi already presented unprompted auth (it was so long ago) on Monday, therefore allowing himself to sleep in this morning. Congratulations, David, on your strategy.
A
Then we'll skip to origin deployment, then talk about modern HTTP proxies from Ben Schwartz, and then finally, yep, finally, HTTP authentication with SASL. So we very well may not use the entire slot; we shall see. Any agenda bashing?
F
Thank you, hello and good morning. Is that talking distance fine? Is that okay? Perfect. Welcome to the last day of IETF. I'm going to talk a bit about resumable uploads. My name is Marius Kleidl, and we've been working on this fairly recently. Next slide, please.
F
Right, before I get started with these major issues, I want to do two things briefly. We have been working on a small example implementation of this in the last couple of days. I've been told the IETF is about running code, so we provide running code.
F
Maybe Jonathan could paste the link to it in the chat, so if you want to check it out, feel free to do so. It's a small server implementation using Go, and it demonstrates a few of the things that we've been thinking about. It's more intended as a proof of concept, but if you want to take a look at it, feel free to, and leave a few comments. Other than that, I would like to get started with a relatively brief overview of how resumable uploads are currently intended to work.
F
So we are all on the same page about that: in HTTP we have had resumable downloads for quite some time, but not really a way to do resumable uploads in a standardized way. Every vendor implemented it in their own way, but they're mostly conceptually always the same. In the beginning you have an upload creation procedure, or something where the client tells the server, "hey, I want to do resumable file uploads," and after that you have additional requests which actually transfer the data.
F
In our case we call that the upload appending procedure, because an upload is basically an append-only file that you only push data to, but never write at different offsets. There's also the offset retrieving procedure. Whenever the upload connection gets interrupted, or the user pauses and you want to resume the upload, you want to know how much data the server has received. So you can use the offset retrieving procedure to query the server: hey, how much data do you have?
F
What else do I have to send you? And that's basically the flow of resumable uploads, as we've currently written it down in the draft. However, there are still a few major issues that have to be discussed, mainly these four. There are a few more, but I think these are the most important ones, and if we can get some feedback or some agreement about these, that would be a really great outcome.
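The three procedures just described (upload creation, upload appending, offset retrieving) can be sketched with a toy in-memory model; the class and method names here are illustrative, not taken from the draft:

```python
class ResumableUploadServer:
    """Toy in-memory model of the three procedures described above."""

    def __init__(self):
        self.uploads = {}  # upload id -> bytearray of received data
        self.next_id = 0

    def create_upload(self):
        # Upload creation procedure: allocate a new append-only resource.
        upload_id = str(self.next_id)
        self.next_id += 1
        self.uploads[upload_id] = bytearray()
        return upload_id

    def append(self, upload_id, offset, data):
        # Upload appending procedure: data may only be appended at the
        # current end of the upload, never written at arbitrary offsets.
        buf = self.uploads[upload_id]
        if offset != len(buf):
            raise ValueError("offset mismatch: client and server disagree")
        buf.extend(data)

    def get_offset(self, upload_id):
        # Offset retrieving procedure: "how much data do you have?"
        return len(self.uploads[upload_id])


# A client resuming after an interruption first asks for the offset,
# then appends only the remaining bytes.
server = ResumableUploadServer()
uid = server.create_upload()
payload = b"hello world"
server.append(uid, 0, payload[:5])       # connection drops after 5 bytes
resume_at = server.get_offset(uid)       # client queries the offset
server.append(uid, resume_at, payload[resume_at:])
assert bytes(server.uploads[uid]) == payload
```

The key property of the flow is that the client never needs to resend bytes the server already has; it only needs the offset.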
F
The first one is the most major one. It's about server-generated URLs versus client-generated tokens. The question at hand here is: how do we identify the uploads? What approach do we use? The current draft uses client-generated tokens. That means the client generates a token on the client side and then includes this token in every request that it sends to the server regarding this upload.
F
You can always retry the request, because the client always has the ID of the upload, so it can always associate it with that. But on the other hand, it of course breaks a bit with the standard procedures that we have in HTTP. The server is not able to influence this ID; it only has to take care of the upload token that it's receiving. So there has been valid criticism of this approach of these upload tokens, and therefore we have a proposal at hand to use server-generated upload URLs.
F
So the server responds with the upload URL in two ways. It can send it using a 1xx informational response, which is really great to get it to the client as soon as possible, and it also includes it in a 2xx response. The thing is that in the upload creation procedure it would also be great if you could already include some data, to minimize the number of round trips that you have to do.
F
The thing with server-generated URLs is that if we use another element, the idempotency key, which I'm going to talk about on the next slide, we can achieve pretty much the same results as with the client-generated upload token: namely, we have a lot of retryability and we can upload small files in a single request.
F
That's only a brief overview of this entire discussion, and now this is the question to the audience: what do we prefer? Do we want to use client-generated upload tokens? Do we want to use server-generated URLs?
G
First, I actually have a question regarding this slide. It says server-generated URLs versus client-generated tokens; before you go on, quickly: the original title was upload tokens versus client-generated tokens, so did it change? I mean, did the discussion change recently?
F
Yeah, the idea would be that the server generates a URL, and that would encode some kind of token.
G
All right, thank you very much. I think my comment regarding this discussion in general would be that I prefer server-generated tokens, because with client-generated ones there are issues like conflicts, for example, and regarding how we embed them.
A
Go ahead, Jonathan.
H
Hi, Jonathan Flat, Apple. Thanks, Marius, for the great work on this; it's really exciting to see.
H
I was originally on team client-generated token, for the reason of being able to send all of the data in the first HTTP request, and being able to easily fall back if the server doesn't support it. But I see that there's great benefit in using these server-generated URLs, as long as we have those 1xx responses to convey support, so I definitely support this implementation with the server-generated URLs. I also think it's a little bit more like HTTP: getting that Location URL and then using that, as provided by the server, to contact it again if you need to resume. So overall I support it, thanks.
I
I was a little confused, though, about how this would interact in your mind with things like ETags, because it turns out that if you're creating new URLs you're in essence minting new ETag spaces. When I was looking at this and trying to model in my head how the fetch portion of this was going to work, if you're fetching an offset after an upload, I wondered whether you intended to use those at all. Because if you do, then I think I prefer the server-generated upload URLs; but if you're not, then I think the client-generated token is equally good and I don't have an opinion. So I wondered how you were modeling those, especially in relation to this and to the idempotency keys.
F
Is it about ETags? Yes? Okay. To be honest, we haven't considered ETags that much yet, so I don't think I can provide the answer for that yet, okay.
I
So let me just say, then, that if you do do the server-generated upload URLs, every time you do one of these fetches to see whether the client and the server have the same sense of the offset, you are generating a new ETag space (if you use ETags at all), because it's a new URL. I think you might want to keep that in mind as you consider this.
F
Maybe, if I understand correctly, the server-generated URL won't change during the upload. So even if you make a request to retrieve the offset, the URL would still stay the same.
I
If you never change the upload URLs in the course of this, you do simplify things, but I think you actually lose some of the power of that approach.
J
I was just going to get up to say plus one to all the other people. I think the point about whether this is just a token or a URL is pertinent, but in thinking about Ted's question, I'd probably lean more towards the URL.
J
I can help with Ted's question. I don't know if this is the right way of thinking about this one, but when you make an initial request to do an upload, you do that to a particular resource, and you may have some intent with respect to that resource, and there may be some semantics associated with that one.
J
It is a little different when you're talking about a resource that accepts POSTs or something like that, and you're trying to provide... we can put you in the queue, Ted.
How do I add extra things when I come back to it later on? And, to Ted's question about ETags: that resource potentially has some implications for ETags, but I don't think that we're looking to really provide all of the caching semantics on that resource, other than just to say that it's a thing that has this upload offset state.
I
Ted Hardie; apologies for not being in the queue. That was very helpful, but I think one of the things I've been wondering about with this (obviously I'm also involved in MoQ) is how this would interact with the kinds of resources where somebody is posting to a resource in order to populate it, and other people begin having access to that resource before the first resource is complete. And obviously for certain kinds of media.
I
That's going to be a big thing, right? And so that's partly why I'm thinking about whether or not you're going to change URLs, and whether or not those URLs have specific entity tags, in that context. So for your replacement thing, I'm totally on board, exactly what you said. But for posting resources, especially ones that don't have a defined completion at the beginning of the posting, you may find that there's a little bit more power if you take the other approach and do give them sort of an entity-tagged view of it, because then the person posting can know what the other clients have access to at that point. Does that make sense?
J
Yeah, I get where you're coming from, Ted. I think this is a really interesting question, but I don't think it's fundamentally any different to the scenario where you have a resource that's changing continuously. There are a lot of, for instance, live TV streaming scenarios where you use something like HLS or DASH to access a resource, usually framed as: I have a manifest resource that tells me about little chunks of information that I can get, and that continuously updates.
J
The structure of the system is: you have the main resource, which is essentially continuously changing, which contains this manifest, and the caching semantics on that one are essentially just busted; you don't cache that thing. The chunks that you're pulling down each have their own identity, and ETags and caching semantics can then be applied independently, although that does potentially change in some of the live broadcast scenarios where you talk about resizing chunks and all sorts of other things. So I don't think we're...
H
Hi, Jonathan Flat again. I think something that isn't mentioned here, and may be mentioned as a minor thing towards the end of the slides if I remember correctly, is the potential for intermittent 1xx responses: something that a client may get over time, sort of like a progress indicator. I was wondering if anyone had considerations on that, maybe even for this purpose of ETags, or being able to demarcate those resources as they're being uploaded if other clients are accessing them. Thanks.
A
Thanks. I got onto the queue just to say: I think I've brought this up before, but we should remember that if we're trying to create a generic facility for HTTP, we don't necessarily have to choose one here; we could do both. It's very easy to fall into the thinking that, oh, we want high interoperability, so we should just specify one way to do it, but you know, it is also...
K
This is Hunter, file migration enthusiast. I very much like this kind of proposal; I think it's something really needed.
K
I tried to think a little bit about whether I would have a preference on this. Probably my gut feeling would also be for server-generated URLs. I found one scenario in which client-generated URLs would probably be helpful, which I found sort of ruled out in the proposal, which is parallel uploads. There might be scenarios in which you have multiple clients that won't be able to receive the ID given by a server.
F
Parallel uploads is a really good point. The draft currently does not consider them, to keep it a bit more simple for the first iteration. We have been experimenting with parallel uploads in production, and people are kind of split about them: some say yeah, it's good; some say it's not that good.
F
So
for
now
we
have
decided
to
like
keep
it
out,
but
it's
a
really
good
point.
If
people
are
interested
in
doing
parallel,
uploads
to
utilize,
more
bandwidth
or
upload
from
different
machines,
then
that
is
definitely
something
to
be
considered.
But
right
now
it's
not
yet
included.
J
Yeah, I'm just going to say that parallel uploads are kind of orthogonal to this question. If you do something like this, you can potentially support parallel uploads, maybe not immediately, but certainly after a round trip, which I don't think is a particular problem if you're talking about the need to upload in parallel at high volume. So I think we'll probably talk about that in another context. It's more the status check that gets complicated when you're doing parallel uploads, because now you have multiple chunks at different offsets, with gaps in between each.
F
Another issue that we have is that the upload creation procedure is not idempotent. (Assuming we go with server-generated URLs; with client tokens this would be a bit of a different issue.) The problem is: if the client does not receive the response for the upload creation procedure, it doesn't have an upload URL, it doesn't have an upload token, so it doesn't know what to do next.
F
Where do I resume this upload? Now, in theory, we could say: okay, just retry the request. But that causes problems, because it might result in duplicate upload resources on the server, and depending on your business logic this is something that you might want to avoid, where you say: I have one user, and this user only has a limited number of upload resources.
F
So there's this draft for the Idempotency-Key header in the HTTPAPI working group, and it would be a really great fit to put onto the upload creation procedure. Basically, that way you get retryable upload creation procedures: even if the client is not able to receive the response, it can just send the same request again with the same idempotency key, receive the same upload URL, and then basically go through the entire dance of upload appending and offset retrieving as well.
F
Of
course,
this
is
only
doable
if
the
client
knows
the
server
supports
it,
because
not
every
server
supports
either
potency
key,
but
it
would
be
a
really
great
fit
because
in
that
way
the
upload
procedure
is
not
only
resumable,
but
it's
also
rechriable
from
every
state.
So,
even
if
the
First
Response
even
failed,
we
can
still
retry
and
circumvent
these
errors.
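The retry behavior described here can be sketched as follows; this assumes a server that deduplicates creation requests by idempotency key, and all names are illustrative (the real header is the Idempotency-Key draft from the HTTPAPI working group):

```python
import uuid


class UploadServer:
    """Toy server that deduplicates upload creation by idempotency key."""

    def __init__(self):
        self.by_key = {}  # idempotency key -> upload URL

    def create_upload(self, idempotency_key):
        # A retried creation request with the same key returns the same
        # upload URL instead of creating a duplicate upload resource.
        if idempotency_key not in self.by_key:
            self.by_key[idempotency_key] = f"/uploads/{uuid.uuid4().hex}"
        return self.by_key[idempotency_key]


server = UploadServer()
key = "client-chosen-key-1"

first = server.create_upload(key)
# Suppose the response was lost: the client simply retries with the same
# key and receives the same upload URL, so no duplicate is created.
retry = server.create_upload(key)
assert first == retry
# A different key yields a distinct upload resource.
assert server.create_upload("client-chosen-key-2") != first
```

This is what makes the creation step retryable from every state: losing the response costs only a retry, never a duplicate resource.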
F
Is there any feedback regarding that point?
J
So there's always a risk when you take a dependency on something that's in this state. I think perhaps this might just be orthogonal, and we can simply say, at best, that there's a sort of informative reference saying: oh, by the way, if you want to do this sort of thing, this exists.
J
There are multiple challenges with idempotency keys. It's a little bit difficult for a server to guarantee that it can respect them in all scenarios, and it's kind of difficult for the client to know that the server will use them if the client provides one. So I'm sort of leaning towards, at best, a nod in the draft saying that you can do this. We don't need to solve everything in the space of uploads with this draft.
H
If you know that a server does support it, then you can just take this extra round trip to get that response anyway. I think there was some discussion about doing that with the Upload-Incomplete header, and then you can continue from there. You could do that for potentially large files, if you know that the server supports it. So, plus one to Martin: the idempotency key could maybe be an add-on, but not necessarily an integral part of the protocol.
G
So regarding the previous issue: if I understand correctly, some people favored server-generated URLs because then you don't need to worry about conflicts in the tokens and those things. Am I correct in understanding that this proposal essentially reintroduces the client-generated token again?
F
That does not apply to the idempotency key; it would only be used for upload creation.
G
F
That's a really great point. The idempotency key spec talks a little bit about this, and I think the server can handle it. So in theory, yes, they would map to the same resource and there would be a collision. You can also imagine that, in the context of a request, every request is associated with a user, and then you can have a separate space of idempotency keys for every user.
A
Thank you. I got on the queue just to say: personally, I think I agree with what Martin was saying, with the proviso that I suspect the Idempotency-Key spec is hopefully going to be done before this spec is going to be done, so I'm not too worried about the dependency there. But stepping back, this is a little similar to the last issue, in that we have a choice.
A
Maybe it's a stylistic choice, but it's also kind of a philosophical choice: are we defining a generic function for use across HTTP, where we point out how to combine it with other HTTP mechanisms, or are we defining a very thin, tightly bound together profile of how those mechanisms interact? Generally, HTTP extensions like this are hopefully more the former, where you can pick and choose them and combine them in interesting ways. So that's something to think about, at least.
F
Thank you, yeah, that's a really good point. The entire idempotency key is optional anyway, both on the server and the client side. So we can also leave it out of the draft for now, and if we see later that it is actually useful in production, we can still talk about adding it back. So yeah, thank you very much for that feedback. Let's go to the next slide, please.
F
The next slide is a relatively easy thing, I would say. Assuming again we go with the server-generated URLs: how would the server know that a request is the upload creation procedure?
F
How would it identify that the client is interested in resumable uploads? There are a few options there. We can always slap on a header specific to resumable uploads, saying that it is required in the upload creation procedure, and that way the server can identify that the client is interested in resumable uploads. This issue arises because the endpoint for creating resumable uploads may not only serve resumable uploads; it may also be used for uploading non-resumable files. So there has been this idea of using the Prefer header for that.
F
The client would include the Prefer header in the upload creation procedure, for example "Prefer: resumable-upload", and then the server sees this and it's like: oh hey, the client wants to do resumable uploads, I'm going to create an upload resource for it. The Prefer header has a kind of nice semantic, because the client merely prefers it: if the server doesn't support it, it's no big deal, it will just fall back to a standard upload.
K
This is Hunter. I wonder at which point in time the client should submit that; intuitively, I would say this is rather something that comes from the server initially, in a header or something. What I also would add on top (this is maybe a related but different topic) is the question of for which uploads this should actually kick in: from which size would you prefer to do so? Because, typically, what we see in APIs is this.
K
You have non-resumable endpoints or non-resumable upload functions, and you have the resumable upload, which comes with a considerable overhead in terms of, you know, establishing and finishing it. It might also have performance considerations on the back end, because the back end might handle it differently to commit the final upload and so on. So there's typically a sweet spot to be identified for where to switch between smaller file uploads and larger file uploads.
F
Yeah, maybe to respond to that: it's a really great question. From our side, the interest is to make resumable uploads work, in theory, for all file sizes; that's my point of view. Of course, the great thing about the Prefer header is that basically the server can decide.
F
If the client would also somehow indicate the file size, the server could decide: okay, this file is too small, I don't want to create a resumable upload for it; or the file is big enough, yeah, I will respond with an upload URL, and then the client uses that one. So with the Prefer header, the server could actually decide whether it is worth it to create an upload resource. That's a really good point: there are concerns that doing resumable uploads for small files creates too much overhead on the server side. Yeah.
H
Jonathan Flat. Yeah, overall I think the Prefer header is good and probably necessary, just from the client side: if the server sends a 104 regardless, or something, and a client doesn't support it, that could probably cause issues.
H
When we're starting to adopt this, though, I wonder if that kind of thing might slow down adoption, in terms of servers rejecting requests when a client does indeed support it, and so it could be kind of like drawing this line. I think Marius has a good point that we might want all files to just try to do this immediately. I think there's a little bit of thought that needs to go into whether or not we want to do that.
A
Thank you. So, speaking personally, and very strictly just addressing how you spell this on the wire: Prefer is loosely specified; it's a bit fluffy. It's not really clear what it's for, and there's not a lot of mechanism there. It tends to be used for things that are configured or used by the user, and I'm a little concerned there might be conflicts: even though in theory there won't be, in practice there might.
A
My
gut
feeling
is:
is
that
we're
defining
kind
of
a
proper
protocol
extension
here
and
so
giving
it
its
own
header
would
probably
be
cleaner
and
clearer,
but
that's
just
a
kind
of
a
gut
feeling.
It's
not
a
you
know
anything
important
Brun.
E
You can say: I want to send a literal of this size, and the server says yes or no. Or, with LITERAL+, you can say: I'm just streaming it to you right now. And there's a newer spec that came out a couple of years ago, LITERAL-, which says: if it's less than 4096 bytes, just send it; if it's greater than that, it has to ask first. I think something similar: something small enough that it's going to fit in a reasonable number of packets.
E
You say "just stream it", but the other side of this is saying to the server: I'm about to send you 100 gigabytes, and the server says no thanks. I don't want that streaming straight away; otherwise, the server has this stream coming at it and has to drop the connection, because there's no other way to tell you to go away.
E
I think setting a reasonable size and saying anything smaller than this you can send, anything larger you need to ask permission for, is a good pattern generally, and that's probably the way to do it. Which doesn't quite answer this question, but...
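The pattern Bron describes, modeled loosely on IMAP's LITERAL- (RFC 7888) and its 4096-byte cutoff, can be sketched as:

```python
# Threshold below which a client may send the body without asking first.
# 4096 bytes is the LITERAL- cutoff from IMAP (RFC 7888), used here only
# as an illustrative default; an HTTP mechanism could pick another value.
SEND_WITHOUT_ASKING_LIMIT = 4096


def upload_strategy(body_size, server_limit=SEND_WITHOUT_ASKING_LIMIT):
    """Return how a client should transmit a body of `body_size` bytes."""
    if body_size <= server_limit:
        # Small enough: just send it, no extra round trip needed.
        return "send-directly"
    # Large: ask permission first, so the server can refuse up front
    # instead of having to drop the connection mid-stream.
    return "ask-permission-first"


assert upload_strategy(1024) == "send-directly"
assert upload_strategy(100 * 10**9) == "ask-permission-first"
```

The design choice is a trade-off: small bodies avoid a round trip, while large bodies give the server a chance to say no before any bytes flow.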
H
I would note, though, that having this boundary between large and small file sizes can be a little difficult, especially because every client's case is different in terms of connection speed. So for one client, losing 10 megabytes of data and having to retry might be even worse than another client losing 100 megabytes of data, if their connection is much faster.
K
Yeah, this is Hunter once again. Bron's remark actually made me think that the overall question of what maximum file size the server will accept is also something that, as far as I'm aware, is not currently specified to be retrievable by the client in any RFC or standardized way, and it would probably be super related to this one as well, to maybe include in the considerations.
F
Yes, that's a very good point. Okay, thank you for all of the feedback on that. Let's try to go to the next slide, so we don't hold up the others. Right, this issue is about the Expect header in the upload creation procedure. The server can send an informational response; right now we use the 104 for that, but that might change. And the question arises because the client can also include data in the upload creation procedure.
F
Is there a conflict between the two informational responses that we have, assuming that the server responds with 100 (Continue) and then later might respond with 104 (Upload Resumption Supported)? Personally, I don't really know how the entire Expect header is implemented everywhere and whether that would cause complications, because it's kind of a special thing.
A
So, Expect: 100-continue is still used, perhaps unfortunately not a lot, though. And you can send more than one informational response, or non-final response, before you send the final response, so, strictly speaking, that should be okay. How it actually interoperates with live software will be really interesting to find out.
F
Okay, if there are no concerns regarding that, then let's go, please, to the last slide. Right, so we've talked about the four issues that are here in a bit bolder font. Thank you all for that feedback already. There are a few other open issues, which I would call a bit more minor.
F
There's one issue about the HEAD responses we have: they always change, because the upload offset changes, and that makes them kind of not cacheable. There has been a concern voiced on the mailing list that this is not really the idea of a HEAD response, because it should be cacheable. So yeah, that's the concern. There's also the proposal, or the idea that was also voiced by Jonathan Flat just a minute ago, that the server can regularly send informational responses to let the client know how much data it has received.
F
That's really interesting, because that way the client can remove data from its buffer: if it knows the server has stored this information safely and securely, then the client can remove it from its buffer again. The last two issues are a bit more vague. In theory, there's this idea that using resumable uploads we can also pause uploads, to give priority to other uploads.
F
It's just an idea; it might not even merit a mention in the draft, but it's an idea that could be explored. The last one is a bit more interesting: whether it's possible to integrate resumable uploads into standard HTML forms. So that, for example, you could use multipart uploads with resumable uploads, and that would natively work without intervention from the developers. Just a few ideas; if there's any feedback on that as well, it would be really helpful. Otherwise, I'm not sure how much time we have left for discussion.
K
One thing that came to my mind that might be very handy in this context is hashes to be returned after the upload. Is that something you consider, or are likely to consider, returning?
F
That might be interesting, yes.
F
So, previously we have also been working on resumable uploads outside of the IETF, as a separate project. It's open source, and we have defined something like that as well: using checksums to ensure the integrity of uploads. People want to use that, but it has not been considered here yet, because it's kind of a broader scope, and we want to keep the draft smaller. But that's a really good point: maybe, if the server wants to, it can also return checksums.
F
Thank you for the feedback; we'll try to incorporate it as best as we can. Thank you.
A
Great, okay, let's go ahead and move on. Next up is retrofit structured fields.
A
Can
folks
see
that
it's
going
to
get
bigger
there?
It
goes
so
we
have
just
four
issues
open
on
this
spec,
the
date
definition
in
retrofit.
Hopefully,
people
have
seen
by
now.
We
have
a
zero
zero
of
the
retrofit
sorry
of
the
structured
Fields.
This
document
out
now,
as
of
I
think
yesterday
and
the
zero
one
of
that
will
take
the
date
type
out
of
the
retrofit
draft
and
into
that
best
specification.
A
So
now
we
can
close
this
issue
once
that's
done,
and
that
leaves
us
just
with
these
three
issues
here.
One
of
them
I
think,
is
a
almost
editorial
issue
from
Martin
regarding
how
we
specify
the
the
caveats
around
how
to
how
to
fix
up
retrofit
to
make
retrofit
back
porting
more
successful.
We
did
have
a
discussion
about
whether
we
wanted
to
actually
put
those
caveats
or
those
modifications
into
the
structured,
Fields,
parsing,
algorithms
and
I
think
we
landed
on
not
doing
so,
but
I
wanted
to
double
check
that.
A
So the idea is, for example, when you're parsing around the semicolon that separates some of the elements: strict structured fields parsing doesn't allow white space there, but if you allow white space, you get a higher success rate when you're doing back-ported structured fields. So the question is: should we modify the structured fields algorithms, or add a flag to say "I want to be in compatibility mode", for example? And I think the sense was that we'll probably want to keep those modifications in the retrofit document. Any comments, Julian?
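The compatibility-mode flag being discussed can be sketched as a toy parameter parser (a hypothetical illustration only; `parse_parameters`, its name, and its exact behavior are assumptions for this sketch, not the RFC 8941 parsing algorithm or any draft text):

```python
def parse_parameters(text, compat=False):
    """Toy parse of ';'-separated key=value parameters from a field value.

    Strict mode rejects whitespace around the ';' delimiter; compat=True
    tolerates it, mimicking the higher success rate when retrofitting
    legacy field values that were not written with structured fields
    in mind.
    """
    params = {}
    for part in text.split(";")[1:]:
        if part != part.strip() and not compat:
            raise ValueError("whitespace around ';' not allowed in strict mode")
        key, _, value = part.strip().partition("=")
        params[key] = value or True
    return params
```

A lenient call such as `parse_parameters("foo; a=1 ; b=2", compat=True)` succeeds where the strict call raises, which is the trade-off the group is weighing.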
A
L
Yeah, so I actually looked at this ticket this morning, and I think the current text, which essentially says you will need to pre-process the field value before sticking it into the structured fields parser, is kind of crazy, sorry, because it essentially means that people will have to apply some heuristics to transform the field value into something that the parser will accept, and that seems like a very dangerous operation to me, because to do it properly you would have to parse the field correctly, right?
L
You can't just apply a regex or something like that to a field value to produce something that the structured field parser will take. So I think if we land on that solution, we'd better not do retrofit at all, because that's really, really risky. So I would absolutely prefer, for the common cases, like a quoted string and white space and so on, to actually have a...
L
M
A
Okay, so for this issue, the proposal is to define, in the style of structured fields parsing, you know, effectively, as they say, monkey-patching it: "after step three, do this, then continue processing", and so forth and so on. That would be one way to do this, keeping it in retrofit. The other would be to actually change the parsing algorithm in structured fields bis.
L
I believe it should be in the structured fields spec, because we actually want the implementations of structured fields to be consistent. And there's also the slightly orthogonal question whether, if we realize that, for instance, quoted strings are common that contain characters that are not representable in structured fields, then maybe... I know that opens a new dimension for the structured fields revision.
N
L
Maybe we should think about actually opening up the string syntax in structured fields to be compatible with HTTP quoted strings, instead of trying to work around them.
J
A
I think that's the goal. Personally, I would be concerned if we just modified the structured fields parsing algorithms for all structured fields parsing. Although I can't foresee anything immediate, I can't help but wonder if it would cause some compatibility issues, even though structured fields is relatively new.
A
D
J
At this point, at some level, I think this is just an editorial choice.
J
Because you don't want to complicate things too much for someone who's doing the retrofit, but you also don't want to complicate things too much for the people who are only doing structured fields and now have to deal with this optionality, and the algorithm gets more complicated. There are trade-offs in both directions.
A
J
A
Personally, my concern is that if we leave it in retrofit, we may effectively have to duplicate some of the algorithms wholesale to specify the modifications, or make a really ugly "go to step three, do this", and that's just... that's sort of an...
J
G
J
B
Mike Bishop. So if you think about who's going to use these: if you have somebody who's trying to parse a legacy header using a structured fields library that they do not themselves control, unless the person who wrote the library knew to add the flag, it won't be there. I think the odds of that happening get a lot better:
B
If the structured fields spec says you need to have this flag in your parser, then somebody who wants to apply that parser to a header in a more generic implementation just sets the flag. Otherwise they have to go in and start messing with the implementation of step three, which they do not really want to do, and they didn't really sign up to work on that library.
D
A
The next one: I noted that there are some differences, or it was pointed out to me, and I'm sorry, I forget who pointed this out to me, that there are some differences in error handling between how we specify things in HTTP parsing versus structured fields. For example, when there are multiple instances of the same value in a list header, some list headers specify, I forget, you know, first one wins, and I think we specify last one wins, or vice versa, in structured fields.
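The error-handling difference being described can be shown with a toy example (hypothetical illustration, not text from either spec): the same field carrying a duplicated member name, read under "first one wins" versus "last one wins" semantics.

```python
def first_wins(pairs):
    """Keep the first occurrence of each duplicated key."""
    out = {}
    for key, value in pairs:
        out.setdefault(key, value)  # later duplicates are ignored
    return out

def last_wins(pairs):
    """Keep the last occurrence of each duplicated key."""
    out = {}
    for key, value in pairs:
        out[key] = value  # later duplicates overwrite earlier ones
    return out
```

Given `[("max-age", "60"), ("max-age", "3600")]`, the two policies produce different effective values, which is exactly the kind of behavioral difference a retrofit mapping needs to call out.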
A
So we just need to point that out as a potential difference. And finally, Chris and a few other people have added that we should add some mapped fields for Authorization and WWW-Authenticate, which I think is a reasonable thing, because people seem to be interested in specifying new authentication schemes these days. So I don't think that's controversial, I hope. Any other feedback on this spec?
D
L
So there is no news on the actual content of the spec. I was kind of limited for several reasons over the last few months, and what time I had went to looking at all the other aspects of the working group. But that said, I actually looked at the open issues today.
L
My gut feeling is that we have several areas where we have open tickets, like improving the documentation of the motivation, how to deploy this, and how to decide whether to use QUERY instead of GET, explaining the differences. So these are mainly editorial things. Then there's the discussion about:
L
Should we actually specify how a GET with a form-based query string can be transformed into a QUERY? I think Mark had the idea that we should actually have that example, in which case we actually should go into defining the semantics of that media type for QUERY, the URL-encoded form-based media type thing for POST, whatever it's called; I keep forgetting the name.
L
So we probably should: if we say that media type is a QUERY payload, with a definition of what it means for QUERY, then we should actually define that for QUERY.
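The transformation being discussed, a GET whose query string carries form-encoded parameters becoming a QUERY request with the same parameters in the body, can be sketched as follows (a hypothetical helper; the function name and the exact mapping are illustrative assumptions, not something the QUERY draft currently defines):

```python
from urllib.parse import urlsplit

def get_to_query(method, url):
    """Rewrite a GET with a form-style query string into a QUERY request
    whose body carries the same parameters.

    Returns (method, target, headers, body). Requests that are not GETs,
    or have no query string, pass through unchanged.
    """
    parts = urlsplit(url)
    if method != "GET" or not parts.query:
        return method, url, {}, b""
    target = parts._replace(query="").geturl()
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return "QUERY", target, headers, parts.query.encode("ascii")
```

For example, `GET https://example.com/search?q=http&lang=en` would become a `QUERY` to `https://example.com/search` with body `q=http&lang=en`.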
L
And then we have several open issues about redirects, conditional queries, and caching. All of these probably could be dealt with by saying "no, we don't do that", or by spending lots of time trying to get these things right. So it depends a bit on the energy that we have for going into these issues. In theory we could publish a very small spec that doesn't talk about these things, but then we would lose some of the benefits of actually defining this.
L
So, depending on how much energy we have, there's either a little work remaining or a lot of work remaining, and I'd like to see people volunteering to actually help us nail these things down.
A
That sounds good. Any comment or feedback on this spec? I think it may be just a matter of finding the right time, or waiting for the right time, when people do have some bandwidth to work on this. I know there are a lot of people still interested in it.
L
Yeah, so my feeling is that we need to get the two digest-related specs out; those need our attention to get to last call and get published. Once they are out, I think for this working group the main open items would be the structured fields revision, resumable uploads, and this thing, and then we should have more bandwidth for that.
A
Sounds reasonable to me. Ben Schwartz?
N
Hi. I am very naive about this topic. Could you just explain, if you could, why Idempotency-Key is not sufficient? You know, when we started this work, I don't think we had Idempotency-Key, at least that's my recollection. That's...
N
Yes, my question is: if we have POST with Idempotency-Key, do we still need QUERY?
L
I agree that there's some overlap here, but QUERY is not only about making things idempotent, but also about making things safe, and just because we have a new mechanism to make a request repeatable doesn't make that request safe, right?
E
N
To, presumably, a party that isn't already part of the conversation and doesn't understand the application-layer semantics.
N
D
A
Okay, well, thank you, Julian. So let's go ahead and move on; we're skipping unprompted auth as discussed. So next up, origin deployment. Sudheesh?
O
Hello, I hope you can hear me.
A
O
Okay, I'll get started. Hi everyone, I'm Sudheesh, a PhD student at the University of Washington, and on behalf of all my co-authors at Cloudflare, where I interned over the spring and the summer, I'm presenting some of this work based on the experiments and experiences in deploying ORIGIN frames and trying to experiment with connection coalescing. Next slide, please.
O
So what is connection coalescing? Most of you here might already be aware: it is the ability for clients to reuse an existing underlying connection to retrieve additional resources, and in the process prevent the creation of new connections. Let's try to understand this with this example here. It's due to domain sharding techniques used by developers and web administrators, which are still remnants of the old HTTP/1 protocol, used to trick browsers into creating multiple TCP connections.
O
We find that to load the example.com web page, it depends, in this example, on three other domain-sharded sub-resources. Two of these belong to the same domain, while one belongs to an external CDN, and this configuration is quite common in today's web pages. But it is interesting because the client, which is the browser in this case, creates multiple potentially blocking DNS queries to the resolver and ends up, in the best case, with the IP addresses matching and the connection then being reused. So, next slide. But yeah:
O
If we look deeper, it looks like this. The client makes a query for example.com, and the resolver returns a set of IP addresses for the client to use. The client chooses one of the IP addresses and creates a TCP connection, followed by a TLS connection to establish a secure channel, and then retrieves the HTML content of the web page with an HTTP request. Great. So what happens next? The browser now understands the various sub-resource dependencies that are needed, and it creates the necessary DNS queries.
O
This is where things get interesting. Next slide, please.
O
So the behavior that you see today actually changes based on which client you're using. Chrome, for example, makes the query for cdnjs in this case and receives the two IP addresses, IP B and IP C, in the response from the resolver, and since there was no active connection open to IP B, which could have been there from the first set of DNS responses, it actually creates a new connection and retrieves the required content. Next slide, please. Firefox, on the other hand, reuses the underlying connection if the IP addresses have a transitive relationship between them.
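The two client behaviors just described can be sketched as two coalescing policies (an illustrative simplification; function names are hypothetical, and the TLS certificate checks a real client also performs are omitted here): a strict policy that reuses a connection only when the already-connected IP appears in the new hostname's DNS answer, and a looser "transitive" policy in the spirit of what Firefox is described as doing.

```python
def can_coalesce_strict(connected_ip, new_answer_ips):
    """Reuse only if we already hold a connection to one of the
    addresses returned for the new hostname."""
    return connected_ip in set(new_answer_ips)

def can_coalesce_transitive(connected_answer_ips, new_answer_ips):
    """Reuse if the DNS answers for the two hostnames share any
    address at all, a transitive relationship between them."""
    return bool(set(connected_answer_ips) & set(new_answer_ips))
```

Under the strict policy, connecting to IP C while the new answer lists B and C forces a new connection if the existing connection went to a different address; the transitive policy would reuse it as long as the answer sets overlap.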
O
So with this understanding in place, we established our research questions, and we set out to understand how much of the internet today is coalescable, where these sub-resources are typically located, and how they are really distributed.
O
We began our measurements by taking the most popular half million Tranco domains (Tranco is a popular ranking list of the top million domains). We took half a million of those and successfully navigated to around 315,000 of them, and for each of these navigations we extracted the request timelines, which helped us understand where most of the sub-resources were located. We find that over 14% of the websites depend on sub-resources served from servers located in at least one autonomous system different from their own, and 6% of the websites today rely fully on sub-resources served from the same autonomous system.
O
Another interesting part is that more than 50% of the web pages that we browsed to need resources from no more than six different autonomous systems to obtain all the necessary information and render the page. So, next slide, please.
O
So now that we have an idea of the distribution of these sub-resources, we wanted to understand where these resources were located. A lot of these websites rely on sub-resources obtained from Google, Cloudflare and Amazon, and the top 10 autonomous systems, which you see in this table, are responsible for over 60 percent of the total requests for sub-resources that we see in our measurement scan.
O
Building on top of the previous work that was published at SIGCOMM by Marwan Fayed and collaborators, we approximate the potential for connection reuse by the number of unique autonomous systems that we contact, and these results show us that today's connection coalescing opportunities exist because of CDNs. Next slide, please.
O
So this brings us to an interesting standard at the IETF called the HTTP ORIGIN frame, which was initially proposed by Nottingham and Nygren from Akamai. Despite the standardization of ORIGIN frames in 2018 as RFC 8336, it has not yet been heavily adopted in the internet ecosystem today, and this could be because of a few challenges posed by ORIGIN frames.
O
For example, the default ORIGIN frame allows servers sending the frames to include any arbitrary hostname without authoritativeness, and on the client side, very few clients even support ORIGIN frames. From our observation so far, Firefox is the only client which has support for ORIGIN frames, but because of the issues with authoritativeness and the contents of the frame, the client continues to incur an additional DNS query which could have been prevented. Next slide, please.
O
But these challenges, and the extra potentially blocking DNS queries, could be removed by establishing some authority on the ORIGIN frames. And it's really important to understand that coalescing here not just allows the reuse of the underlying TCP connection, over which multiple TLS handshakes could be made, but instead pushes the reuse to its limit by combining TCP and TLS connection reuse.
O
So here, in request five, you see the HTTP request without the TLS connections which you saw before. Next slide, please. So once a connection is established by the client to the server, the server sends a hint to the client telling it which additional hostnames it is authoritative for, and it serves a modified certificate. I think we probably skipped the slide; can we go back one? Yeah.
K
O
D
O
Yeah, so this is where I was. I think I didn't see the little dots before. Yeah.
O
So when the new connection is established, the server sends a hint to the client telling it which additional hostnames it is authoritative for, and it serves this modified certificate. It is possible for server operators, maintainers, or the CDNs to actually change the certificates and include additional names in the DNS SAN extension of the TLS certificate.
O
So, next slide, please. Knowing this information, we go back and look at the scans that we ran and model the impact of using ORIGIN frames. To do this, we combined the various resources from the same autonomous system and carefully truncated the DNS and TLS connection times wherever possible, recreating a new rendered timeline of events. So let's talk about the example you see here, for a website example.com which is served by the CDN network also serving cdnhost.com. You see on the top:
O
The second and third requests result in DNS queries and TLS connections being established, but the fourth request, for static.example.com, is blocked until the third one is actually complete. In the case where an ORIGIN frame is sent, the example.com stream 0 would also include the information for assets.cdnhost.com and static.example.com, and this would prevent the additional DNS and TLS connections and reuse them to retrieve the resources. So the client bottlenecks, such as the wait time or block time, continue to remain, and the timeline is moved ahead by reconstructing these.
O
Our model says that deploying ORIGIN frames could reduce the median number of DNS and TLS connections by over 60 percent, which is shown with the arrow pointing to the green line on the left, compared to the restricted IP-address-based coalescing mechanisms in the blue line, and our actual measurement, which is in the red lines. So the theoretical modeling kind of implies that the page load time for the websites might also get faster because of the smaller timelines.
O
Next slide, please. So at Cloudflare we ran some large-scale experiments for both IP-based and ORIGIN-frame-based coalescing. For the IP-based coalescing, we configured and coordinated our DNS resolvers and the serving infrastructure to respond to clients with the same IP addresses for each of the sub-resources that are served by Cloudflare, and this allows clients to reuse connections to the same IP. But performing such coordination between the DNS and the serving infrastructure is particularly challenging because of the various traffic engineering rules and service-level guarantees the CDN networks might have.
D
O
I see there's a question; it looks like Jonathan is already on it, but I see Martin's question, which is: does the modeling consider the effect of the congestion control window on the transfer of sub-resources?
O
It actually does not, at least in the way we did the modeling; it was kind of naive, and we wanted to see what the overall possibility might look like. But I'm happy to take more questions later. Yeah, thank you, Mark.
O
So where was I? Yes, yeah. So, overall, the deployment of ORIGIN frames gives us some advantages: there is no longer this need to coordinate between the DNS and the serving infrastructure itself, and the CDN can perform its own traffic engineering practices without any disruptions. And with the authoritativeness changes, it allows clients:
O
It allows these clients to prevent additional DNS queries. We also found that approximately 92 percent of the websites in our measurement need fewer than 10 additions to their certificates to achieve this effectiveness. Next slide, please. So, overall, the usage of ORIGIN frames makes coalescing practical while posing relatively smaller additional overheads on the network operators, and it reduces the additional communication costs at the clients and makes little difference to the on-the-wire activity that happens. So, to validate our results:
O
We sampled 5,000 websites proxied by Cloudflare's infrastructure, split them up into a control and an experimental group, and deployed ORIGIN frames for the experimental group, with modifications to the certificates. Our goal was to attempt to coalesce the connections from the websites to cdnjs, which they were dependent on. On the next slide, please: so our results were very interesting.
O
Our results show that connection coalescing with ORIGIN frames does work in practice. Overall, in our deployments we found 50% fewer connections to cdnjs during the experiment, with no changes observed in the control set, and this has a lot of implications. It implies reduced cryptographic verification needed to verify the certificates by the clients, and the active measurements from clients also indicate that around 65 to 70 percent of the connections can successfully be coalesced.
O
Another implication is a reduced number of connections for the CDN operators, which means fewer sockets being used, and as a result it allows more clients to connect to the same infrastructure, which definitely has a set of advantages.
O
A natural question which follows, based on the timelines that I showed before in the modeling, is what happens to performance. In our modeling we find that if every operator deployed ORIGIN frames efficiently, it could result in an improvement in page load times, but individually, with only Cloudflare making the changes:
O
There is a very minor improvement in page load which we might be able to see. Our active deployments, however, did not see any improvements for either the IP-based or ORIGIN-based deployments, but we suspect this could be due to various other path characteristics or bottleneck shares, and maybe it could benefit from rerunning these experiments with more operators who would deploy ORIGIN frames. So currently, instead of claiming improved performance or page load time metrics, we conservatively claim that ORIGIN frames make performance no worse. Next slide, please.
O
The other implications, like the reduced server-side resources, the improvements to clients for their cryptographic compute, reduced state maintenance, and lesser burdens on traffic engineering, are probably stronger motivating factors to actually deploy connection coalescing in the world. Next slide, please.
O
So one key challenge we observed, which might explain the lack of adoption of ORIGIN frames, is the lack of support for server-side ORIGIN frames, and we contribute an implementation of ORIGIN frames with our code changes to golang; the changes are maintained in the golang and net module forks on the Cloudflare GitHub org. Next slide, please.
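For reference, the wire format a server-side implementation has to emit is simple: RFC 8336 defines the ORIGIN frame payload as a sequence of entries, each a 16-bit length followed by the ASCII serialization of an origin. A minimal encoder sketch (illustrative only, not the Cloudflare Go code being described):

```python
import struct

def encode_origin_frame_payload(origins):
    """Encode the payload of an HTTP/2 ORIGIN frame (RFC 8336):
    zero or more entries of a 16-bit Origin-Len followed by the
    ASCII-Origin bytes."""
    payload = b""
    for origin in origins:
        raw = origin.encode("ascii")
        payload += struct.pack("!H", len(raw)) + raw
    return payload
```

The resulting bytes would be carried in a frame of type 0x0c on stream 0; framing and flow control are handled by the HTTP/2 stack and omitted here.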
O
So, interestingly, during our experiments we realized that ORIGIN frames should be deployed with caution: there are many non-RFC-compliant network stacks deployed out there, which might result in incorrect behavior. Our experiments uncovered a compliance issue in the network stack from a large antivirus and internet security software vendor, with the internet security software installed on client devices.
O
It did not discard unknown HTTP frames, as was recommended in the specification, but instead tore down the entire connection, so the clients could not access some of the websites that were in our experiments during the two-week period when we ran these.
O
Next slide, please. We really believe that the key motivator for actually deploying ORIGIN frames is the privacy benefit to the ecosystem rather than performance, though this needs additional investigation. Very simply put: for each coalesced connection, we hide what would otherwise be a plaintext SNI field, and we prevent plaintext DNS queries from being leaked to network adversaries. Next slide, please. But the most interesting one is resource scheduling, which is really, really exciting, and the usage of ORIGIN frames:
O
It moves the scheduling opportunities to the resource endpoints, which are the servers and the clients. Without ORIGIN frames, multiple connections compete for the available network capacity. For example, the server sends two resources, A and B, which are requested by the client over multiple connections, but the client might have made the requests in order, expecting A followed by B, so as to render the page faster, and these additional connections are subject to all the non-deterministic path characteristics.
D
O
Yeah, thank you. So the usage of ORIGIN frames allows servers to schedule and prioritize these resources, sending higher-priority resources first, and in the process it will be able to provide the necessary resources in the order that the clients might actually want them. This work is really a call to action for other operators and large content providers to deploy ORIGIN frames.
O
There are various benefits to ORIGIN frames, and maybe clients should also deploy support for authoritative ORIGIN frames, because the servers can only provide the hints; they can't really enable coalescing until the client actually wants to enable coalescing. But this also opens up a lot of exciting opportunities for DNS early hints, certificate frames, and improved adoption of the prefetch or preload attributes, while also allowing content providers to efficiently perform HTML rewriting, if they would want, and optimize the delivery of these resources. So, next slide.
O
So, with that, I'm open to questions.
A
Thank you, Sudheesh. I think Tommy and I wanted this presentation because it's relevant to not only our past work but our ongoing work, and it's really good to get these checkpoints to see how the mechanisms we define are going. So if people have questions or comments, please queue up. While folks queue up, I just want to note it's almost 11 A.M. Some of you may realize it is Armistice Day, so at 11 o'clock, at least in, I believe, Commonwealth countries:
A
People typically take one or two minutes of silence to recognize the sacrifices of people beforehand. I don't think we're going to pause proceedings here to do that, but if you want to leave the room, please feel free to do so, and we'll continue with the agenda. Thank you. So, Brian.
P
O
Yeah, so for all the measurements that we ran, we've done it with clean profiles with no caching, and the reason was we wanted to actually observe the behavior on the network itself. If browsers cache a lot of resources, we would not really see the need for coalescing, but we wanted to study the impacts of coalescing itself, so we ignored the aspect of caching completely.
O
Cache, yeah, yeah. That's definitely a very, very valid question, and I don't really have an answer for that.
Q
Hang on, I was just wondering: you didn't really lay out the exact mix of H2, H3, congestion controls, all that stuff. I guess you won't always necessarily be aware of it all, but I wondered what sort of breakdown there was in terms of the different protocols, because that could affect the performance quite a bit. If it's just HTTP/1 over TLS, as opposed to, say, H3 over QUIC, that could make quite a big difference when you're maybe trying to put multiple requests down the same connection at the same time.
O
Yeah, sorry, I really should have made it clearer at the start, but we were focused only on HTTP/2.
D
Q
A
O
A
Okay, next on the agenda: Ben, are you ready with modern HTTP proxies?
N
So in HTTP/1.1, and even prior, this is what proxies looked like; this is just a reminder, this is what proxies have looked like for a long time. For an HTTP request proxy, you use this thing called absolute-URI form, where you take the URI that you're trying to fetch and put it in the request line, and then in HTTP/1.1 you can have a different Host header that identifies the proxy, and the same sort of arrangement goes for CONNECT.
N
These have some really, I think, unfortunate properties. One is that you can only have one proxy per origin. So, unlike everything else in HTTP, which exists on a path, so you can have more than one on every origin, these proxy services don't have a path of their own; the service is only identified by this Host header. And then, to make it even worse:
N
Virtual hosting of these things is essentially impossible, so you can't share multiple proxy origins on a single IP address. You could in HTTP/1.1, using this Host header, but starting in HTTP/2 there is no equivalent of absolute-URI form; it's only possible to express a single authority for each request. So that means the proxy just has to know what the actual authority of the proxy is; it's not expressed in the request. With TCP transport proxies, like HTTP CONNECT, the same problems apply, plus:
N
I just wanted to note that if you have a mix of IPv4 and IPv6 addresses, it would be nice to be able to get Happy Eyeballs from the proxy, but instead all you can do is pass single IP addresses to the proxy. Next slide.
N
Thank you. So we have had the MASQUE working group for the past couple of years, and the MASQUE working group had to solve essentially the same problems for UDP and IP proxying, and they dodged all of these problems. They recognized that we have these problems with the old mechanisms, and so they came up with proxy mechanisms so that, going forward, we don't have this problem.
N
The way it works is that the proxy services are identified by URI templates, and the template encodes the host of the target into the URI path of the request, so that makes it entirely possible to have an explicit proxy authority which is clearly separate from the host that you're trying to reach. Next slide.
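The URI-template mechanism being described can be sketched like this (illustrative only: the template URI, variable names, and helper are placeholder assumptions in the spirit of the connect-udp templates, not text from any draft):

```python
from urllib.parse import quote

def expand_proxy_template(template, target_host, target_port):
    """Expand a MASQUE-style proxy URI template such as
    'https://proxy.example/tcp/{target_host}/{target_port}/'.
    Percent-encodes the host so that, e.g., IPv6 literals containing
    ':' survive as a single path segment."""
    return (template
            .replace("{target_host}", quote(str(target_host), safe=""))
            .replace("{target_port}", str(target_port)))
```

Because the proxy's own authority lives in the template URI while the target lives in the expanded path, one IP address can virtually host many distinct proxy services, which is the property the old CONNECT shape lacks.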
N
A
J
N
So this draft just proposes to take that strategy and use it to create a modern version of these classic HTTP proxy functions. connect-tcp is the more obvious one, certainly: it uses extended CONNECT, just like connect-udp and connect-ip, with a new protocol.
N
But the most important thing I want to put to the working group is: if you were designing HTTP request proxying or HTTP CONNECT today, how would you do it? And, you know, can we write a draft to say, okay, here's how you do it, in the way that we think would really be best today? Next slide.
N
It does happen to line up nicely with another draft that I wrote about taking a bunch of different kinds of things, like connect-udp and connect-tcp, and putting them all in a nice collection together so you can use them as a unit.
N
There are some charter scoping questions about what could fit in the MASQUE working group; MASQUE is re-chartering, but it's definitely not clear to me that any of this work would fit in MASQUE even after a recharter.
J
Hi Ben. I think you've probably already received this feedback, but I think there are a number of people who have different levels of comfort with the two different pieces in this draft. I think the TCP CONNECT thing is something that a lot of people have expressed interest in at various times, and the idea that you might be able to ask a particular resource for a connection to a remote host is an interesting one. The request-proxying side of things is far less clear in terms of intent, and:
J
I think I'd like to see that split out; that's the one that engages with all the difficult questions.
J
Well, why don't you just put the URL of the target and send that request to the proxy, without any need to have an identifier for the proxy? Or why don't you tunnel binary messages? Those sorts of questions are things that we would have to work through.
J
N
J
I think I would rule out the HTTPAPI working group for this, but this is something that this group could take on very easily. I think this group could also say MASQUE is capable of doing it, but I think the decision starts here, at this point, because the protocol is here, and this is really core protocol mechanics.
B
N
So for OHAI, I should say that is my intended use case for this HTTP request proxy functionality. This is somewhat controversial, I guess, within the OHAI context, but right now OHAI essentially says you need something that looks exactly like an HTTP forward proxy, but we don't specify how you actually use it; that's considered out of scope. So I want to come up with a standard for that; otherwise it's essentially by private arrangement.
R
Yeah, some minor points we can debate in the future aside, I think this is a good idea; I like it. I think this is the right working group for it. As I said in the MASQUE working group earlier this week, I think this is general proxying, and the MASQUE group is trying to, or at least I want them to, avoid becoming the general proxying working group. The one concern I have, before I say "yes, let's adopt this now", is: well, I think this is a good idea and I like it.
I don't know if it's enough of a good idea for all the legacy proxy implementations to rewrite onto this new way of doing things. So I'd like to see a little bit more discussion of the use cases: the clients we think will want to implement this, the servers that we think will want to implement this. I know you mentioned OHAI a minute ago, so maybe there's stuff to do there, but I'd just like to see a little more discussion of what we think this will be used for, besides just "hey, everyone, rewrite your old stuff."
R
A
S
David Schinazi, MASQUE enthusiast. I just want to echo what's been said before. Over at Google we're building new privacy proxies, and in the cases where we're building stuff from scratch, doing the new connect-TCP sounds interesting.
S
We don't have a use case for the other bit, though, and as Martin pointed out, there would be dragons, so I would unbundle the two. And then, on the question of the working group, to echo the earlier point: the goal of the new MASQUE charter, at least as I personally would like to see it, is not to become the place for all of the proxy things.
A
Thanks, David. So personally, my feedback is: I think this is probably fine, but we need to have a discussion about how it's positioned, and maybe the terminology that it uses. I highly doubt that people deploying HTTP proxies today and writing HTTP proxies today are going to be enthusiastic about switching over to this just for the sake of it, and so maybe we shouldn't be calling this "modern HTTP proxies", because it implies that it will replace them. I think it's an alternative for other use cases, but we should.
A
It's great to be ambitious, but you know. Any other feedback? Actually, I locked the queue, but I think we're done here. Tommy, what do you think: do we need to take a hum on adoption here, or just take it to the list?
C
I think at this point it may be good to take it to the list. It sounds like we have pretty consistent feedback that there should be a split here, and there's more clear support for the TCP side of things, which I personally agree with as well. So maybe, Ben, if you can provide a renamed, smaller, focused document on TCP, and maybe split out the message proxying, then we'll take that to the list.
A
Finally, we've got HTTP SASL, and thanks to David Schinazi doing his session on Monday, we're perfectly on time. So let's get going on that one.
M
M
Microphone? That might help. Okay. I'm not, I wouldn't say, as native and as at home in HTTP as you guys are; I fly into protocols from security and from cryptography.
M
And yeah, sure. So, most protocols stem from a day before HTTP; of course, HTTP has a very long history. Many have adopted SASL because it gives flexibility: the client can choose a mechanism that it can support from a list, and it can choose something that matches its desire for cryptographic strength, and might even involve things like channel binding, which in general is very difficult to do in HTTP, but in particular use cases might actually be very useful.
M
M
So, HTTP authentication appears to be an island. It's defining its own mechanisms, sometimes inspired by SASL, sometimes in another way, and I think it's just double effort, and it's a pity, especially because there's such a strong focus on the browser for authentication purposes, so that RESTful applications sort of stay behind. That could be better, I think. A few.
M
The guy implementing that came up to me, and I explained to him how to do this in SASL. He was very happy that we had an implementation for HTTPS, because that meant he could demonstrate his cryptographically very advanced mechanism in the protocol that most people favor to see a demo in, and to use, of course. So, next slide, please.
M
This is just a miscellaneous point that I ran into when you look, and if I'm wrong, please tell me: as far as I understand, the userinfo in a URI defines authority, so it's part of where to locate the resource, and, well, we all know that basic authentication has been used a bit outside of the specification. I.
M
Don't think there has been a specification to put a username there; it seems to be a rather odd part of the specification of the URI's authority component, and that, I think, has caused a lot of confusion about usernames in HTTP, up to the point where it's not even used. It's actually forbidden. URIs are a
M
Whole domain on its own. I sometimes feel a desire to publish information, to publish resources, in a way that is not iterable in DNS, because DNS can very often be iterated, even with DNSSEC. It would be very nice to have a way to conceal, like, a home page, if you consider it private, independent of what your URIs do.
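A small illustration of the userinfo point from a moment ago: generic URI parsing still exposes a userinfo component in the authority, even though the http(s) schemes forbid generating it in requests. The URL below is made up for the example.

```python
# Generic URI syntax (RFC 3986) allows userinfo in the authority, and
# urllib will happily parse it, even though HTTP itself says senders
# must not generate the userinfo subcomponent in http(s) URIs.

from urllib.parse import urlsplit

parts = urlsplit("https://joe@example.com/private/")
print(parts.username)  # joe
print(parts.hostname)  # example.com
```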
M
M
Quantum computing, of course, is one of those risks that's threatening us. It would be really a pity not to be agile and easily adopt new mechanisms that might help with that. Channel binding is one of these things that you can't, I think, prescribe for HTTP, because there are places where you can't rely on it, simply because people are hopping connections all the time.
M
That's part of HTTP, but a client that doesn't do that might actually benefit from such a facility, depending on the application. And, well, I mentioned OPAQUE already: there was a guy who really enjoyed having a way to use HTTP with his mechanism.
M
D
M
So this is basically how it looks. The SASL part is not a surprise, of course; the realm is what it always is.
M
M
This is where this proposal does something that an early attempt, I think 10 or 12 years ago, failed at, because the HTTP authentication framework wasn't defined at that time, but it was already felt improper to store state on the server, which was part of the design back then, because SASL can make a number of iterations and may need to store data from one to the next. So.
M
An extra token serves that purpose: it just bounces back and forth, with encryption and a signature. And on the bottom, you can see the first response by the client, where it selects one of the mechanisms and sends the first client-to-server token. This stuff would all be base64-encoded, of course. Next slide, please.
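The negotiation just described can be sketched as follows. Note this is an illustrative sketch only: the header framing and parameter names (`mech`, `c2s`) are assumptions chosen for exposition, not the draft's actual syntax, and the mechanism names and client-first message are made-up example values.

```python
# Sketch of the exchange described above: the server offers a list of
# SASL mechanisms, the client selects one it supports and sends its first
# client-to-server token, base64-encoded, in the Authorization header.
# Header framing and parameter names are illustrative, not normative.

import base64

def pick_mechanism(offered, supported):
    """Pick the first offered mechanism (server preference order) that
    the client also supports."""
    for mech in offered:
        if mech in supported:
            return mech
    raise ValueError("no mutually supported SASL mechanism")

def authorization_header(mech, c2s_token):
    """Frame the chosen mechanism and the base64-encoded first
    client-to-server token."""
    encoded = base64.b64encode(c2s_token).decode("ascii")
    return f'SASL mech={mech}, c2s="{encoded}"'

offered = ["SCRAM-SHA-256", "SCRAM-SHA-1", "PLAIN"]
mech = pick_mechanism(offered, ["SCRAM-SHA-256", "PLAIN"])
header = authorization_header(mech, b"n,,n=joe,r=example-nonce")
```

The base64 step is what lets arbitrary binary SASL tokens ride inside the ASCII-only HTTP header fields.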
M
I covered all that. Next slide, please. Oh!
M
This is something advanced, and a use case for which we are very much interested in this: to use SASL over HTTP, arrive at a web server, and then continue to a backend using Diameter, and actually turn back to the client's own domain, where he runs his own identity provider, so that he doesn't need to store an account on every individual web server. He can actually just walk up to a web server, log on with credentials that he himself manages, and end up being authenticated, because the web server is told: this is joe at example.com.
M
You can proceed; you can assume that this is a secure identity. And we have this all working; I mean, this is the stuff I love to do, about identity and crypto, and again, HTTP would be a driver for making this possible. Next slide, please.
M
So, we've implemented this stuff. It's part of Apache; well, it's a module for Apache that we designed. There's an extension for nginx, there's a plugin for Firefox, that sort of stuff, and we've been doing part of this work as an NGI Pointer project for the European Union, because it's also more in line with the GDPR than the few centralized silos that now take control over web logon, basically. So we think it adds something, yeah.
M
There are more slides after this one, detailing the specifications and code block by block, so go ahead and have a look.
M
I would really like this group to accept the SASL proposal, or work on it, or whatever; I'm quite willing to help with that, of course, or be active in it. But I'd like your questions and opinions, please.
T
A
The answer to that has varied in the past. I wouldn't worry too much about that, but we have worked on authentication schemes in the past, and when we do so, we do coordinate with security quite tightly, yeah. This.
M
Well, this was sent to you by the DISPATCH group, so that's right, to you.
J
Martin Thomson. So, I think this is probably the right place to talk about this sort of thing, because I think just using something like SASL is fairly straightforward, in the sense that you've got messages that are well defined.
J
Maybe we need to learn a few things if we were going to do something like this, but this group can integrate these things, and the integration into HTTP is probably the most interesting part of any work like this. The question about implementer interest is something that I can't really answer at this point; we have no interest in doing this. The question I have, however, that relates to that is: to what extent is it possible, through, for instance, browser APIs, to simply.
J
Do this in user space, in a web page.
M
J
M
J
Yes, okay: is it possible to build this just within a web page, either using service workers or straight-up fetch API JavaScript?
J
Understand that I'm trying to assess whether the self-service capability is there, because that brings me to the next question, or suggestion, perhaps: if it is possible to drive one of these things in a self-service fashion, then that gives you an opportunity to demonstrate utility without necessarily requiring everyone in the ecosystem to implement something.
J
Integrating this into browsers, for instance, would be quite a challenge, because we'd have to do not just the work here, but the work in Fetch and other places, in order to get that to work. But if you can just go over the top and demonstrate that it has utility, then you can build momentum for the effort; we can recognize that, and we can perhaps then systematize it in those specifications without having to.
J
Grapple with the deployment and implementer-interest questions, yeah.
M
And that could be an application that does RESTful things, for example, and then uses it, perhaps in a Python fashion or something, yeah. Follow-up: Alexey, SASL enthusiast.
D
J
I'm not sure; I'm not certain. I think there's privileged information that browsers will put in those header fields in certain contexts, but I don't know for certain whether or not you can, from, say, JavaScript, just set the values, if you know the values that you want to set in them, right. So, typically, in the browser context, when you make a credentialed request, which is what Fetch refers to it as, you ask for a credentialed request, and you're.
J
Really, what you're asking for is access to credentials that the browser holds in some store that you don't have access to by ordinary means. You can't simply get the credentials out and use them yourself; you're asking the browser to access its store and put them in there, in those contexts.
J
There are constraints, then, on what you can see from the fetch API, in terms of the request and, ultimately, the responses you get back, but I think it may still be possible to set the values explicitly yourself, based on values that you, as a website, know. Now, don't quote me on that, because I'd have to check the spec, and every time I look at that spec my little brain explodes, and I'd probably want to ask someone who's more intimate with it. Thank you.
A
Thank you. And then, personally, I think browsers are one interesting part; I'd say another is client libraries like curl, for example. Daniel's not here, but that's another obvious place to go. We're at time, but it sounds like this is just a continuing discussion, and I think Tommy and I will have a chat; we might have some more discussion on the lists.
D
A
A broad effort in the working group; the question is whether it would be productive to have a focused effort to get this produced as a spec, or whether we should wait a bit longer. There seems to be demand, so, well, Tommy and I will have a chat and we'll be in touch; we'll figure out next steps, I think, yeah. Thank you. Thank you.
A
Alexey was there.
C
No, I think that I agree with your summary of that, yeah. Okay, good.
D
A
Right, so we'll take those action items and get back to folks and get back to the list, and hopefully we will see folks in Yokohama.