From YouTube: IETF-HTTPBIS-20221005-2100
Description: HTTPBIS meeting session at IETF, 2022/10/05 21:00
https://datatracker.ietf.org/meeting//proceedings/
D: I've kind of lost my voice, so if you can do it.
B: Okay, this is the first time we're using Meetecho, so it's a little bit disconcerting, but we'll see if we can get through it.
B: So this is a meeting. We may not use all of our allotted time, but we'll see how it goes. I think, for Tommy and I, the main purpose of this meeting was just to do a little prep work and to make sure that we're in good places with all the drafts, ready for more involved discussions and actions when we're all in London, or participating remotely in London. So it might just be status updates.
B: We might talk about issues, but mostly we want to make sure that, if there's anything blocking progress on the drafts that we have, we can unblock them if at all possible.
B: So for this meeting, let's get through the fun bits. This is an IETF meeting. That means that we operate under the Note Well policy, which I'm going to attempt to put up on the screen right now. Let's see how that goes.
B: Oh, I'm going to have to use System Preferences. Okay, so if you're not familiar with the Note Well, I'd encourage you to go to your favorite web search engine and search for "IETF Note Well". It's the set of policies that we operate under regarding things like intellectual property, harassment, code of conduct, working group processes and so forth. So if you're not familiar with those, you should become so as soon as possible.
B: Yeah, that's not going to work right now. Okay, we need a scribe for the meeting. Can anyone volunteer to do that?
B: I realize that's an interesting question: have we fully transitioned over to Zulip, or is Jabber still working? I have a feeling that Jabber is not still working.
B: Oh fantastic, thank you Martin! If you can do that and send us the notes: there's a page in the agenda for minutes on the notes server. That's the best place to do it if you can; if not, send them to us afterwards. Likewise, in the minutes there's also a blue sheet that everyone can sign, or actually, I think that's automatic now that we're using Meetecho, isn't it, so that may not be necessary.
B: So the agenda for today: we've got resumable uploads, signatures, the QUERY method, client search, Alt-Svc and Origin in HTTP/3.
D: I am not, but I did just upload the slides as a PDF, so whoever wants to present them can present them directly from Meetecho by asking to share pre-loaded slides.
G: The draft has just recently been adopted, so there are still a lot of open, fundamental questions about how to proceed with this. As we go through these open issues, you will see that most of them are really basic things that we need to sort out, to get everyone on the same page, before continuing to the finer details.
G: What are resumable uploads? The idea is basically to allow transfer of bigger objects, which can be interrupted at any time, voluntarily or involuntarily through connection issues, and then be resumed without having to retransmit all of the previously transmitted data. I think most of us will be comfortable with this concept. So that's where we are: the draft got accepted, and now we have to sort through all of these issues.
G: These are the three most pressing issues, in our opinion. I will go through them step by step, introduce each a bit, and then maybe get a bit of feedback, so that you know what the current status is. So let's start with the first: the use of server-generated upload URIs. Next slide, please. The issue about upload identifiers is basically the most basic and fundamental one.
G: The idea is: if the client wants to resume an upload, it needs some way to communicate to the server which upload we are talking about, essentially an identifier. The current draft uses the concept of a client-generated upload token for this. I will go more into this on the next slide, but it has raised many concerns, for valid reasons, so this is the most important one. As a comparison:
G: There are other methods, of course, like using server-generated upload URLs, and these are also used in other protocols for resumable uploads. For example, the tus protocol, which is something we have also worked on before, is an open source protocol for resumable uploads, and it has chosen a different approach. I will go into what that means in more detail on the next slide, please.
G: Right, so the client-generated upload token means that before the client starts the upload, and basically before it contacts the server with any intention of doing this upload, it generates a random token. This can, for example, just be a UUID; it can also be something else. Then this token is included in every request that is sent to the server regarding this resumable upload, for example in a header: in the current draft we use the Upload-Token header. The server then always knows which upload we are talking about.
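A rough sketch of the token scheme being described, for illustration only: Upload-Token is the header name mentioned for the draft, while the offset header name and everything else here are hypothetical, not taken from the draft.

```python
import uuid

def upload_headers(token: str, offset: int) -> dict:
    """Headers a client would attach to every request belonging to one
    resumable upload under the client-generated-token approach.
    'Upload-Offset' is an illustrative name, not from the draft."""
    return {"Upload-Token": token, "Upload-Offset": str(offset)}

# The client mints the identifier itself, so no extra round trip is
# needed: the very first request can already carry data.
token = str(uuid.uuid4())
first_request = upload_headers(token, 0)

# After an interruption, resumption reuses the same token at a new offset.
resumed_request = upload_headers(token, 1_048_576)
```

Because the client always knows the token, it can resume from any state; the trade-off, discussed below, is that the server no longer controls uniqueness.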
G: There are many benefits, but also big disadvantages to it. One benefit is, of course, that no additional request is needed to get the upload started. The client can just send the request, include this upload token, and it can already include some data with it. It can basically get started from the ground up; there's no round trip needed, and this is, of course, really great for reducing latency and for building quite a performant system.
G: Another advantage is that the upload can be resumed in any state. We don't have to wait for any identifier from the server, because if we ask for such an identifier, the response can get lost, and if we don't know the identifier, we can't resume the upload: we don't know what to talk about; we can't really communicate about it. So this is also a nice benefit of this approach: we can resume in any state, because the client always knows this token, so it can always tell the server which upload it is talking about.
G: Another, somewhat smaller, advantage is that this upload token can also be used to trace the upload through the system. If you have proxies or different hierarchies of servers, you can always use this upload token as a kind of identifier for which object a given request is handling.
G: But of course, there were many concerns raised about this, because it takes away the server's responsibility for generating the identifier. The identifier is generated by the client, and the server must deal with it, meaning there are possible collisions between tokens: even if they are generated randomly, there may be collisions between different clients.
G: Also, it kind of breaks one of the fundamental mechanisms of HTTP, because the identification is not done in the URL or the URI, but in a header, which is, of course, a very controversial thing to do, I would say. So there are these valid concerns. However, the draft as adopted right now still uses these client-generated tokens. On the next slide we will see an alternative to this.
G: What we call server-generated upload URLs. This is basically the opposite of the current approach shown before. In this one, the client sends an additional request before the upload is started, asking the server to create an upload resource. That's basically: "Hey, I want to upload X amount of bytes, please let me know where I should send the data." The server then responds with a URI, maybe in the Location header, pointing to a resource where this upload can then be performed.
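A toy model of that creation exchange, purely for illustration: the status code, URL layout, and class names are assumptions, not something the draft specifies.

```python
import itertools

class UploadServer:
    """Sketch of the server-generated-URL alternative: a creation
    request declares the upload, and the server answers with a URI
    (as it would in a Location header) that later requests target."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self.uploads = {}

    def create_upload(self, declared_length: int):
        # The server controls the identifier, so it can guarantee
        # uniqueness and even encode routing hints into the path.
        url = f"/uploads/{next(self._next_id)}"
        self.uploads[url] = {"length": declared_length, "received": 0}
        return 201, {"Location": url}

server = UploadServer()
status, headers = server.create_upload(10_000_000)  # "I want to upload X bytes"
upload_url = headers["Location"]                    # "send the data here"
```

The cost of this shape, as discussed below, is the extra round trip before any data flows, and the awkward state if the creation request itself fails.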
G: This response can be a 200 response; it can be an intermediate 100 response. It doesn't really matter what the format is, but the server gives information back to the client about where it should redirect the actual uploads. The benefit, of course, is that the server is in full control of this identifier, meaning it can ensure it's unique, and it can encode additional information into it, like which server to direct it to. And, of course, this entire upload creation fits nicely into the typical scheme that you see with HTTP.
G: You know, you have requests and then responses, everything is identified in the URI, which fits nicely into a lot of the structure that we have. But of course, there are also disadvantages.
G: These disadvantages were the reasons why we originally chose to go with client-generated tokens: you have this additional round trip for the first request creating the upload resource, and you can't really transfer data in it, unless you want some workaround for this. That is maybe not really helpful, especially if you also want to enable resumable uploads for smaller files, where you still have to handle this additional round trip.
G: Another problem is: if this original creation request fails, the client is left in a bit of a weird state, because it doesn't know how to contact the server again; it has no identifier to talk to the server with. And, in general, an upload creation request is not retryable; it's not even idempotent.
G: You can retry it, but some servers may deny the second creation request because a resource is already being created. So this opens up another set of issues. Maybe these disadvantages are worth handling, maybe we just have to accept them, but these are the disadvantages of this approach. Just to emphasize: this is not yet what we have written in the draft. This is basically an alternative. It is, for example, used in the tus protocol and also in many production services, because it just fits so nicely into the schema.
G: This is basically the topic of upload identifiers. This is, in our opinion, the most fundamental question, so any feedback on it is highly appreciated, because it is basically what will allow us to build the rest of the protocol. Okay, enough about this issue. Please go to the next slide, where we talk about another topic, and that is feature detection. What we basically mean by feature detection is: can we make...
G: Martin says that with 1xx, this round trip is in parallel with the upload. This is true. This is a point we wanted to talk about later: relying on 1xx responses is something that we also have to think about, because it basically means that you cannot implement it in current browsers without browsers modifying their APIs.
G: It's something we have to see if we want to accept or not. "Why can't the initial request start streaming the upload as well?" Yes, that would be a possibility.
G: I would say so, yes. This is not something that we have discussed so far. So far it has been more about which approach will be the only correct one, but this is, of course, a really good point: maybe it's worth keeping both approaches.
C: I don't know how to use the queue, so maybe I'll remind everyone: you put your hand up, and then you can talk. I think that "why not do both" is kind of a failure case. That's what happens when we fail to work through the reasons for each of the different alternatives and understand what sort of constraints are on the problem.
C: The client-side thing might be better addressed by having, for instance, different generated tokens or something like that, just for the purposes of tracing. So I'm not sure that this is so clearly a "why not both" situation.
B: Yeah, I think we'd have to do the work. I'm just reminded that we have different mechanisms in HTTP, like PUT, and then, you know, POST where you get a Location back with a 201, because there are distinct use cases and they need different mechanisms. I'm not sure that we're in that state here; we might be.
C: Yeah, the reason I say that is that part of the reasoning I'm seeing for the client-side one often comes back to being able to do something within that first round trip, and that to me doesn't really gel particularly well with the idea that you've got a long-lived and potentially resumable upload. If something can complete within one round trip, then great; why build all this machinery?
G: Yeah, maybe on this point: in a few situations it is nice to have one API for uploads that works for small files as well as for bigger files.
G
So
this
is
a
really
good
point.
That's
saying
that
it's
worth
considering
big
files
where
it's
worth
to
do
an
additional
round
trip,
because
if
you
upload
a
large
video
one
additional
round
trip
may
not
be
that
problematic.
But
if
you
talk,
for
example,
about
smaller
images,
one
additional
round
trip
could
basically
just
double
the
entire
upload
duration
and
in
systems.
G
It
might
be
interesting
to
have
only
one
API
for
accepting
all
files,
so
it
would
be
helpful
if
this
resume
will
upload
interface
will
be
constructed
in
just
a
way
that
it
actually
works
for
different
sizes.
G: Great. If there's nothing else on this for now, I think we can continue with the next slide. Yes.
G: Talking about feature detection: what we mean by this is that there has been interest in integrating resumable uploads into the HTTP stacks offered by platforms, for example directly into browsers, or the HTTP stack available on mobile platforms, in such a way that they can transparently upgrade a file upload to resumable if they know that the server supports it. So, without the developer explicitly enabling it, the stack would upgrade a file upload to resumable if it sees that the server also supports it.
G: Of course, that requires that the client somehow discovers that the server supports it. The current approach we thought about in the draft is that the client indicates in the request that it is interested in resumable uploads, for example using a Prefer header, or, in the current draft, the Upload-Token header (but that might or might not survive, depending on the outcome of the first issue), and the server then responds with an intermediate response indicating: yes, I do support resumable uploads.
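A minimal sketch of that negotiation, under stated assumptions: the Prefer token name, the return shape, and the function itself are all hypothetical; the draft has not settled on a mechanism.

```python
def negotiate(request_headers: dict, server_supports: bool) -> dict:
    """Feature-detection sketch: the client signals interest (here via
    a hypothetical Prefer member), a supporting server confirms, and
    an unaware server simply ignores the unknown signal, leaving a
    traditional, non-resumable upload."""
    interested = "resumable-upload" in request_headers.get("Prefer", "")
    if interested and server_supports:
        # In the approach described above, this confirmation would
        # arrive as an intermediate (1xx) response.
        return {"resumable": True}
    return {"resumable": False}  # backwards-compatible fallback

old_server = negotiate({"Prefer": "resumable-upload"}, server_supports=False)
new_server = negotiate({"Prefer": "resumable-upload"}, server_supports=True)
```

The fallback branch is the backwards-compatibility question raised next: if no confirmation arrives (or it is lost), the client must treat the transfer as non-resumable.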
G: Please go ahead and use this API. If no such response is received by the client, it would then have to assume that this is a traditional upload, which it cannot resume. Of course, this also brings up other questions: if such a response gets lost on the way, how would the client react? Is such a method really backwards compatible, or could it run into issues if the server doesn't know anything about resumable uploads?
G: This, of course, brings up a whole other set of questions. And this is only one approach so far; there might be different alternatives. This idea might also be discarded altogether, if we say: okay, no, we don't want to transparently upgrade file uploads to resumable uploads; we want developers to explicitly opt into this mechanism. So whether we actually want to have it or not is also worth discussing.
G
Yeah,
that's
about
this
issue.
Is
there
any
feedback
right
now.
G
Yeah
koyong
says
we
have
a
desire
to
support
it
in
the
HTTP
client,
Library
itself,
transparently,
upgrading
all
uploads
into
reasonable
uploads.
We
hope
that
it
had
no
adds
no
overhead
and
requires
no
opting
in
so
we
can
do
it
for
all
uploads.
This
is
basically
what
I
was
talking
about.
Some
there's
interest
into
doing
this,
but
it's
also
a
question:
if
there's
enough
interest
into
doing
this,.
G: If there's nothing else, maybe let's continue to the next slide, because it raises another interesting point: basically, browser compatibility. The question is: can we make these resumable uploads work in such a way that they are compatible with existing browser implementations?
G: For example, if we rely on intermediate responses, we run into issues, because the current fetch API in browsers does not expose this information; it is not accessible to developers. That means we would first have to wait for browser vendors to implement and standardize such an interface, and this might take a long time. It might actually be a problem for adopting such a solution.
G
This
may
not
be
a
problem
if
we
say
as
before
that
we
allow
multiple
approaches.
In
such
a
case,
the
client
would
have
to
select
the
one
approach
which
works
on
its
platform,
but
only
relying
on
approaches
which
would
not
work
with
current
browser.
Implementations
is
problematic
in
our
minds,
because
it
just
is
a
big
barrier
to
to
adoption.
Of
course,
this
is
like
only
talking
about
browsers,
I
think
on
most
mobile
platforms.
G
For
example,
you
can
ship
your
own
HTTP
Stacks,
so
this
is
a
lot
of
concern
for
these
platforms,
but
for
browsers
it
is
and
there's
of
course,
a
big
Target
for
resumable
uploads.
D
Do
have
a
couple
people
in
the
queue
Oh
Martin
and.
C: My video's not mirrored, wonderful. So the challenge, I think, is: we have two options, both of which aren't particularly good from our perspective, and someone's going to need to do some work either way. In order to teach the fetch API about the requisite 1xx responses, you would have to define new APIs, new procedures within fetch; that's non-trivial, just putting that out there. We haven't done 1xx in the past. It's potentially possible to do 1xx generically, but then you're introducing some other interactions...
C
With
the
existing
one,
XX
features
like
100
continue
and
103
early
hints,
which,
both
of
which
are
supported
in
Fetch,
but
supported
as
an
internal
function.
The
alternative
is
to
Define
1X.
The
new
one
XX
is
a
new,
a
thing
that
is
understood
by
fetch
such
that
the
browser
itself
would
then
be
responsible
for
doing
the
transparent,
upload
resumption
and
all
those
sorts
of
other
things
again.
That
is
entirely
possible,
but
it
still
requires
changes
to
fetch
and
I
think
either
way.
A: Hello, yeah, I created that issue. I do agree there's a lot of stuff around the fetch API; I'd say that's the web platform more than browsers, and there's lots of stuff that you could potentially do there. The issue I created was more after discussion with some people, giving them a very high-level view of what the feature is like.
A
Wouldn't
it
be
cool
if
things
worked
like,
however,
however,
a
web
page
presents
to
the
users
some
way
to
upload
things,
it's
just
kind
of
handled
internally
by
the
web
brows
and
you
don't
need
to
deal
with
it.
If,
if
a
web
browser
doesn't
Implement
such
a
feature,
it
just
behaves
like
it
used
to
and
it
might
fail-
and
you
might
just
have
to
manually
and
try
you
try
and
you
know,
keep
failing
until
you
get
onto
a
better
network.
A
But
if
that
wasn't
the
case,
it
would
just
work
magically
a
bit
like
making
my
downloads
more
reliable.
It's
very
speculative
I,
don't
appreciate
all
of
the
the
work
or
the
changes
that
are
required.
I
understand
that
there
it's
more
than
just
writing
this
back,
but
yes,
I,
think,
like
20's
point
I,
think
there's
a
difference
between
trying
to
expose
all
of
the
Machinery
such
that
JavaScript
folks
can
fill
in
the
gaps
versus
working
with
with
vendors.
A
To
try
and
do
this
inside
and
and
sometimes
doing,
the
thing
inside
could
be
easier
than
having
to
design.
You
know
a
well-formed,
abstract,
General
API
that
can
accommodate
all
cases.
That
sounds
like
a
big
amorphous
mess.
That's
going
to
be
hard
for
people
to
get
their
heads
around
versus.
You
know
like
100,
expectors
kind
of
handled
today,
and
no
one
worries
about
it,
but
we're
all
worried
about
any
other
status
code.
Why?
Why
is
that?
B
Yeah
and
for
my
part,
I
very
much
agree
with
Lucas.
There
I
think
it's
possible
to
oversell
how
much
work
it's
going
to
be
getting
something
into
fetch.
You
know.
Yes,
it
needs
to
be
detailed
oriented,
they
specify
everything
algorithmically,
but
but
historically
the
hardest
part
of
getting
stuff
into
Fetch
and
similar.
What
working
group
specs
is
getting
browsers
to
commit
to
interest
and
implementation.
B
So
if
you
can
get
one
or
especially
two
browsers
to
commit
to
it,
I
think
you've
got
a
pretty
decent
chance.
As
Martin
mentioned,
we
already
have
some
1xx
status
codes
in
Fetch.
They
are
one-offs.
I
would
shy
away
from
designing
or
trying
to
design
a
generic
1xx
API
in
Fetch
I
think
you'd
get
a
lot
of
scrutiny
there
for
for
security
and
privacy
and
other
concerns.
But
putting
a
bespoke
Handler
in
in
for
for
a
particular
mechanism
is,
is
much
more
achievable
and
again,
as
Lucas
says.
B
That's
if,
if
you
know
you
need
to
do
that,
if
you
need
to
expose
it
to
JavaScript,
so
I
wouldn't
I
I,
wouldn't
say
you
know
it's
an
impossible
or
it's
an
incredibly
daunting
task.
It's
just
you
know:
you're
gonna
need
to
do
some
spec
work,
Anna's
very
willing
to
work
with
people
the
other.
What
working
group
folks
are
too,
and
of
course
you
need
some
implementary
interest,
that
that
is
what
they
want
to
say.
H
Yeah
hello,
so.
A
A
We
already
have
the
people
adopting
task
31
and
they
have
their
service
and
clients
working,
and
they
just
want
to
upgrade
to
this
new
protocol,
so
I
believe
the
current
craft
have
this
feature,
detection
being
optional,
so
that
it
does
not
depend
on
the
list.
500
responses.
A
This
really
I
believe
that
this
provides
support
for
both
use
cases
like
for
the
use
case
of
a
browser.
We
depend
on
100.
for
those
other
use
cases,
you.
G: Yeah, thank you for this feedback. It's a nice perspective, especially on the fetch API: that, of course, we don't have to make a generic implementation, but, if we decide that way, just another detail that should be handled internally by the fetch API.
G: There are, of course, other issues besides the three big ones I mentioned just now; those three are more important in the sense that they basically lay the fundamental work that we can then build upon. The other issues which are currently open talk about prioritization of concurrent uploads: if we can interrupt and resume uploads at any time, maybe it's interesting for vendors to prioritize certain uploads and prefer them over others.
G
Of
course,
this
is
a
very
hypothetical
question
that
is
not
too
urgent
right
now,
I
would
say
the
second
issue,
and-
and
please
forgive
me-
I
messed
up
these
two
issue,
numbers
they're,
not
the
same.
Actually,
I
I
did
the
mistake.
There,
the
other
issue
is
basically
talk
about
a
header
called
uploading
complete.
G
This
header
is
intended
to
indicate
for
the
server
that
an
upload
is
not
finished
in
one
request,
but
there
will
be
subsequent
requests,
and
there
are
some
concerns
that
this
may
not
work
with
other
HTTP
mechanisms,
but
as
this
Heather
might
not
survive
depending
on
how
we
decide
regarding
our
first
issue,
I
put
this
yet
a
bit
to
the
side,
because
this
is
not
too
urgent
right
now,
but
yeah.
G
This
was
a
brief
overview
of
the
current
state
of
the
resumable
uploads
draft,
especially
highlighting
what
our
current
questions
are
and
thank
you
all
for
all
of
this
feedback
already
and
if
there's
any
other
feedback,
please
let
us
know.
Thank
you.
B
Thank
you
very
much
that
was,
that
was
I.
Think
really
useful.
Look
forward
to
this
draft
progressing
any
other
comments
on
this
one.
Or
can
we
move
on.
B
Could
this
overlap
with
item
potency
key
somewhat,
so
there's
a
draft
in
the
HB
API
working
group
called
item
potency
key,
which
is,
is
basically
a
way
for
a
server
to
realize
that
a
post
message,
for
example,
has
been
sent
before
and
so
to
have
exactly
what
semantics.
For
that
request,
my
instinct
is
that
they're
adjacent,
but
not
the
same,
but
it
might
be
interesting,
at
least
for
for
the
authors
to
be
aware
of
it.
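For context, the deduplication idea behind Idempotency-Key can be sketched as follows; this is an illustration of the general concept only, not the HTTPAPI draft's exact semantics, and the status codes and paths are assumptions.

```python
class IdempotentEndpoint:
    """Concept sketch: the server remembers idempotency keys it has
    already processed, so a retried POST carrying the same key is
    recognized and answered with the stored result instead of
    creating a second resource."""

    def __init__(self):
        self._results = {}

    def post(self, idempotency_key: str, body: bytes):
        if idempotency_key in self._results:
            # Duplicate of an earlier request: replay the outcome.
            return 200, self._results[idempotency_key]
        resource = f"/resources/{len(self._results) + 1}"
        self._results[idempotency_key] = resource
        return 201, resource

endpoint = IdempotentEndpoint()
first = endpoint.post("key-abc", b"payload")
retry = endpoint.post("key-abc", b"payload")  # network retry, same key
```

This is adjacent to resumable uploads in that both give retries a stable identity, but, as noted, they are not the same mechanism.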
B
Yeah
I'll
make
sure
you'll
link
to
it
and
Julian
remarks.
It'll
be
good
to
collect
information
about
1X
support
and
client
libraries
on
the
wiki
yeah
we've
attempted
that
before
maybe
we
should
just
try
and
continue
to
collect
that
information,
because
it
it's
good
to
know
about
I
think
as
I
understand
it
we're
getting
some
bugs
worked
out
in
a
few
of
the
remaining
problematic
implementations.
D: Looks like it's broadcasting? Yes, okay, fantastic. We don't have any slides for this, so a quick update: Signatures is now in working group last call, which means the document is completely perfect and it will not change at all before it becomes an RFC.
D
Obviously,
for
real,
though
this
is
a
long
and
fairly
complex
document,
there's
a
lot
of
moving
Parts
here,
so
we
need
everybody's
eyes
on
it,
especially
from
from
Annabelle,
in
my
perspective,
especially
the
really
deep
HTTP
experts
in
this
group
to
make
sure
that
we
are
not
using
the
wrong
HTTP
term
in
the
wrong
way
in
the
wrong
place,
because
that's
been
a
lot
of
as
as
folks
who
have
read
previous
drafts
of
this.
D
That's
that's
been
a
lot
of
what
Annabelle
and
I
have
needed
education
on
in
the
past.
So
you
know
ultimately,
all
eyes
on
this.
That
we
can
get
would
be
deeply
appreciated.
D
There's
an
extensive
security
consideration
section,
as
you
would
expect,
with
a
document
like
this.
We
recently
tried
to
bin
all
of
those
into
sort
of
major
categories.
If
we're
obviously
missing
anything
or
any
aspects
of
the
considerations,
you
know
please
raise
those
as
issues
as
well,
so
that
we
can
continue
to
sort
of
Polish
this
on
its
on
its
way
to
the
next
stage
of
the
process.
D
We
do
have
multiple
implementations
of
this
across
multiple
languages
and
no
they're,
not
all
for
me,
though,
a
few
of
them
are,
and
We've
also
been
starting
to
see
implementers
of
previous
attempts
at
HTTP
signatures,
so
the
Cabbage
giraffe,
for
example,
we've
seen
a
couple
of
cases
of
groups
using
or
adapting
a
cabbage
draft
implementation
into
the
into
the
current
HTTP
message:
signatures
draft
formats
and
structures,
and
things
like
that.
D
So
that's
been
really
good
to
see
because
there
was
you
know,
there's
a
lot
of
inertia
behind
sort
of
those
Community
drafts,
the
the
IDS
that
came
before
this
working
group
effort.
So
it's
really
good
to
see
that
that's
starting
to
really
move
forward
regardless.
We
think
that
the
document
is
is
good
enough
to
make
the
next
stage
right
now
and
we
welcome
all
feedback.
B
So,
just
a
heads
up:
if
anybody
wants
to
queue
up,
please
do
but
I
I've
heard
some
some
folks
wondering
what
the
appropriate
level
of
review
of
major
new
security
mechanisms
is
in
the
iitf.
Currently,
if
you
look
at
how
TLS
was
reviewed,
TLS
1.3
was
reviewed
before
it
went
out
whether
we
need
to
put
the
kind
of
call
out
to
get
some
some.
B
You
know
academic
looks
at
it
some
some
security
researchers
looking
at
it
and
some
verification
of
of
what's
going
on
here,
I
I
expect
we'll
probably
get
some
some
people
making
comments
to
that
effect.
Sometime
soon.
So,
okay,
really
just
a
heads
up
for
folks
Justin
and
for
everyone
else-
the
working
group
to
start
thinking
about
what
they
think
about
that.
B
What
the
appropriate
levels
are,
whether
we
need
to
have
a
more
extended
working
group
last
call
or
appeared
after
working
group
last
call
to
give
folks
a
chance
to
get
that
much
more
broad
review
and
and
what
what
people
are
comfortable
with
I
know.
We've
had
you
know
this
document
in
process
for
a
long
time,
and
a
lot
of
folks
are
impatient
to
get
it
out
the
door,
but
we
also
want
to
make
sure
we
do
the
right
thing
there.
D
To
to
that
effect,
Mark
is
there
a
plan
to
formally
engage
the
security
directorate
ahead
of
the
iesg
or
IAB
review
stages?
Again
we.
B
Already
had
an
earlier
review
from
them
right,
we'll
request,
another
review
and
I
think
that's
probably
a
good
idea.
I.
B: But yeah, as well, sure.
D
Yeah
no
to
to
address
Christopher
Wood's
comment
in
the
chat
yeah.
This
is
not
a
replacement
for
sort
of
the.
You
know
the
deeper
security
analysis
that
Mark
was
talking
about
this
is
this
would
be
in
addition
to
it
I
I
do
most
of
my
work
is
in
the
security
area
and
there's
a
there
are
a
lot
of
folks
that
do
formal
analysis
of
protocols,
especially
multi-party
security
protocols,
like
this
one.
D
That
would
probably
have
some
good
things
to
say,
or
some
good
feedback
to
give
on
this
document,
whether
they
say
good
things
about
it
or
not.
That's
you
know
that's
up
to
how
the
analysis
goes.
D
Home
I
can
reach
out
to
some
folks
that
I
know
that
are
that
have
done
work
in
the
in
the
oauth
and
app
spaces
in
the
past
and
I
know
at
least
one
of
the
core
TLS
folks
yarn
Schaefer's
been
following
the
HB
signatures.
Work
pretty
closely
with
he's,
got
his
own
implementation
as
well,
so
he
might
be
somebody
that
we
can.
We
can
try
to
tap
for
contacts
in
a
similar
way
that
TLS
did
I.
D
Of
the
stack
and-
and
that
comes
with
a
lot
of
benefits
and
but
I,
you
know
I'm
I'm
not
opposed
to
making
sure
that
the
right
academics
get
their
teeth
into
this.
B
Oh
sorry,
from
my
perspective,
I'd
love
to
do
Clarity
on
on.
You
know
how
we
we
decide
what
the
appropriate
level
of
review
is.
You
know
when
is
a
a
formal
workshop
and
and
that
kind
of
level
of
effort
necessary
and
Justified,
and
what
are
the
gradation
below
that
and
so
forth,
and
so
on,
so
that
that's
a
bigger
discussion
that
we
should
probably
kick
off,
but
yeah
Chris
sure.
C
Can
you
hear
me?
Yes,
hey
yeah
thanks
thanks
for
the
update
and
I
I
wanted
to
just
reiterate
sort
of
what
Mark
was
saying.
I
was
probably
the
the
one
of
the
people
that
had
formed
his
comment.
C
Just
now,
I
was
I
was
kind
of
surprised
to
see
this
in
last
call
without
seeing
sort
of
some
analysis
to
the
extent
that
TLS
received
during
its
publication
process,
especially
considering
the
complexity
involved
in
the
spec
I
I
recognize
that
it's
like
a
different
layer
in
the
stack
like
it's
not
like
a
key
exchange
protocol.
It's
some
like,
like
just
a
digital
signature
thing,
but
I,
don't
think
these
days.
C
That
absolves
it
of
any
like
additional
review
or
at
least
review
that's
comparable
to
what,
like
other
ITF
security
protocols
receive.
So
I
would
suggest
that
we
like
Park
this.
Even
if,
like
the
word
group,
less
call
comments
are
positive
until
we
receive
that
analysis.
C
I,
don't
know,
I
mean
to
your
part.
Point
mark
I,
don't
know
like
this
seems
like
a
a
question,
someone
larger
than
this
particular
document
and
I
think
maybe
sector
like
and
other
security
area.
People
would
like
to
engage
on
figuring
out
what
is
like
the
right
bar.
We
need
to
hold
drafts
accountable
to
these
days,
but
just
like,
given
that
the
the
number
of
security
considerations
that
are
in
the
draft,
it
I
think
it.
C
That
alone
suggests
that
this
needs
some
sort
of
real
formal
analysis
and,
if
they're
photographers
or
the
Security
Experts,
that
you
that
you
referenced
Justin
are
able
to
do
that.
That
would
be
lovely
I'm
sure
there
are
other
people,
someone
in
the
the
TLs
orbit
that
you
know
are
looking
for
things
that
are
of
interest
to
people
in
the
industry
that
we
could,
you
know,
tease
them
with.
C
So
this,
like
I'm
sure
that
I'm
sure
we'll
have
no
trouble
finding
people,
so
I
I
don't
have
any
concerns
there.
I
I
do
have
concerns
about
advancing
this,
though,
in
the
absence.
With
that
analysis,
thanks.
B
Okay,
so
that
makes
sense
to
me
I
think
Tommy
and
I
will
do
some
work
in
the
background,
coordinate
with
you,
where
necessary,
Justin
and
make
sure
that
we
don't
unnecessarily
block
this
document
and
also
make
sure
it's
the
right
level
of
review.
I
think
there's
also
a
minute
discussion.
He
had
there
that
maybe
we
can.
We
can
start
in
London
get
some
better
guidance
for
groups
of
the
future.
D
Perhaps
we
should
get
one
of
the
security
ads
and
the
ad
responsible
for
this
group
in
the
room
at
the
same
time
to
figure
out
I
I,
do
think
it's
it's
a
two-part
question
what
to
do
with
this
draft,
because
it
does
kind
of
straddle
the
line
and
also
kind
of
what
that
threshold
looks
like
in
the
larger
sense
it
at
least
to
start.
B: Francesca is on leave right now, so it's Murray. Gotcha. And I think she'll be on leave through London, but we'll see, right. Yeah, and Lucas points out in chat that an early secdir review could advise on the level of further analysis that might be needed; I think that's really good feedback.
C
B
A
Certain examples and stuff like that; I was happy to help, but I didn't hear anything more, and I haven't had an opportunity to look at the draft to see if there's anything specifically there. So I'm gonna guess the answer is no and that you're happy. But in case that item fell off our agendas, I'm still happy to provide some support there. I don't know if Roberto is still here, but I'm pretty sure he'd be happy to as well. Yeah.
A
And although we're in AD review now, the digest draft is kind of stuck there for as long as it takes, I guess. So we have an opportunity there, I guess, to align anything, if there are different things. But you know, it's not like our ship has completely sailed yet. I don't anticipate any future changes to digest, though, so I've got some bandwidth. Oh, there... cheers.
D
Yeah, so we added a security consideration specifically about covering the message content under the signature, and that the way to do that is to use the digest field; and so there is a non-normative example in signatures about that. I had requested sort of, you know, an example in kind in the digest draft, saying that Digest alone doesn't protect the rest of the message in the same way the signature does, and here's how to do it.
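The mechanism being discussed (binding the message content to a signature indirectly, by signing a digest field) can be sketched roughly as follows. The header names follow the digest and signatures drafts, but the surrounding code and the `Signature-Input` value are illustrative, not the drafts' normative algorithms:

```python
import base64
import hashlib

def content_digest(body: bytes) -> str:
    """Compute a Content-Digest field value (sha-256, expressed as a
    structured-field byte sequence) for the given message content."""
    digest = hashlib.sha256(body).digest()
    return f"sha-256=:{base64.b64encode(digest).decode()}:"

body = b'{"hello": "world"}'
headers = {"Content-Digest": content_digest(body)}

# To cover the content under an HTTP message signature, the signer lists the
# digest field among the covered components (illustrative value only):
headers["Signature-Input"] = (
    'sig1=("@method" "@target-uri" "content-digest");created=1618884473'
)
```

Signing `content-digest` rather than the body itself is what lets the signature survive intermediaries that re-frame the message without changing its content.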
D
If I recall, the discussion at the time was largely that explaining that would add a lot of complexity to digest, which is fair, and that that wasn't desired at the time. If you and I want to have, you know, an offline discussion about whether or not it makes sense to bring that in, I'm happy to do that, but I think I'm pretty happy with kind of where things are between the two drafts right now.
A
I'll take the opportunity to take another look with...
A
...stuff, so leave it with me. Sounds good.
B
Okay. Well, if there's no one else: thank you for that. That's a good update, and it sounds like there are a few more things to work through and talk about, but we also have to, you know, see if any issues come in during working group last call too. Thanks for that, Justin. Next up we have cookies. Is Steven here, and can I share this? Yes, I am.
B
It's presentation view, right?
C
E
So hi, I'm Steven Bingler. I'll be going over RFC 6265bis, otherwise known as the cookie spec. This is mostly just going to be a fairly quick status update, and for anyone who was at IETF 114, a lot of this will look pretty familiar. Next slide, please.
E
So these are the changes since the -10 draft. I presented all of these at IETF 114, so I'm just going to kind of speed through them: standardized Max-Age, privacy considerations around third-party cookies, specifying that no decoding should be done, requiring ASCII for domain attributes, and a number of editorial changes. Next slide, please.
E
A note to ignore the Domain attribute in certain cases, an inverted and changed cookie-octet definition, a note on cookie serialization case insensitivity, another note about not designing servers that send invalid cookies, a warning not to send nameless cookies, and an improvement to the Max-Age attribute parsing. Next slide, please.
E
We are also comparing the cookie name prefixes case-insensitively; this is the __Host- prefix and the __Secure- version of that. It turns out that some servers will process cookie names case-insensitively, because of course they will, and so those servers were setting these prefixes without actually getting any of the guarantees behind them.
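A rough sketch of the case-insensitive prefix check described here; this is a simplification for illustration, not the spec's parsing algorithm, and the helper names are made up:

```python
def has_host_prefix(name: str) -> bool:
    # Case-insensitive match: "__Host-", "__host-", "__HOST-" all count,
    # so a casing variant can no longer dodge the prefix requirements.
    return name.lower().startswith("__host-")

def host_prefix_ok(name: str, secure: bool, path: str, domain) -> bool:
    """Reject a __Host- cookie that lacks the attributes the prefix
    promises: Secure, Path=/, and no Domain attribute."""
    if not has_host_prefix(name):
        return True  # prefix rules don't apply to this name
    return secure and path == "/" and domain is None
```

Under the old case-sensitive match, a server setting `__HOST-id` got none of the guarantees but looked prefixed to its own case-insensitive cookie handling.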
E
These are generally malicious cookies that are attempting to impersonate a prefixed cookie: the Set-Cookie line would look like an equals sign and then a value that appears to be a valid prefixed cookie line, and when the browser sends this back to the server, it'll just send the value part, which impersonates a prefixed cookie. There's an editorial change down here as well.
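The impersonation described above can be illustrated with a simplified parser and a naive serializer (hypothetical helpers, not the spec's algorithms):

```python
def parse_set_cookie_pair(pair: str):
    """Simplified: split the first name=value pair of a Set-Cookie line."""
    name, _, value = pair.partition("=")
    return name.strip(), value.strip()

def serialize_cookie(name: str, value: str) -> str:
    # A naive serializer drops the "=" entirely for a nameless cookie...
    return value if name == "" else f"{name}={value}"

# Attacker-controlled Set-Cookie line: nameless, with a value forged to
# look like a prefixed cookie pair.
name, value = parse_set_cookie_pair("=__Host-session=evil")
# ...so the Cookie header carries "__Host-session=evil", which the server
# cannot distinguish from a genuine __Host- cookie.
sent = serialize_cookie(name, value)
```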
E
So these are the current issues we have. Oh, those numbers don't add up; this says 12 open issues, with an additional 18 deferred issues. My mistake there. The most interesting ones are the ones that are currently in scope, before we've closed all open issues. So we have same-site cookies and redirects, which is: how do same-site cookies handle redirect chains across different sites? The Set-Cookie parsing algorithm should enforce more of the syntax requirements. And nameless cookies client/server inconsistencies; that last one is interesting.
E
That's similar to the nameless cookies prefix issue; it's a cluster of a couple of different issues. There is a change under review right now that's going to help rectify that one. And then, finally, this isn't in the slide deck, but I wanted to talk about some post-RFC6265bis work. We've already got some work on the horizon after the new spec is minted: Cookies Having Independent Partitioned State, or CHIPS, is working on an internet draft.
E
This is partitioned cookies, for anyone unfamiliar with that: cookies partitioned by the top-level site. And then there's work being done on splitting the cookie spec out into different relevant specs; it's being referred to as the cookie spec layering. We had a recent discussion at TPAC between a number of people on what the high-level design would look like here.
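As a rough illustration of the partitioning model (a simplification; the actual CHIPS proposal involves more than this, including an explicit Partitioned attribute on Set-Cookie):

```python
# Hypothetical partitioned cookie jar: the storage key includes the
# top-level site, so widget.example gets a distinct cookie under every
# top-level site that embeds it, instead of one cross-site cookie.
jar: dict[tuple[str, str, str], str] = {}

def set_partitioned(top_level_site: str, cookie_site: str,
                    name: str, value: str) -> None:
    jar[(top_level_site, cookie_site, name)] = value

def get_partitioned(top_level_site: str, cookie_site: str, name: str):
    return jar.get((top_level_site, cookie_site, name))

# The same embedded site sees different state under different embedders.
set_partitioned("https://a.example", "https://widget.example", "id", "A")
set_partitioned("https://b.example", "https://widget.example", "id", "B")
```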
E
That's it for my update. Are there any questions?
B
You have to unmute yourself at the top left. Okay, audio problems.
E
Yeah, I'm actually glad you asked this, because I meant to speak a little bit more to that point. So it is primarily in what we at Chrome are referring to as unspecified SameSite.
E
We apply a POST exception: cookies should not be sent on non-idempotent requests, but we found that that broke a bunch of stuff. So if a cookie is younger than two minutes and it's a POST request, we'll actually allow it through; and it's those situations that, according to the metrics that I have so far, are primarily associated with these sorts of problems. We had a discussion at TPAC over this as well, and the result was:
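The two-minute POST allowance described above ("Lax-allowing-unsafe" in Chromium terms) might be sketched like this; the function and constant names are illustrative, and this is a simplification of the real logic:

```python
from datetime import datetime, timedelta

LAX_UNSAFE_AGE_LIMIT = timedelta(minutes=2)  # Chrome's heuristic as described

def send_default_lax_cookie(method: str, cookie_created: datetime,
                            now: datetime) -> bool:
    """Should a cookie with *unspecified* SameSite (defaulted to Lax) be
    sent on a cross-site request? A sketch of the described behavior."""
    if method in ("GET", "HEAD"):
        return True  # safe methods: normal Lax behavior applies
    # Unsafe methods (e.g. POST): allowed only within the grace window,
    # to avoid breaking login flows that POST across sites right after
    # setting a cookie.
    return now - cookie_created < LAX_UNSAFE_AGE_LIMIT

t0 = datetime(2022, 10, 5, 21, 0, 0)
```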
E
We need more information. So I'm working on adding more metrics to see if we can sort of understand this use case and where this breakage is coming from a little bit better, but those are going to be delayed by a number of months.
B
E
I mean, we've had... so these nameless cookie things were security bugs that popped up. As long as no other researchers find anything interesting, I think we should be good.
B
Okay, that'll be great. And Daniel also says, for what it's worth, Mozilla is considering not implementing the Lax-by-default part of the spec, because of breakage and because partitioning proposals kind of mitigate this issue anyway. Okay.
B
That sounds good. So once we resolve these, and if there's an issue there we can discuss that as well, we'll go to working group last call and go through the IETF process. It sounds like then there's a discussion about...
B
...you know, what the next steps are. We have a number of deferred issues here; there's been, it sounds like, good discussion at TPAC and some proposals in the community. And I guess the question is, for the next revision of the cookie spec, or whatever happens to the spec: do we want to keep it here in the HTTP working group? Do we want to create a new, separate IETF working group? Do we want to do something else? That's a discussion the community can have as well. So...
B
We have the QUERY spec, which I believe has a few issues open. Julian is here but can't speak, so he's otherwise indisposed.
B
Yep, six open issues. I think the last time we discussed QUERY, we felt that we needed a bit of a push to get these issues discussed, and maybe even a miniature design team; that hasn't happened yet. So if folks are interested in working on QUERY, or helping to solve these questions, please get in contact with Julian. I'm gonna try and help out as well, and I think I'll try and get some folks working on it in London, if at all possible.
B
Any other discussion on QUERY? Anything you want to relay from chat, Julian? I hear that my mic stack is back, so I'm going to mute for a second.
B
He says it's 1am in a hotel room, on vacation. Fair call. All right, well, just let that serve as a reminder, then, about the QUERY spec: if folks want to contribute to that, please do; we'd like to get that one across the line relatively soon. Next up, client certs. Brian, are you with us?
F
Mark's email about this one sort of spurred me into jumping on... you know, I'm trying to do a little bit of work on the outstanding issues, although I think there's not much out there. There's a few things that need to be done to move it forward, the first one here being just an update to reference the new RFC 9110.
F
There are a few terminology things, but it's largely editorial; I think it's good now. "No defined client certs error mechanism for the origin": this has been a sort of recurring discussion, the question being that if the back-end origin server doesn't like the cert, for whatever reason, there's no sort of erroring mechanism that could be used to signal at the TLS layer that that's an issue. Whereas, I guess, normally, you know, like a browser negotiating mutual TLS with a traditional server...
F
If a server doesn't like it, it'll send a TLS alert and just kill the connection, and this allows the browsers to clear their cache of client certs they may be using for that connection, for that site, or whatever; all around a UI that, you know, isn't used a lot, and that I think most people consider to not work very well. I've pushed back here, saying this is really beyond...
F
...the scope of the specification to try to define that kind of mechanism; that any kind of erroring should occur by selecting appropriate content and/or returning a 403. I believe Martin sort of backed me up on this in somewhat different words, and at the last interim Tommy suggested that at least some text be added to the document that mentions that case.
F
Well, I thought it was, you know, somewhat sort of apparent. It's apparently not; I'll use the word "apparent" a few times. So there is a pull request on this that basically has one sentence saying that much... thank you, Martin... I can't remember the words, I apologize. Basically, that if the cert is... you know, access control decisions can be made...
F
...content is selected as appropriate, or with an HTTP 403 response, if the certificate is deemed unacceptable for the given context. Really just trying to follow up on Tommy's suggestion and note the particular possibilities in the response here, for any kind of access control decision or a bad cert. So, not a lot to it, but yeah.
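Under the draft's model, the origin signals rejection with ordinary HTTP semantics rather than a TLS alert, since the client's TLS connection terminates at the proxy, not the origin. A hypothetical origin-side sketch, assuming the proxy forwards the certificate in the draft's Client-Cert field as a structured-field byte sequence; the "validation" here is a stand-in for real certificate processing:

```python
def handle_request(headers: dict, acceptable: set):
    """Hypothetical origin-side check of a proxy-forwarded client cert.
    Returns (status, body); rejection is expressed in HTTP terms (a 403
    or access-controlled content), never as a TLS-layer error."""
    cert = headers.get("Client-Cert")  # e.g. ":MIIB...:" per the draft
    if cert is None:
        return 401, "client certificate required"
    token = cert.strip(":")  # stand-in for real parsing and validation
    if token not in acceptable:
        return 403, "certificate not acceptable for this resource"
    return 200, "ok"
```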
F
Hopefully this is good enough to kind of close out that issue. Martin here in the comments is saying he agrees that it's definitely not worth fixing; I agree, though it might be worth explaining.
B
F
I don't know if it was just me or everyone, but Mark, your words were a little bit muddled; I don't know if it's my connection or yours. But I think, if you have suggestions for elaborating on the text, or alternate text, I'm more than happy to take them; but I do agree with you agreeing that, in general, something more than that is out of scope.
C
I agree that this is... this just sucks. This is just how this is, and we're not attempting to solve this problem. And I'm not sure that the proxy is in any position to look at, you know, 403 responses, or what have you, and infer the difference between a 403 response about something and something that's tied to the certificate, in a way...
C
...that can provide TLS-layer signals. That's just not going to happen here, so we should just say that it just can't, or maybe can't, and leave it at that; that's enough. I think the mechanism that we have here is pretty good. I don't know that there's...
F
I do not see any suggestion yet, but I'll take a look. Thank you.
G
F
Unfortunately, I still cannot hear Mark very well, but... I'll try to get through this really quickly. So, a few revisions ago, we added an additional header to optionally convey the chain, in addition to the end-entity certificate. I did much of that: added it in there and cribbed some content from TLS about the ordering of those certificates. Ultimately, at the last interim, Martin suggested that was somewhat problematic and unnecessary, and suggested to just restate it without that and say that the order of the certificates is the same as the order in which they appear in TLS.
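The restated ordering rule might look like this on the proxy side. This assumes the chain field is a structured-field list of byte sequences, as in the draft; the serializer itself is a hypothetical sketch, not the draft's algorithm:

```python
import base64

def serialize_chain(certs_in_tls_order: list) -> str:
    """Hypothetical serializer for a forwarded certificate chain: a
    structured-field list of byte sequences, where the items keep the
    order in which the certificates appeared in the TLS handshake."""
    return ", ".join(
        ":" + base64.b64encode(der).decode() + ":"
        for der in certs_in_tls_order
    )

# DER bytes are stand-ins; a real chain carries actual certificates.
chain = serialize_chain([b"end-entity", b"intermediate", b"root"])
```

Stating only "same order as in TLS" sidesteps whether the proxy forwards the handshake chain verbatim or one it assembled itself.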
F
This tries to avoid the question of whether the certificate chain passed from the proxy to the origin is in fact the exact same as the certificates presented in the TLS handshake, or a chain that was put together by the proxy itself prior to sending it to the back end. I think it's not clear to me that either is...
F
I'm intentionally trying to be ambiguous about which it is and to allow for either, and I basically made a pull request trying to defer the ordering to Martin's suggestion here, whilst still describing that it is the chain. Yeah, it doesn't do a lot more than that: it removes the explicit talk about the ordering and the "first TLS cert" bit, and tries to clean up a few other places where the cert chain is discussed, and just either clarify, or make sure that the content around it is sufficiently ambiguous to allow for either one. And that is, as I tried to explain in the comment earlier, Martin, the reason the "may or may not contain" or "may or may not have appeared in the TLS handshake" text is there.
F
That said, I don't know; I'd be okay changing that text or removing it, because I think it still leaves the ambiguity, but that was the reasoning.
B
Hey, anybody have any further discussion, questions, comments?
H
Let me... all right. So these are the only issues we've had for a while. We have PRs out, so I would suggest that we finish reviewing them, because the PRs are relatively new; and once they've merged, then we need to ask the question about whether we're ready for last call, because we haven't had any new issues come up for a while. Or do we have implementations? Do we want to wait for implementations?
F
I'm not aware of implementations. I've always been somewhat concerned that this draft comes along a little bit too late, trying to document current practices after the fact. So I don't know what implementation status or interest is like, or is likely to be like in the future, but...
B
Yeah, I think unless we had an implementer stepping up and saying, "we'd like some time to evaluate this and do some, you know, prototyping or implementation work," then it would be indefinite, and that's not great. It's also an informational spec, not standards track, so I think the bar may be a little bit lower there. But if folks have thoughts about that, we should talk about it.
F
C
Okay,
so
I
I
think
that
the
model
might
not
text
is
a
little
problematic,
I
think
it
it
invite
questions
about
what
it
is
that
you're
doing
here,
I
prefer
to
say
something
more
along
the
line.
So
if
it
appear
that
the
certificates
appear
in
the
same
order
that
they
were
in
the
TLA
soundtrack,
unless
you,
of
course,
to
put
someone
else
there
and.
C
That covers off a whole bunch of possibilities, in terms of proxies that do their own path building, and proxies that get information from other sources, and all sorts of other things. We don't have to specify that, but there's a sort of very clear, easy path for those proxies that are just looking to implement this and pass the information on; something that covers most of what we want, rather than going "well, maybe, maybe not," or sort of directly acknowledging the maybe-not side of things by saying...
C
A
H
F
So
some
of
this
discussion,
it
began,
though,
with
the
the
chain
in
the
handshake
not
actually
being
a
chain
but
more
of
a
tree
structure
to
support
the
possibility
of
cross-signing
and
so
forth,
and
whether
that
that
would
be
conveyed
directly
in
the
chain
to
the
to
the
origin
or
whether
the
origin
would
would
assemble
what
it
needed
for
validation
to
whatever
anchor
it
had
and
then
only
pass
the
the
chain
that
it
that
it
used,
which
which
then
sort
of
opens
up
a
larger
question
and
discussion
about
who's
actually
responsible
for
validating.
H
F
A
I'll speak; I'll repeat it into the chat if I need to, if I can't speak. I can't speak authoritatively about implementer interest, but Cloudflare does have, like, you know, a header that we give to people about client certificate validation. I think, obviously, having a spec that's done, for some definition of done, can improve some of that interest, but it's kind of sort of not up to us either. I wonder if there's some way to engage the people who consume the header...
A
...to say there's this other thing coming. They might be able to push some interest and demand onto the people who would be generating that header, if that makes sense; like, who are the big customers that use client certs, and would they be interested in that? I think we've got a good population of people here who might implement the feature, but they're looking for some interest from their customers, say, as to whether they should shift or migrate or add both.
B
I see agreement from Mike that that's the situation there; it's not surprising. Okay, thank you, Brian. Try and get those last couple of PRs tweaked and in there, and have a bit more discussion, perhaps, and hopefully we'll make our way to working group last call, it sounds like.
B
Which of the authors wanted to take the lead on this one? Apologies, we didn't coordinate that beforehand.
H
B
That sounds great. I suspected we were in that state for this draft, but it's good to hear that you're making some progress there, or at least putting the groundwork in to make some progress, so hopefully we'll get it going there. I think we can move on, then, to the final draft on the agenda, which is Origin H3. Let me get that up.
H
So we had one open issue that was editorial. There was a PR for it that sat unmerged for quite a while; I've merged it and I've pushed a new version of the draft, so there should be a -01 now, and there are no other open issues. It's a really simple draft.
B
Martin comments that they have a plan to meet to discuss a plan; that's indicative of this area, pretty much. I think there's probably some bigger discussion to be had here that might tie into the previous work you mentioned, or at least be adjacent to it. So, yeah, I think once you tell us that you're ready, we'll go ahead and start the working group last call and get this one out there. This is really effectively bookkeeping. Yeah.
A
No, we shouldn't spend any more effort on this; there's like zero effort to spend. The ORIGIN frame, as it was defined for H2: it's not just the frame, it's all the other stuff that comes with it. If people want to implement that thing, great; if they don't, and they want something else, we're gonna have to go away and redesign that, and it's not going to be called ORIGIN, it'll be called something else. So yeah: not finished.
A
Holding it off raises weird questions for other people in the community. Like, there's a thread of discussion around trying to contribute ORIGIN support to Go right now, for H2, and they're like, "well, H3 is not done; should we wait?" So I think just do this, and we'll deal with the other stuff, because that'll take a year or whatever.
A
That would be my opinion, and I think the draft's in a good state. I would implement support for this in Wireshark, like, whenever I find the time. I know that's not a real deployment, but...
H
B
Since we have a bit of time, I have one other piece of business I'm going to briefly talk about, or at least get some opinions on: the retrofit draft. We have just these two open issues, and the one that I want to discuss is 2225. Martin asked: you know, right now the retrofit draft has the new Date type defined in the draft, and in discussion I think we've kind of outlined three possibilities.
B
One is just to leave it in the retrofit draft and let people find it there. The other is to split it off into a separate draft, so that it is its own RFC; it has its own number, so it's maybe a little easier to find. And Julian made the third suggestion, which was to do a revision of Structured Fields itself to add the new type.
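For reference, the Date type under discussion is an integer number of seconds since the Unix epoch, prefixed with "@" (e.g. `@1659578233`). A minimal parser sketch, illustrative rather than the spec's algorithm:

```python
from datetime import datetime, timezone

def parse_sf_date(value: str) -> datetime:
    """Parse a Structured Fields Date item such as "@1659578233": an "@"
    followed by an integer count of seconds since the Unix epoch."""
    if not value.startswith("@"):
        raise ValueError("not an sf-date")
    return datetime.fromtimestamp(int(value[1:]), tz=timezone.utc)

when = parse_sf_date("@1659578233")
```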
B
We don't have any open errata on Structured Fields, and I'm not aware of any big changes that need to happen there, so this would probably be the only change. The only thing that comes to mind for me is that, if we reopen Structured Fields so quickly, we could perhaps remove the ABNF from the document, because I think since then we've decided that it might be a little distracting, and we're trying not to use it.
B
I don't have strong feelings on the three ways. I think, left to my own personal devices, I would probably just leave it in retrofit, or maybe just put it in a separate draft; I'm a little wary of revising Structured Fields so soon after it was published. I just want to know if anybody has any thoughts about where this should end up. And I already see Justin has said minus one to removing the ABNF.
C
G
C
I like that idea myself, honestly. If we can teach the IETF to accept revisions on relatively short time scales, that's good for everyone involved. And, to Julian's point, it's much nicer to have all of these things in the one place: we can just go, "this is the spec, you implement this spec," and it has test cases that are associated with the spec. I think that's probably doing the community a better service than trying to patch it.
C
B
And I see Tommy is agreeing with Martin's sense there, as I understand his comment in chat. I guess my personal response is: philosophically, I absolutely agree. I would love to wean the IETF, and indeed the entire community, off of referring to numbers, because it's ridiculous; you should have, you know, one version of the thing. I'm just a little hesitant to make my draft the guinea pig, but hey, what the hell; what's philosophy for? Justin?
D
Yeah, so I agree with the philosophy that it makes sense to put everything all in one place. I'm just going to say out loud what everybody's probably thinking: it feels really weird to do that, because, you know, the paint is barely dry on RFC 8941, and so, like...
D
Apart from that strange feeling, though, I don't see a reason not to, because we wouldn't really be trying to change other pieces of the draft; we'd be extending it and keeping everything in one place.
D
Alternatively, if this were to go in as its own separate extension, we would inevitably have a draft that would collect it all in one place at some point in the future; so we can write it now, or we can write it later. Oh, also: keep the ABNF.
D
C
B
That's the dangerous part, I think. So, taking all that on board, it sounds like we want to put it in a revision of Structured Fields.
B
I think my only comment about that is that that sounds great, as long as we can keep that effort very constrained, so that we don't make it a, you know, six-month, 12-month, 18-month, whatever effort that blocks other work; if other work needs, for example, to refer to the Date stuff, we need to have it tightly scoped. So I guess, for me personally, maybe the next step would be that I could submit a -00 of a draft as a proposal for a revision, and we can do a call for adoption on that.
B
Okay, and yeah, I think it's a great habit to get into. We could even call it a living specification, if we were so bold. Any other business? We have a little time if folks have other things they need to discuss. Otherwise, hopefully we'll see each other, either in person or online, in London.