From YouTube: IETF115-HTTPBIS-20221107-1530
Description
HTTPBIS meeting session at IETF115
2022/11/07 1530
https://datatracker.ietf.org/meeting/115/proceedings/
A
Henry, if you could turn your microphone off, sorry, your video off, unless you're speaking, that'd be appreciated. Let's go ahead and get started. This is HTTP.
A
So, first of all, first and foremost, the Note Well policy. You should all be familiar with this by now, but if not, these are the policies under which we participate in the IETF, regarding things like intellectual property, privacy, harassment, the code of conduct, copyright. There are many different aspects to it. So if you're not familiar with them, please do familiarize yourself with them. It's important.
A
This meeting does have a mask policy; I'm hoping you're aware of that by now too. If you're not speaking or eating or drinking, please do wear a mask and exercise common sense. And yes, I did notice that someone left their power cord here. Hi, hi, Lucas. Well, I didn't want to name names. So please do remember to keep your mask on, and if someone gently reminds you, take it in the spirit in which it was intended. Can we have a scribe for this session? Is anyone willing to take minutes, please?
A
Thank you very much, and that's Jonathan, isn't it? I don't have my glasses on, but I know it's Jonathan. If you could just take notes in the, what do we call them now, I think it's a HedgeDoc. It should be linked from the top of the agenda. Yeah.
A
Oh, thank you, thank you. And if folks could help Jonathan out, that would be much appreciated, so that it's not falling just on his shoulders.
A
Just to keep us on topic there, if no one's having trouble with that, we'll do that. So we'll do cookies, then partitioned cookies, then go back to concerts, and then finally we'll end up with an update from our friends, the enthusiasts, over in the MASQUE working group. They are very enthusiastic.
A
If you can, I don't think you can see that on camera, but there's already enthusiasm. And then on Friday we'll go over the rest, but we have a similar kind of lineup. So, like I said, very packed. Any agenda bashing beyond what we already heard?
A
Okay, let's go ahead and get started then, with the signatures discussion.
C
All right, hi everybody, I'm Justin Richer. This is going to be a short presentation. Next slide, please. An update on HTTP Message Signatures: we've gone through a couple of revisions of editorial updates and added more examples. We added something called the tag parameter, which was very briefly called the context parameter until we got some feedback that that was a dumb name, and so there's the tag parameter in there now. It's optional, don't need to get into it. And we have gone through a one-month-long Working Group Last Call, after which a lot of comments came in about the document. So thank you to those that have done that so far. Right now there are only a couple of small things going in. So first, Lucas needs to apparently get me an update on the digest example in the draft. I think, thank you, I guess. Apparently it's wrong; I'm not sure how it's wrong, but if you could make it not wrong, that'd be awesome, and we can touch base in the hallway or something.
C
If that's easy to do. Next slide, though. There is an open question about what we do with trailers. Can the fields that we've defined, can they be trailers? Currently they're allowed, but do we need to be more specific about what that means when they're trailers? And can we sign trailers, and if we do so, there's a couple of things that we could do. Right now, everything is just sort of defined as fields, and we kind of mash them all into the same namespace, which is what I thought we were allowed to do.
C
But
somebody
pointed
at
a
line
in
HTTP
semantics
and
I'm
getting
some
head
shakes
in
the
room,
so
this
might
be
a
very
short
discussion
next
slide,
please
so
so,
obviously,
if
we
have
a
hypothetical
field
that
can
exist
as
both
a
header
and
a
trailer
do
we
both
call
these
both
food
next
slide,
because
the
naive
thing
to
do
would
be
to
just
cram
them
together.
To
look
like
this,
that's
what
the
spec
says
to
do
right
now,
I'm
getting
a
lot
of
head
shakes.
C
This is why I tagged you guys on the GitHub issue, so I wouldn't have to have these slides, but I'm glad we're here. So, next slide. The alternative proposal is to just explicitly call something out as a trailer as a data source, using a Boolean flag that allows the signer to signal, and the verifier to know, where to source the information, and then otherwise we just treat it like any other field. Next slide. We actually have precedent for this with the request parameter.
A
Let's, let's go to the queue. I'll interject, and then Martin might have something to say. So, in HTTP, in the most recent series of updates, we recognized that trailers had some really fundamental interoperability and deployability issues, and so we defined them as a separate namespace from headers. So they are distinct. You cannot just naively combine them unless the definition of the header field, or sorry, the field, says yes, you may do that.
A
And so we flipped that. So your approach, the approach that you talk about here, using "tr", might be workable. The only thing that I would add to that is you have to realize that it is completely legal to drop trailers on the floor, both by intermediaries and by recipients. They can just disappear, and so you need to account for that if you want it to be robust. Yeah.
C
We definitely need additional implementation considerations for trailers in that case. I think there's already some text in there, but it needs to be more robust, for sure.
C
Right, so that's what I want to ask you two. But if you go back, like, several slides, my final question on trailers is: are they even real? Because I actually found that in some of the libraries I was using, I can't get to trailers.
D
Yeah, this is a common complaint. So they are real. They're probably not a great thing to be signing, for the reasons that Mark stated, but yeah, and it sounds like you're trying to get everything, so yeah, it seems to work.
D
This is a great way to make stuff break if you sign these things, but that's what people want sometimes.
C
Right, exactly. And this is definitely off in corner space somewhere, but we wanted to make sure that it was covered. So anyway, thank you. I will put in a PR to add the "tr" flag, with the definition for that, and toss that around. But yeah.
C
Apart from tweaking a couple of examples here and there, the Working Group Last Call feedback so far has been pretty positive, and I've also been discovering a handful more implementations out there in the wild. Which, next slide, I'm going to be talking about a little bit at SAAG on Friday morning this week, where I'm basically going to be presenting what the draft is and how it works, very, very briefly, to the wider security community, and then tell them to come at it with pitchforks and pickaxes and whatever else they can get on hand, and basically help us figure out:
C
Like, did we leave security holes in here that we don't know about? Are there weird oracle attacks or gotchas or other things that are hiding in this space? So it's going to be an exciting time after that, I'm sure. But, oh right, the second bullet, which I'm pointing at over here now.
C
The second bullet is that if you have an implementation, or you know of an implementation, of the HTTP Working Group draft, not the Cavage draft, I'm going to be putting up a tab on httpsig.org, our sort of demo site playground space, to start listing these implementations, because I've had a lot of people coming at me lately asking for that list. So I'm gonna put it up and then just have a link to, like, make a GitHub pull request.
C
If you want to add your own, but, you know, get in touch with me. I'm seeing more and more out there, and I think that's all I had. Great.
A
Thank you. And if folks are able to attend SAAG, it conflicts with us on Friday, but if you happen to be there, that's a good conversation to watch, because there's an open question about what the appropriate level of review is for this kind of really pivotal specification. Great, and if you could do us the favor of maybe reporting back on the email list or something, that'd be great. Next up, Alternative Services. Mike?
E
So we have all this nice, complicated infrastructure for how we handle the user's request and where it flows through. Okay.
E
All
right
but
slide
what
we're
trying
to
handle
is
when
the
user
accidentally
winds
up
over
there,
where
we'd
really
like
to
get
them
on
the
red
path
that
we
had
planned
for
them,
and
we
want
to
redirect
them
a
little
bit
so
slide.
E
There's
also
for
those
cdns
that
use
anycast
anycast
can
wind
up
in
the
wrong
spot.
It
happens,
there's
a
lot
of
work
that
goes
on
to
make
sure
that
doesn't
happen,
but
it
still
does,
and
in
some
cases
we
have
endpoints
that
can
offer
you
better
service
based
on
where
you
are
or
who
you
are,
that
we
didn't
know
when
you
did
the
DNS
resolution.
So
we
might
like
to
be
able
to
point
to
to
some
endpoint
that
we
control
that
special.
E
That would give you faster service. And the most common use of Alt-Svc right now is for protocol availability. So you spoke to us over H2, we'd like to tell you that we also have our H3 endpoint; or you spoke to us over H1 and we'd like to tell you that H2 is on a different port, hypothetically, although that mostly doesn't happen in the real world.
F
I wanted to mention that, apparently, the chat doesn't work. Or, no, it actually works. So I had actually questions for Justin early on, but the messages didn't come through.
D
So the news is Zulip is down for most things. So all the chat is busted, and so Julian's got some sort of low-priority questions that he can't get answered.
E
All right. So we at this point have two main ways to redirect the user. We can either do it before the request happens, using DNS, or we can do it after the request happens, using Alt-Svc, and those are both great options. Slide. Trouble comes when we have to combine them. Right now, what the SVCB spec says is that if you implement Alt-Svc and HTTPS records, you take all the host names you found in Alt-Svc...
E
Oh, okay. So the trouble that we have, other than just using them together, is what do we do when we get both sets of information? Because the Alt-Svc information is potentially old. So how do we make sure it's still valid? But it was received over a TLS connection directly from the origin, whereas DNS comes through other servers along the way, possibly unencrypted.
E
How
much
do
we
trust
when
they
don't
say
exactly
the
same
thing,
so
we
want
a
way
that
lets
us
make
sure
things
are
fresh
before
we
use
it.
One
of
the
concerns
that
we've
heard
is
an
ALT
service
that
says
you
can
come
talk
to
me
over
H3,
but
when
you
resolve
the
hostname,
you
get
pointed
to
a
different
CDN
that
doesn't
support
H3
and
your
connect
fails
in
your
timeout,
which
is
not
great
big
slide.
E
So
the
draft
that's
been
submitted
just
before
the
deadline
was
to
replace
it
with
very
close
to
the
straw
man
that
we
had
talked
about
last
iatf,
which
is
this
alt
service
B,
which
provides
you
a
hostname
that
you
should
go
resolve
slide
for
semantics.
E
Just do an HTTPS lookup for the hostname, do all the SVCB-required connection stuff that's in the HTTPS spec, and use that connection instead of this one. So we're trying to move you over to a different endpoint now, and then, in the future, remember what endpoint you wound up on and give that some preference when you do future HTTPS resolutions connecting to this origin. If you don't see that endpoint, oh well, you just go on with what DNS says and you forget it.
E
And this does leave us in a little bit of a situation around stickiness, which is that if you're not remembering, or if you don't have that endpoint in the DNS all the time, then you're going to go to the origin, get redirected with Alt-Svc, and the next time around you go back to the origin and it'll tell you to go over to the alternative again. So you might wind up flip-flopping, unless you put all of your possible endpoints in the DNS.
E
But if you remember it too long, then you override the ability of a site to do multi-CDN or otherwise change where they're pointing you. So this is a trade-off that we're going to have to make. Now, we had a long conversation about this at the HTTP Workshop. Slide. And over the course of that long conversation, we basically redesigned the proposal and then wound up right back at the proposal that was submitted, because everything else we tried didn't really work out, or had some conceptual difficulty that made us drop it.
E
So I think at this point we're just going to open up for discussion on the draft that was submitted, the current alt-svc-bis, and how we want to move forward with this.
G
Hi, Ben Schwartz. This all sounds a bit like a good improvement to me; I'm for it. I don't understand the stickiness. Where did you end up on stickiness?
E
Stickiness. So the issue about stickiness is: how does the client verify that the information it has cached is still valid, for an Alt-Svc entry that potentially has a very long lifetime? So you have conflicting goals. We would like to have a long lifetime on the Alt-Svc advertisement, because we want you to remember what protocol to use, especially at the different endpoint, when you come back on a future connection. But, on the other hand, load balancing happens at a much shorter time scale.
G
Okay, yeah. I think it's perfectly reasonable to say that the Alt-SvcB is a sticky indicator of which of the alt hosts, or which of the alt RRs, is preferred, and that it applies after DNS resolution.
G
So, you know, a week from now I want to connect. I re-resolve the name, I get an RR set, and then I check it against my sticky host name, and if the sticky host name is in that RR set, then the hint applies; otherwise it's discarded. I think that gets you what you want.
G
Yeah, one thing that I wonder about here is: should we just make this stickiness thing a general property of the HTTPS record, of service bindings? Like, you know, you tried various service bindings in the past and you found that one of them was the one that actually worked. Do you really have to try them in priority order next time, since you know that the other ones didn't work and this one did, or that your selection algorithm selected this one last time?
G
Right,
it
just
seems
like
you
could
you
could
effectively
just
say
this
is
just
a
special
case
of
that
yeah.
Okay,
that's
enough.
D
Yeah, so I wanted to make clear here that the proposal from the design team that's been working on this whole Alt-Svc thing is effectively to take this, or something approximating this, as the complete replacement for Alt-Svc. So the draft proposes that we, I think I've got to use the right words now, obsolete.
D
Not
deprecate
is
that
right
we
obsolete
78
38,
and
that
seems
to
be
the
general
sense
that
that
I've
gotten
in
discussions
is.
We
want
to
effectively
say
that
all
services
no
longer
useful.
D
There are some suggestions on how we might improve it. There are a number of open questions that we have; Mike probably didn't touch on all of them. The more interesting one for me is: HTTPS records have a priority order. That gives you some control over where people end up, and you can always put the alternatives that you don't want people to use as the primary entry point to your service down the priority list in the HTTPS RRsets that you return. But maybe that's not definitive enough, so we also defined an attribute on each one of them that says: don't use this unless it's an alternative that you've been told to use explicitly. Those two mechanisms both provide you with some amount of control over the use of an alternative. We're not sure if we need those things.
D
Post it to the mailing list, and if they don't, I will, because I think it's an interesting idea. But we're sort of trying to get a sense of the general shape of this thing, to see if people are interested, you know: does this solve your problems? We've heard from a number of people that simply deprecating Alt-Svc and relying more on HTTPS records solves their problems adequately, and they don't necessarily need this thing.
D
We've
heard
from
others
that
they
like
the
idea
of
being
able
to
steer
traffic
using
using
the
the
old
service
records
the
old
service
advertisement
as
we've
defined.
So
that's
the
sort
of
feedback
we're
looking
for
here
so.
I
Eric Kinnear, Apple. So I just wanted to reiterate a little bit. You were saying, is this useful for people? And so I thought I could say: heck yes, it is. We strongly prefer getting this kind of information from DNS. It's much nicer, because it's for the place that you're actually going, and it's something, most importantly, that you know up front at the time that you're going. So rather than saying "for next time", you can say "for this time", and that has been shown, I think, via some data that was presented at the previous IETF, to make a significant and measurable difference in the amount of H3 that we actually use.
I
So that much is awesome. Being able to take HTTPS records and say these are lower priority, and to have a separate signal for "do not use this unless the house is on fire": both of those signals, I think we find, are super useful, and we have a number of people where we are asking them to do that, and they say, we do not have a mechanism, so we can't do that. So if this gives them that mechanism, then, like, both of those unique signals are awesome.
E
It probably is important to note that, at least at 114, we did have some people saying they didn't want the ability to indicate H3 with a header to go away entirely. So that may just be solved by: okay, you can use the obsoleted thing if you really have to. But it's important for us to remember that some people do still need that affordance for a little while.
K
Yeah, David Schinazi. That's pretty much exactly what I was going to say, and just to explain: Chrome and other web browsers today don't always have access to the HTTPS record, because, like, Apple platforms are awesome and they have good APIs, but if you look at POSIX, the cross-platform getaddrinfo gives you pretty much what existed at the time.
K
So
realistically
speaking
in
some
platforms,
where
Chrome
has
its
own
DNS,
resolver,
everything's,
fine
I
know
there's
some
people
at
ITF
weeping
that
the
applications
are
all
bundling
their
DNS
resolvers
now,
but
for
cases
where
we
don't
we'll
still
need
to
use
the
old
thing.
I,
don't
love
that
we're
obsoleting
the
old
thing,
because
it's
still
useful,
but
you
know
at
the
end
of
the
day
it
doesn't
really
matter
as
long
as
we're
on
the
same
page
that
we
might
still
make
changes
to
Old
service.
B
All right, this is a question for David, if he's still up. I think there's some comments on the list around, you know, when are we going to update POSIX getaddrinfo? You know, I don't think that function's signature is going to change at any point soon, just to confirm. I imagine people would probably be okay with a separate function in some standard.
K
Yeah, well, David Schinazi, Happy Eyeballs enthusiast. Yeah, in practice it's a matter of time, at the end of the day, and resources. Chrome today has, like, all the energy. We don't have a huge networking team as we used to, and so the energy goes into the client DNS. Like, I don't work on Chrome anymore; the energy is going to, like, the DNS resolver that is bundled with Chrome, and the like.
K
The cross-platform fallback uses getaddrinfo, and that's pretty much not going to see any love, probably, unless it becomes, like, a pressing issue. So, given infinite time, we totally will do it; it makes sense, it's possible. But in practical terms there's always another shiny API, web API, to do instead. But just to clarify, I think the goal is to ship Chrome's own resolver on all platforms, and that kind of makes this go away.
L
Lucas. Sorry, I didn't use the tool; my phone ran out, no recovery from that. So, as somebody responsible for advertising H3 and wanting it to work, and being involved in the design team, I support the shape of the solution that we've got. As Martin said, the suggestion on GitHub the other day is kind of interesting; I'd be willing to see where that goes, or doesn't.
L
But I really like this, because I don't have to do anything. Like, I would support this, because the clients need to do some things and change stuff, and we already do HTTPS records, and it would all kind of work very straightforwardly. But I'm not in a humongous rush to remove the Alt-Svc header tomorrow, or whatever. Like, we can leave it there, and it has a place for me.
L
The
important
signal
here
is
is
for
people
who
who
are
suffering
now
with
the
the
problems
of
the
old
service
header
they're,
asking
us
to
try
and
find
Solutions
around
it
and
and
having
the
ITF
give
them
a
clear
signal
that
this
is
not
the
power
to
be
explored
for
future
Solutions.
But
it's
not
a
dead
end
either
in
the
sense
that
it's
going
to
be
turned
off
and
we're
never
going
to
speak
anything
and
you're
just
going
to
be
left
in
limbo,
with
no
way
to
use
H3.
That's
not
what
we
want.
D
Yeah, so, Martin Thomson. I find it funny sometimes, too, when we're talking about lack of engineering resources at Chrome, when it's often the other way around. But I think that's the transient problem. I think probably the more pressing one is the availability of HTTPS records and how able you are to make a query for them. We've done some research, that hopefully will be published relatively soon, that talks about the success rates for different types of DNS records, and off the top of my head...
D
It was more like 45 for the DNSSEC-related records, for those who are curious about those sorts of things. So that, to me, speaks to having some amount of HTTPS, sorry, Alt-Svc, legacy-style, around. It may be that we want to encourage people to start shortening their max age on that side, to something closer to this sort of behavior from the DNS, so that we don't run into the sort of two-caches problem that was a major issue with Alt-Svc in the first place.
D
That said, I think this really hinges on, like, how much HTTP/3 you're willing to sacrifice to the altar of just making progress. I think probably, at the moment, for those people who are, you know, maybe strapped for engineering resources, this is possibly, you know, too much HTTP/3 to leave on the table.
A
Any more slides? That's okay, all right. So it sounds like we've made some pretty good progress. We still need to do a little more work and a little more socialization, maybe. So I think we'll continue to leave the alt-svc-bis document parked, yeah, because...
A
So, next up, Origin H3, which I believe is this one? Yes. Once again...
E
Okay, so this is the ORIGIN frame that we had in H2, recast in H3.
E
So, structural changes: stream zero is a request stream in H3, and we have control streams instead, so we have to map those terms. And then Lucas filed an issue that we hadn't caught previously: that H2 has flags.
E
Which, we don't use them, but we might in the future. So there are four flags that are mandatory to understand: if the flag is set and you don't know what it means, ignore the frame. The other flags are not mandatory to understand: if they're set and you don't know what they mean, shrug and move on, keep using the frame as you understand it. So if we don't have those in H3, we would have no way to set them if we ever did define them in the future. Next slide.
E
None of these options seem fantastic; all of them will work. Opinions? And then let's move on. Okay, Martin.
D
Yeah, so we have mandatory flags. Turns out mandatory flags are just called frame types, so we don't have that problem. If you want to define a flag that's mandatory to understand, pick another frame type and off you go; so, problem solved there. The optional ones are optional, in which case we can solve that problem later, when someone wants to define one of those flags, and we might do it in any number of ways: by defining new frame types, or by defining this stupid hack thing with the bit at the end that you cut off.
K
David Schinazi. Similar to MT, I think, yeah, we have a fourth option, which is: kick the can down the road. We're not painting ourselves into a corner, because if we need it, the correct thing is the frame types. We have one extension joint that we should grease; we don't need 17 of them. So just say we don't use them, done. Okay, and, sure, maybe we won't even bother deprecating them from H2. Like, just don't use them, don't, don't accommodate them.

L
Lucas Pardue. Yeah, so I created the issue, and I didn't think of any solutions for it. Sorry, I forgot I'd opened it, but it was in a fit of late-night spec reviewing. Yeah, I don't like any of those options. I don't think we should change H2 at all, so I agree with David there. Adding bytes and stuff just seems weird, I don't...
L
This
is
a
lot
of
the
time
with,
with
H2
and
H3
we're
trying
to
mitigate
kind
of
proxies
that
might
pass
through
frames,
even
though
you're
not
passing
frames
through
you're
reconstituting
them
from
their
parts.
Kind
of
thing.
Origin
is
a
bit
of
an
odd
one.
I,
don't
think.
There's
many
anyone
kind
of
passing
origin
across
multiple
proxies
like
that,
like
I,
don't
think
we
know
enough
about
the
problem
to
be
able
to
design
anything.
L
That
would
make
sense
and
not
complicate
stuff,
no
good
use.
So
if
we
want
to
Define
four
frame
types
that
just
basically
all
know
up,
so
it's
just
this
is
these
are
all
the
types
for
origin
in
H3
and
they're
all
the
same?
But
since
fine,
it's
not
it's
no
harder
to
write
that
code
than
for
one
type.
L
I
I
don't
know,
I
have
no
strong
opinion.
I
I
had
a
weird
idea
a
few
years
ago
about
maybe
a
use
for
one
of
the
flags
in
the
H2
frame.
So
maybe,
but
without
a
concrete
use
case
like
printing
on
it,
would
be
fine.
We're
not
We're,
not
gonna,
we're
not
short
of
space
for
frame
types.
A
I'll
just
give
my
own
feedback
to
me
that
the
the
clearly
worst
option
here
is
the
last
one.
For
the
reason,
for
the
reason
you
point
out,
it
is
not
worth
doing
that
to
save
one
bite.
Okay,
but
it
sounds
like
we
might
be
converging
on
kicking
it
down
the
road,
giving
the
problem
to
a
future
us
and
closing
the
issue
with
no
action.
I
think
is
the
outcome
which.
D
I've done a home for so long. You...
E
I will point out that, in terms of the actual structure of the frame, reserving multiple frame types to account for all the flags is exactly the same as sticking one extra byte in. In fact, it's slightly worse, because of the varint and the two bits for that.
E
If that's all you've got, I will go close the issue. So, the remaining question, which is implied by the picture: are we at the finish line? Working Group Last Call?
M
Hello all, my name is Steven Bingler, Google Chrome, and I'll be going over a status update for 6265bis. For anybody who was present at the last interim meeting, these slides are going to seem very familiar. Next slide, please. So there hasn't been a new draft since... actually, that's not true as of about five minutes ago, when we just pushed a new draft, but I'm going to talk as if we didn't do that. There hasn't been a new draft since IETF 114.
M
So
all
these
changes
are
since
that
previous
draft,
during
previous
meetings,
I've
stood
here
and
read
through
all
those
bullet
points
and
that
felt
stilted
and
weird,
so
instead
I'm
going
to
leave
them
on
the
screen
for
a
few
moments,
and
if
anybody
has
any
questions,
I
can
go
into
more
detail
about
them
later
next
slide.
M
These are the __Secure- and __Host- cookie prefixes. But it turns out that servers don't always check cookie prefixes case-sensitively, because of course they don't. So servers were setting prefixes without realizing that they weren't getting any of the guarantees from browsers, and that's bad. So we fixed that. Next slide.
M
We have three open issues currently. Same-site cookies and redirects: quick recap on this. There was a bug in the original spec where SameSite did not take the redirect chain into account. So if you redirected from site A through site B and back to site A, we would consider that same-site and happily send you your same-site cookies. That's wrong. The spec was fixed, but when Chrome tried to launch, or enable, this feature, we got a ton of complaints and there was a lot of site breakage.
M
So
we
disabled
it
and
currently
trying
to
figure
out
next
steps,
we're
collecting
metrics
on
how
sites
are
using
this
Behavior
to
try
to
get
a
better
idea
of
what
to
do
with
it,
but
with
the
US
holiday
season,
approaching
I'm
expecting
I'm
not
going
to
get
anything
useful
until
until
q1,
the
other
two
open
issues,
internal
white
space
and
cookie
names
and
values.
This
was
recently
filed
turns
out
that
the
three
major
browsers
Chrome,
Firefox
and
Safari
all
handle
internal
tabs
somewhat
differently,
internal
tab.
M
Being
that
you
have
some
other
non-tab
characters,
a
tab,
more
more
non-tab
characters.
The
spec
was
modified
somewhat
recently
to
disallow
control
characters,
but
it
accepted
tab
characters.
M
What this means is that now the spec says you should accept those internal tabs, but not all browsers do, so I'm trying to figure out if that change to the spec should have happened and whether we should revert it. The final one is a mouthful: the spec should more clearly advise which parts a reader should implement. I filed this one because we've had a number of issues with implementers implementing the wrong requirements, because they are confused about what they should do, and I can't exactly blame them.
M
So we already have some work planned post-RFC-6265bis. The first is cookies having independent partitioned state; Dylan will be up here in a few minutes to talk in more detail about that. The second one is cookie spec layering. Originally Johann Hofmann was going to speak on that; unfortunately, he wasn't able to make it, so I'm going to say a few words about it.
M
Yep
I'm,
hitting
things
on
my
computer
here
so
cookie
layering
is
an
effort
being
headed
by
Johann,
Hoffman
and
Anna
van
kestron.
The
idea
is
that
the
cookie
spec
has
kind
of
intermingled
itself,
with
the
browser
specs
with
same
site
and
partition
cookies
and
blocking,
and
this
is
an
effort
to
sort
of
decouple
the
cookie
spec
from
things
that
could
be
better
handled
by
say
the
fetch
spec.
M
Let's
see
this
was
brought
up
during
TPAC
as
like
an
initial
idea
and
request
for
feedback
and
seems
somewhat
positive,
but
there's
a
lot
of
work
ahead
for
it.
That
is
all
that
I
have
do.
We
have
any
questions.
A
So
I
I
think
the
idea
here
is
you
want
to
close
those
three
issues.
Go
through
working
group
last
call
go
to
the
ITF
isgq
publication
process
and
then
we'll
we'll
talk
about
starting
almost
an
immediate
revision
again
to
address
your
your
deferred
issues
and,
and
these
work
items
is
that
kind
of
yes,.
A
And the cookie spec layering, there's been some background discussion about that. I think we've for a long time talked about how to make the cookie spec more accessible, more user-friendly. I think we want to try and involve non-browser communities as well, to see where the right line is to draw about making that separation. But that's a discussion we can have; I think that's very reasonable. David Schinazi.
K
Hi. I'm failing to find the issue, but I remember there being some discussion on the topic of UTF-8 characters, and allowed characters in general: how the spec allows a different set for setting cookies than for sending them. I think April King found some issues there that were somewhat worrisome from a security perspective. Where did that discussion go? I just couldn't find it, sorry.
M
Well, I spoke with April, just, you know, face to face, and we decided that at the moment... okay, so, context: April has found a number of issues where browsers will send cookies that servers then won't accept. This is a problem primarily because nowadays a server is not a single entity.
M
After talking with April, we decided that simply changing the spec to say "hey, servers should accept this expanded character set" was not the correct route.
M
I know there's a deferred issue for the next version of the spec, where we are going to more formally research allowing expanded character sets and what effect that's going to have. But at the moment the plan is to keep the status quo.
K
Maybe, and I don't want to volunteer you for work, but if we could have a paragraph explaining that there is a footgun in the spec, that might be useful. Just as a warning note that this is different from that, and "here be dragons"; that might be good.
M
Yeah, that's exactly what that final long-winded issue that I filed is, which is: make it easier. Yes, thank you for highlighting it; I think that's exactly what that's supposed to be.
N
My name is Dylan Cutler. I am also on Google Chrome, and I will be discussing partitioned cookies with you today. Slide. So, just a quick overview.
N
So, when we say a cookie is partitioned as opposed to unpartitioned, what we mean is: when an unpartitioned cookie is set in a third-party context, it is available on essentially any top-level domain that makes requests to the domain that set the cookie.
N
By partitioned cookies, we mean that these third-party cookies would only be available on the top-level site on which they were created, and then, if the user were to navigate to a different top-level site, the third-party domain would receive a brand-new cookie jar. Also, as a forewarning: because it's partitioned cookies, this talk is going to be a little more browser-heavy than some other talks, so if that's not your thing, feel free to tune me out. Slide.
N
So the Partitioned attribute is a proposal for a new cookie attribute which would allow sites to opt into this behavior. To go over the design really quickly: it would require Secure; domains would be allowed up to 10 kilobytes or 180 cookies per partition; and we determine how much memory a domain is using per partition by the size of the names and values of its cookies.
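As a rough sketch of the opt-in design just described (illustrative only, not Chrome's implementation; the Secure requirement and the 10 KB / 180-cookie limits are the figures quoted in the talk):

```python
# Illustrative sketch of accepting a Set-Cookie line that opts into
# partitioning via the proposed Partitioned attribute. Assumption:
# attribute names are matched case-insensitively.

MAX_BYTES_PER_PARTITION = 10 * 1024   # figure quoted in the talk
MAX_COOKIES_PER_PARTITION = 180       # figure quoted in the talk

def parse_set_cookie(line):
    """Split a Set-Cookie value into (name, value, attribute map)."""
    first, *rest = [p.strip() for p in line.split(";")]
    name, _, value = first.partition("=")
    attrs = {}
    for part in rest:
        k, _, v = part.partition("=")
        attrs[k.strip().lower()] = v.strip()
    return name, value, attrs

def accepts_partitioned(line):
    """Would this cookie be accepted as a partitioned cookie?"""
    name, value, attrs = parse_set_cookie(line)
    if "partitioned" not in attrs:
        return False          # not opted in
    if "secure" not in attrs:
        return False          # partitioning requires Secure
    # The per-partition memory budget is charged by name + value size.
    return len(name) + len(value) <= MAX_BYTES_PER_PARTITION
```

A real cookie jar would additionally enforce the 180-cookie cap per (domain, partition) pair before storing.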
N
Another detail is that Clear-Site-Data would also only clear cookies in the current partition it was called in, and this would be to prevent cross-domain entropy leaks. Essentially, there's an attack you could set up where you intentionally call Clear-Site-Data on different top-level domains in order to build enough cross-site entropy to develop a persistent cross-site identifier. Slide.
N
So today I'm just going to be going over some of the open issues we have for partitioned cookies. The first one is: how do we deal with partitioned cookies in what we call, quote-unquote, "unpartitioned contexts"?
N
Should there be a way for user agents to convey that they are sending the request from a context in which only partitioned cookies are allowed, if we're in a future world where unpartitioned third-party cookies are obsoleted? Next slide.
N
In this case, it would mean a first-party context: a request where the top-level domain and the domain getting or setting the cookie are the same. Or it would be contexts which have received a privilege, essentially through user consent, through something like the Storage Access API, for example. One nuance we like to point out is that Chrome's and other browsers', particularly Firefox's, implementations of unpartitioned contexts differ.
N
This is kind of where the issue arises. Chrome supports both unpartitioned and partitioned cookies at the same time and uses a null partition key for the unpartitioned ones, whereas in other browsers, as soon as the Storage Access API is granted, the partition key for that context essentially switches to that cookie's domain, as if it were originating from the cookie's site.
N
And so the question is: how do we handle cookies that are set with a Partitioned attribute in these contexts? Do we set the partition key to be the current top-level site, which is what we are proposing as the right answer, or do we just use whatever the current partition key is?
N
We think we should just set it to be whatever the current top-level site is, even in these more privileged contexts, because the site including the Partitioned attribute is opting into this behavior explicitly, and there are also ways to use cookies in these contexts without using the Partitioned attribute.
N
Next slide. Oh, and one more note about that asterisked point: when I say Chrome supports partitioned and unpartitioned cookies at the same time, I don't mean that Chrome will continue supporting unpartitioned third-party cookies into the future, past its third-party cookie deprecation timeline. I just want to clear the air there in case I accidentally start any fires. And so, moving on from there.
N
The next thing we want to talk about is whether the partition key should have what's called a cross-site ancestor bit, and I think the best way to explain this is visually. So let's say a server is setting a partitioned cookie in a first-party context, where the request domain and the top-level domain are the same.
N
In this case, the cross-site ancestor bit would be false, because the request is originating directly from the top-level frame. Slide, please.
N
But in this next scenario, we see that although the request is coming from the same site as the top-level domain, there is a third-party, or cross-site, ancestor between the top-level site and the frame or context making this request. Next slide, please.
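A minimal sketch of what the two scenarios above compute, assuming a simplified notion of "site" (real browsers derive sites from the scheme plus the registrable domain; the helper below just takes the last two host labels for illustration):

```python
# Sketch: deriving the cross-site ancestor bit from a frame's ancestor
# chain. ancestors is ordered top-level frame first, current frame last.

def site(origin):
    # Hypothetical simplification: the last two host labels stand in
    # for the registrable domain (eTLD+1).
    host = origin.split("://")[-1]
    return ".".join(host.split(".")[-2:])

def cross_site_ancestor_bit(ancestors):
    """True if any frame between the top level and here is cross-site."""
    top = site(ancestors[0])
    return any(site(a) != top for a in ancestors[1:])

def partition_key(ancestors):
    # The question under discussion: key on (top-level site,
    # cross-site ancestor bit) instead of just the top-level site.
    return (site(ancestors[0]), cross_site_ancestor_bit(ancestors))
```

For a request made directly from https://site.example the bit is false; for site.example embedded under https://site.example via an https://other.example frame it is true, so the two contexts would land in different partitions.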
N
It's fine. So, this cross-site ancestor chain bit was actually originally introduced in the W3C as part of the storage partitioning effort, where browsers are partitioning JavaScript storage the same way we're partitioning cookies, so that it isn't accessible to sites across different top-level domains. The reasoning behind that was primarily to correctly compute the site for cookies in partitioned service workers.
N
Effectively, by adding this bit, it would separate the partition that top-level contexts get from contexts with a cross-site ancestor. On the question of whether we want to add this to the cookie partition key, there's a pro, which is that there would be consistent partition boundaries across cookies and storage, which is nice for developers.
N
But there are some cons. For example, this would essentially be a re-implementation of SameSite, because developers can already restrict which cookies are accessible in and out of these contexts by setting a cookie's SameSite to Lax or Strict. And then another con is that there are cookie use cases...
N
The next issue we wanted to go over is how to handle partitioned and unpartitioned cookies with the same name. It turns out there's actually already precedent for this: domains can set cookies with the same name as long as they differ in either their Domain or Path attributes, and so we can do the same in order to account for partitioned cookies.
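The precedent just mentioned can be sketched as a cookie store keyed by (name, domain, path); extending that key with the partition key (my reading of how the proposal would slot in, mirroring the Domain/Path precedent) lets a partitioned and an unpartitioned cookie share a name:

```python
# Sketch: cookies already coexist under the same name when their
# Domain or Path differs; adding the partition key to the uniqueness
# tuple does the same for partitioned vs. unpartitioned cookies.

class CookieJar:
    def __init__(self):
        self._store = {}

    def set(self, name, value, domain, path="/", partition_key=None):
        # partition_key=None models an unpartitioned cookie.
        self._store[(name, domain, path, partition_key)] = value

    def get(self, name, domain, path="/", partition_key=None):
        return self._store.get((name, domain, path, partition_key))

    def __len__(self):
        return len(self._store)
```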
N
And then the last issue: how can user agents convey whether a request comes from a context in which the browser would only accept partitioned cookies? We think this shouldn't really be a blocker for speccing partitioned cookies; it's kind of a long-term question and has implications beyond just partitioned cookies, so our solution for now is just to punt this down the line. Next slide.
N
Cookies Having Independent Partitioned State is a good example of how the IETF might actually benefit from cookie layering, because if cookie layering happens and we move some of these more browser-specific things into the Fetch spec, then really all the IETF needs to be concerned with is how user agents parse the Partitioned token in the cookie line, and then it can let the Fetch spec handle the rest of the partitioning behavior. And that's it; if anyone has any questions, feel free.
A
So I think, from a process standpoint: when we talked about adding major new features for 6265bis, we had a relatively rigorous process for getting consensus, around "here's a proposal with a draft; does the community want to add that to the 6265bis scope?" This is probably too late for 6265bis, and I don't think that's a surprise to you, which is good. The question is, when we do this next revision that we were just talking about, are we going to use a similar process?
A
And if so, are folks interested in taking this on as a piece of work? As before, I think in the original 6265 as well as this one, we kept the focus very firmly on implementer intent, and so, you know, folks who have cookie jars are not the only people who matter, but they do really matter to this discussion. So that's who, at least, we should hear from. Martin.
D
Yeah. So, not wearing my Mozilla hat, but as the Privacy CG chair at the W3C: that group has pretty broad support for this work and would very much like to see the work proceed, and I think we also have agreement that the IETF is the right place to do that, because the IETF owns the cookie spec and it doesn't make any sense to put it anywhere else.
I think we're willing to be guided by everyone else in terms of timelines and whatnot, and this is probably something where I would advocate for some sort of signal that it's working in the IETF process. So I would prefer if this were adopted here, perhaps parked on the side, so that we can continue to work on it and refine it and answer some of the interesting questions Dylan's asking here, but not block other important work. That would be my preference.
O
This is maybe sort of a naive question, but can you explain a little bit more the rationale for providing this at all, versus the browser just making its own decisions about how it will partition or not partition? It's always felt to me like it's a question of how the client itself, the browser or whatever, behaves, rather than something to put onto the wire as part of the cookie spec. It's hard to articulate it more than that.
N
You're referring to the attribute and the opt-in behavior, you're saying?
O
Yeah, okay. At least it's my understanding that Firefox, Mozilla, basically partitions cookies, or has the opportunity to partition cookies in their client, and they just did it: they partitioned them by top-level domain, lower-level domain. It just sort of happens. So bringing this into a spec, trying to give servers the opt-in behavior: I've never quite understood the rationale or the motivation behind that, and it almost feels simpler to just allow it to continue to be a behavior that the client decides on.
N
At least Chrome's philosophy on this is that we want to encourage this to be an opt-in behavior, at least in the time between now and when unpartitioned third-party cookies are turned down. The reason is that a lot of servers rely on third-party cookie functionality for various things. Some of that is cross-site tracking, which we're not okay with, but some of it is use cases that we are okay with in this sort of transitory period where we move off of third-party cookies.
N
We think that providing this attribute and this opt-in behavior gives developers an opportunity to migrate their systems over to this partitioned world, you know, before we just completely take the rug out from under them and remove unpartitioned third-party cookies. So it's kind of a web-compat reasoning behind it, I guess.
O
Again, I apologize for not fully understanding, but in terms of compatibility, it feels like there are cases that are going to break then, if you're expecting third-party cookies that would otherwise work in a partitioned context and you're not opting into them. So, say, a server that does whatever it does in a frame or something and continues to set cookies and get cookies as it would.
O
If you partition those, it doesn't know the difference in terms of how it's behaving; but if suddenly it's changed such that you have to opt in to partitioned cookies, then that would, I believe, stop working unless you make the migration. So it feels like that area is not compat, at least not in the way that I think about it; it would be a breaking change and require software updates to continue working.
N
I think at some point there is a breaking change that needs to be made: either it's removing third-party cookies entirely, or it's just partitioning them by default. So at some point we're going to be breaking sites; I think it's just a matter of difference on how we want that breakage to occur.
N
You know, the server getting the lack of a cookie back, once we do turn down third-party cookies, is a signal that it's not working as intended. Versus having the user agent just change the behavior of the cookie from underneath the site, without really giving it any indication that that's what's going on, which is kind of just as bad, in our opinion.
O
It seems unnecessary and sort of backwards, but maybe I'm off in the weeds here. It kind of feels that way.
N
Yes. If you have any additional questions, you can come up to me after this, or I can also send you a link to the explainer, where we talk about things in a lot more detail. I encourage you to do that. Okay, thanks.
D
Martin Thomson. This has been debated at length in other forums, and the consensus view, I think, was that blocking cookies in these third-party contexts was the desirable outcome. It was not necessarily unanimous that that was the outcome; there were a number of reasons, the ones that Dylan articulated, for doing it this way.
D
I also believe that certain other platforms had a little bit of trouble when they tried to implement partitioning properly; those were limitations around devices not being particularly performant when partitioning was in place, and those were also rooted in the architecture of those systems, which I think was a little unfortunate.
D
But ultimately, our experience with partitioning is that it mostly, almost completely, works, so you could do without this. But we were sort of in the minority when it came to the discussions there, and we want to respect the consensus process, ultimately.
A
Okay. Well, you know, we haven't formally adopted anything yet; it's a continuing discussion, but it's a good start, I think. Anything else? Thank you. Thank you very much, and we're on time, too.
O
Sure. I've only got five minutes here, so I'll keep this pretty short. Go to the next slide; I know it takes a minute, Martin. You'll want to queue yourself, unless you already have comments. So, Mark encouraged us to just focus on issues, questions, etc., and not give any context or anything. That's hard for me, but I'll take a shot at it in the interest of saving time here.
O
So, back in October we published -03. Really relatively minor changes: stating that the certificate chain is presented in the same order as it would be in TLS, rather than trying to copy a bunch of language from TLS that is difficult to get right and sort of problematic; a bunch of reference updates to things that are now RFCs; making HTTP Semantics a normative reference (some of the normative/informative stuff is a little tricky to get right here, but that one made sense); and mentioning that the origin server's access control decisions need to be conveyed at the HTTP application layer, either by selecting specific response content or sending a 403 or something, while being pretty clear that we're not trying to invent any sort of cross-layer signaling about error conditions or access control decisions. October 30th we started working group last call. Next slide.
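For readers who haven't seen the draft: the field carries the client certificate as a Structured Fields byte sequence (base64 of the DER certificate between colons). A TLS-terminating reverse proxy could populate it roughly like this; this is a sketch of the encoding shape only, not normative text from the draft:

```python
import base64

def client_cert_field(der_cert: bytes) -> str:
    """Encode an end-entity certificate (DER bytes) as a Structured
    Fields byte sequence, the shape the Client-Cert field uses."""
    return ":" + base64.b64encode(der_cert).decode("ascii") + ":"

def client_cert_chain_field(der_chain) -> str:
    # The chain field is a list of byte sequences, in the same order
    # the certificates were presented in TLS.
    return ", ".join(client_cert_field(c) for c in der_chain)
```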
O
I did this sort of last-minute, because I did have a slide that said there were no open issues. Lucas Pardue did a review and put a number of things up here. Thanks, Lucas. There are a number of issues that are open; I don't think any of them are very controversial.
O
We have a little bit of a discussion going about text, and what it might look like in the future versus now, context and so forth, but we'll work through that. I saw another one come up; I'll do that as well. But yeah, these are open, I plan to address them, and I don't think there's anything of consequence in terms of progressing things along, and we are still in last call for a little while.
O
That's it. That's sort of a boring presentation, but that was, I think, the goal. Okay.
L
Hello, Lucas here. Yeah, I just had a pass through the draft. Like you say, loads of these are just really low-key; there are little things you can pick at and tidy up, but it got caught up with some other review, some other editorial stuff. Like I say, if we disagree, that's okay; you're the editor, and if you want to do it that way, that's your discretion. So I just wanted to give some kind of contrast and feedback, in case you didn't think of it that way, and I'm happy however we resolve them.
L
Yeah, so I think I might have forgotten that. So, on reflection, having some context is that there might be, or might continue to be, other drafts, or sorry, other headers, that provide this capability.
L
And so phrasing it as you've done kind of reflects the reality: that this is a way to do it, but it's not the standard, because it's not.
P
Wow, you're really tall. Jonathan Hoyland, Cloudflare. I hadn't looked at this draft, so I just looked at it now, but the security considerations are super light. Is that not absolutely terrifying?
O
I thought they were pretty well done, but you know, I wrote them, so that doesn't necessarily say much. The security considerations are meant to cover the... I don't know how to answer that, I guess.
E
Jonathan, thank you for your offer to contribute a PR.
C
So, actually, to follow up on that statement: this is one of the use cases where this and the Signatures draft are intended to work hand in hand, and we actually call this out in the Signatures draft. Something that a TLS-terminating reverse proxy can do with both of these drafts together is to do the TLS validation and then add a signature for this header to the message on the way in, which the original client obviously isn't going to add itself, so that the origin server sitting off in a back end somewhere will be able to check the reverse proxy's signature against those inputs. That's how that trust is conveyed.
C
It's a transitive process. It requires a lot of out-of-band configuration and knowledge, but it is one way to string this together.
P
So, obviously you don't design a protocol standing on one foot, but there seems to be an obvious way of trying this with exported authenticators, or some kind of certificate that's actually bound to the TLS connection, where you just pass through a proof that the terminating proxy controls this particular session and that that particular client had signed that particular session, to prevent mismatching and swapping.
A
So that was discussed some, I think, when we talked about adoption, and the place we ended up at was that the intention of this draft is to standardize, or rather describe (thank you), existing practice, and to align on a single header, because, frankly, a lot of reverse proxies and CDNs already do this. That's one of the reasons it's Informational: we don't want to put too strong a recommendation behind it, realizing that there are better solutions.
A
It's just that many felt that this would be an improvement and would be more deployable, at least in the medium term. Thank you.
O
In better words than mine; I appreciate that. I was struggling over it, but that's good context and, I think, the correct summary of how we got here and what we're trying to accomplish. Yeah.
A
Next up we have, to cap off the day, a presentation about what the wonderful people in MASQUE are doing and how it may or may not affect the world of HTTP. Sit back and get ready to enjoy a MASQUE enthusiast at work: David Schinazi.
K
Hello, everyone. David Schinazi, MASQUE enthusiast, big surprise, I know. So the chairs of HTTP reached out and asked if someone from MASQUE could give an update to the HTTP working group about whatever the hell is going on in MASQUE, because it's totally different people; sometimes mostly the same people, but not everyone knows, and some people might care and they don't know. Next slide, please.
K
Geez, that is slow. So, what is MASQUE? The acronym is Multiplexed Application Substrate over QUIC Encryption, which is quite a mouthful. We came up with the name back in 2018. It was quite unfortunate that there was then a global pandemic, for actually more reasons than this, but so now everyone associates it with a COVID mask. That's not what we were going for. Anyway.
K
Why do we care? Next slide, please.

So, just a quick history lesson here. Back in the good old days of HTTP, before there was this thing called SSL, bandwidth was expensive, and people were building cache servers, especially on other continents, where things were really, really far away. Now we call them intermediaries. And then there was the idea, in enterprise or school networks, that you would intentionally go talk to that intermediary, because it gets stuff to you faster, because it already has it cached.
K
That was in what we call Web 1.0 these days, because everyone was loading the same thing, so you could cache it, and nothing was encrypted. But then eventually SSL happened and, as usual, the security people ruined everything for everyone by making things safer and not working anymore. So people had to deploy these boxes and say: no, if you want to reach the internet, you have to go through our HTTP proxy, because that reduces our internet bill.
K
Then QUIC is a thing; QUIC becomes standardized at the IETF. Crap, QUIC runs over UDP; how could we have predicted this? How do we get that over HTTP? Next slide, please. Cue MASQUE.
K
Oh wow, that is slow. So the idea was: we can already proxy TCP; it turns out UDP is becoming even more of a thing now; and there are things that are neither TCP nor UDP, like SCTP, which people still talk about inside the IETF, and other things. So let's just add a thing for that.
K
So let's just do CONNECT for UDP, so let's call it CONNECT-UDP. And then we got to arguing for about three years on how exactly to do that, and you end up with a solution that's kind of as simple as you would imagine, where you take the UDP payload and you put it in the packet and you send it. So we got that published a few months ago, which was a really nice, fun time, and because we had to have quite a few bike sheds...
K
We decided to split the baby in half, so there's one RFC for proxying UDP in HTTP, and one RFC that it depends on, which is HTTP Datagrams, and also the capsule protocol, because we couldn't come up with a better name than "capsule". So, HTTP Datagrams: the idea is that, in addition to your regular HTTP stream, which is a concept we've had since HTTP/2, you can send datagrams for little bits of data, and that's really handy if you want to send UDP; you just put it in there.
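Concretely, the published proxying mechanism uses extended CONNECT: the client sends a request with :protocol set to "connect-udp" against a URI template naming the target. A sketch of building that request (the well-known template shown is the default from the RFC; deployments can configure their own):

```python
# Sketch of a CONNECT-UDP request's pseudo-headers. The path comes
# from expanding a URI template with the target host and port.

DEFAULT_TEMPLATE = "/.well-known/masque/udp/{target_host}/{target_port}/"

def connect_udp_path(host, port, template=DEFAULT_TEMPLATE):
    return (template
            .replace("{target_host}", host)
            .replace("{target_port}", str(port)))

def connect_udp_request(proxy_authority, host, port):
    return {
        ":method": "CONNECT",
        ":protocol": "connect-udp",   # extended CONNECT
        ":scheme": "https",
        ":authority": proxy_authority,
        ":path": connect_udp_path(host, port),
    }
```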
K
The real reason this exists is that you can then map it to a QUIC DATAGRAM frame, which doesn't get retransmitted, which is exactly what you want for CONNECT-UDP, because if you put it inside the stream, you'd get bad performance. And the capsule protocol that I mentioned was something we thought would be useful: it's useful for HTTP/1 and 2, where you don't have the QUIC DATAGRAM frame, and it's also useful for other things.
K
We wanted it for that, and we said, well, let's toss in a TLV so it's extensible; we have some use cases for it. So it allows you to send something all the way through all the intermediaries, reliably. And so those were our first deliverables here; as you've seen, they shipped a few months ago. We are now working on proxying IP in HTTP.
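The two wire shapes just described are small enough to sketch: a capsule is a type/length/value triple of QUIC variable-length integers, and an HTTP/3 datagram payload is prefixed with the quarter stream ID (the request stream's ID divided by four). A rough illustration:

```python
# Sketch: QUIC varints, the Capsule TLV, and the HTTP/3 datagram
# payload framing that ties a datagram back to its request stream.

def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer: 2 length bits + 6/14/30/62 value bits.
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | 0xC000000000000000).to_bytes(8, "big")

def encode_capsule(capsule_type: int, value: bytes) -> bytes:
    # Capsule = Type (varint), Length (varint), Value.
    return encode_varint(capsule_type) + encode_varint(len(value)) + value

def h3_datagram_payload(request_stream_id: int, http_datagram: bytes) -> bytes:
    # Client-initiated bidirectional stream IDs are multiples of 4,
    # hence the "quarter stream ID" prefix.
    return encode_varint(request_stream_id // 4) + http_datagram
```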
K
We had a whole nice, fun detour on writing a requirements document for that first, which I was kind of annoyed at Mark about when he first suggested it. But he made the point that, oh, you're not actually all agreeing on what you mean by proxying IP, and he was right. So we ended up arguing on the requirements document instead of arguing on the solution document, but at least when we went to build the solution, we knew what we were going to build.
K
So that's been taken care of; we're building the solution, and we're pretty close to done. We'll be discussing this at the MASQUE meeting, which is sometime this week; come if you're interested. But we're pretty close to done. Next slide, please.
K
So, why would you want MASQUE? One of the things we discussed at the beginning of the effort was: do we want this to be over HTTP? We all agreed it had to be on QUIC, because QUIC is the best thing ever, but then, do we put it over HTTP/3 or over other things? And we were thinking...
K
Well, sometimes it's nice to be able to run over networks that block UDP, so putting it over HTTP means you have access to HTTP/1 and 2. I really like the fact that if you put it over HTTP, it starts looking like web traffic, so it makes it really harder to block. So now you have a VPN that looks like web traffic: hard to censor. I like that. That said, some people don't like that.
K
Another part that I hadn't thought of at all, but that we realized, was that companies like Google have already put a lot of effort into having a really good HTTP server, a QUIC stack that's efficient and security-reviewed, and HTTP load balancers. If you build this over HTTP, you kind of get to reuse all of that for free. So we ended up in places where it was like, oh, can we use IPsec? And people were like, no, we'd have to review that whole new stack, and we don't want to.
K
Can you just find a way to make it work over QUIC? And there's a fun story there, where Alex and others back at Google actually implemented a VPN over QUIC way before the MASQUE effort, and that was kind of the catalyst for the whole thing; and we are now closing the loop on making that use MASQUE, which is fun. And then, when we built this, folks that like to use buzzwords like "zero trust" and "serverless" and other things that I don't understand said, oh, this is great.
K
Can I use this too? And apparently the problem they had was that they had VMs in the cloud somewhere that needed to talk to each other, but there were HTTP load balancers in the middle, and they wanted magic crypto. They had built something using CONNECT, because that was the simplest thing for them, and then they had a customer who wanted UDP, so, boom, this actually works for them. Reusing stuff is useful. Next slide.
K
So where are we going from here? The MASQUE working group was very tightly scoped, to make sure we didn't get too distracted and to get us to focus on shipping CONNECT-UDP and CONNECT-IP. But now that we're almost done with that, we're talking about re-chartering for future things, so that's also on our agenda.
K
I don't think the current plan, from what I hear from my IESG overlords, is that they want a MASQUE maintenance working group that lasts forever; we'll just re-charter for a few scoped extensions, do those, and then potentially shut it down and say future things happen in the HTTP working group. We'll have to discuss those things; it's not entirely figured out, but I guess that's what's going to happen. But what are the extensions we've been discussing that might happen before then?
K
First, we have extensions to CONNECT-UDP. CONNECT-UDP is similar to CONNECT in that it gives you a connected five-tuple, but if you want to do something like WebRTC, where you're talking to multiple hosts through one proxy, it doesn't work for that. So we have a little extension for doing that. We have an extension for doing QUIC over CONNECT-UDP; you can run QUIC over CONNECT-UDP non-extended, but Tommy had some clever ideas on how you can optimize that and make it better.
K
We have extensions at the HTTP Datagram layer. Marcus and the Ericsson folks have an idea for adding sequence numbers to catch reordering when they're doing multipath QUIC. There's an extension from Ben about having a way to figure out your path MTU through the entire chain between your client and your endpoint; that makes it easier to run protocols that, unlike QUIC, can't do it themselves. And Lucas has a draft about priorities in HTTP Datagrams that he's very excited about. Next slide, please.
K
And we have some generic, other HTTP extensions that the authors were thinking of mainly in the context of MASQUE, but that could also be applicable to other HTTP use cases; we haven't selected a venue, and all this feature work that I'm talking about is individual drafts. Tommy has something about sending more DNS information using the recently published Proxy-Status header, and Ben has documents on describing what MASQUE services an HTTP origin has, and on modernizing CONNECT itself. Next slide.
K
All right. So we are meeting this week, on Wednesday. We have a mailing list and a GitHub, like all the cool kids. If any of this sounds interesting to you, please show up; please come, and we're happy to bikeshed on all the things, as we always do. That's it. Any questions?
A
I'm prompted that, in principle, I don't have a problem with that. I'm just worried that somebody who was expecting it on Friday might be a little surprised. But, of course, there is always the mailing list, where, if we do adopt something or whatever, it'll make sense. So as long as whatever happens goes to the mailing list, I think, yeah, which it would. Yeah, I think so, sure.
B
I'm hanging in there; got my coffee. Yeah, I think that sounds fine. Okay.
K
No, we have a mask for that; there's a better way to phrase that, I'm sure. All right, hello, it's me again: David Schinazi, HTTP enthusiast. So this is a draft that was initially part of the original MASQUE proposal, so this is actually not a bad segue. People gave me very good advice at the beginning: you don't merge everything into one big castle in the sky, because otherwise it doesn't work.
K
You split it up into small bits that make sense, and so this was one of those, which then went dormant as we were all focusing on MASQUE itself. Now people who were interested in it reached out, and I'm trying to resurrect it and see where we want to go with it. So now we have multiple co-authors: this work is joint with David Oliver, who I think is attending virtually this time, and Jonathan, who's right there. Next slide, please.
K
So the draft initially was called transport authentication, and the reason for that was that the original MASQUE proposal was, I think the term is, a monstrosity in terms of HTTP semantics, because it took over the whole connection and did things that were very evil by HTTP standards. The new version of MASQUE, as we talked about, fits into HTTP semantics; but because of this, the old version needed a way to authenticate the whole transport.
K
Now we don't need that anymore, so the draft has been kind of completely rewritten. The cryptography bits are still in there, but now it fits in HTTP semantics, similar to how MASQUE does. Next slide.
K
So the motivation we have is: we want the client to authenticate to the server, to the origin, as is commonly done with HTTP authentication.
K
We want to use asymmetric cryptography, because there are some use cases where you want to be able to share the access list with the public keys across multiple origins that don't necessarily trust each other, and you want to avoid having an origin be able to impersonate one of the clients. And then the third requirement is that we want the server to hide the fact that it serves authenticated resources. So what do I mean by that?
K
If you try to get this resource, you don't want the server to send you a 401. The reason for that is: let's say you have your personal website, but you want to offer some MASQUE services to authenticated clients. You don't want someone to be able to probe your server and go, "Oh no, no, that offers MASQUE, that's blocked on my network, bang, bang!"
K
So those are the three requirements. I'm going to dive a bit into the solution space in this presentation, but the goal, or the interest, of this draft is really to see if there is interest in those requirements and that motivation, because everything about the solution is completely up for debate or any discussion. This is just what I care about personally with this proposal. Next slide.
K
Yes, so not that it is serving... so yeah, let me rephrase that; I guess that's not very well written. Let's say that the server offers a resource to only authenticated clients. That's a common thing, and an unauthenticated client must not be able to find out whether that's the case or not. So it can't probe the server to find out that the resource is there; you're just not allowed to see it.
K
So why don't we already have this in our very large suite of HTTP authentication methods? If you're doing cryptography and you're using a signature, you need something to sign, and you need that to be fresh; otherwise things are replayable. So in common protocols today, the way we do that is the server sends a nonce and the client signs...
K
...that nonce, sends the signature back, and the server goes, "Great, that's fresh." But that breaks our requirement of the server not letting on that it does this, because if it sends a nonce, you go, "Oh, I got a nonce from you; I know you support this scheme." HOBA is a mechanism that does that, for example, that's already standardized, but it leaks the fact that the server does this. Next slide, please.
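The conventional challenge-response flow described here can be sketched as follows (a generic illustration only, with an HMAC standing in for a signature, and all function names hypothetical). The key observation is that the server must emit the nonce first, and that 401-plus-nonce is itself what leaks scheme support:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    # Server: a fresh random nonce, sent to the client in a 401.
    # Emitting this challenge is what reveals that the server
    # supports the authentication scheme at all.
    return os.urandom(32)

def answer_challenge(key: bytes, nonce: bytes) -> bytes:
    # Client: prove possession of the key over the fresh nonce.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def check_answer(key: bytes, nonce: bytes, proof: bytes) -> bool:
    # Server: the nonce guarantees freshness, so a captured proof
    # cannot be replayed against a later challenge.
    return hmac.compare_digest(answer_challenge(key, nonce), proof)
```

Freshness comes entirely from the server-chosen nonce, which is why this shape cannot satisfy the "don't reveal that you do authentication" requirement.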
K
So the idea we had, where I think the clever part came from Chris Wood, is: if you use TLS key exporters, that gives you a nonce, because both the client and the server have fed information into the TLS key exchange. So you know that a key exporter is fresh.
K
Channel binding, I was blanking on the name, uses that as well. We're not doing channel binding, which is very different, but it's that same idea of using a key exporter not to generate a key, but just to use it as a nonce. That doesn't leak any information, because either side can export the key locally, and it can't be replayed either.
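The exporter-as-nonce idea can be sketched roughly like this (a minimal illustration, not the draft's actual construction: the HMAC mode, key handling, and function names are all assumptions, and in a real stack `exporter_output` would come from the TLS library's keying-material export API rather than being passed in):

```python
import base64
import hashlib
import hmac

def build_proof(exporter_output: bytes, client_key: bytes) -> str:
    # The TLS key-exporter output is unique and fresh per connection,
    # because both peers contributed to the key exchange. Using it as
    # the MAC input binds the proof to this connection, so it cannot
    # be replayed on another one, and no server-sent nonce is needed.
    mac = hmac.new(client_key, exporter_output, hashlib.sha256).digest()
    return base64.b64encode(mac).decode("ascii")

def verify_proof(exporter_output: bytes, client_key: bytes, proof: str) -> bool:
    # The server derives the same exporter output locally and compares.
    return hmac.compare_digest(build_proof(exporter_output, client_key), proof)
```

Because each peer computes the exporter output locally, nothing about the scheme appears on the wire until the client volunteers the proof.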
D
I don't know if you've thought about this one, but it's possible that in certain contexts, say web browsers, an adversary might be in a position to, for example, make requests. This only prevents the authenticator from being moved to another connection; it doesn't prevent it from being reused on the same connection.
K
This is the attack we discussed at the last IETF, and so Jonathan and I thought about it for a bit, and that's a real attack. If you leak that you sent a header with this on one request, you could put that on another request. But the threat model there is an attacker that is already, like, inside the TLS, and so we decided that wasn't practical in practice. So we added a paragraph to the security considerations saying that was out of scope. But it's a real attack.
K
It's not the end of the world, but we thought that in practice it wasn't a problem. So we documented that if this is part of your threat model, then don't use this.
D
You could bind to things in the request so that it's not portable between requests, for instance the URL. I don't want to pull too much in, but yeah.
K
So the advantage, if you have the same thing, is that it then gets compressed by HPACK or QPACK, and we thought that was a nicer benefit than this attack. I mean, we can totally go either way; both work. It's a, you know, security-versus-performance trade-off, as we often have. Alex, sorry.
H
Yeah, Alex from Google. I just wanted to add that I also previously brought up a similar thing around the fact that this was a connection-oriented export, and it really did weird things around streams. So I think if we incorporated Martin's idea about making it request- or stream-oriented somehow, it would also make it clear that this wasn't a shared resource. So maybe somewhat repeatable, but not 100% repeatable; the URL is definitely a nice one that would still get you some of the compression. Yeah.
K
Oh boy, come on... there we go. So this is kind of a description of the solution. We renamed the draft to unprompted authentication. I really wanted to call it masked authentication, but I was told that wasn't funny.
K
So we went with this; at least it's clear about what it is. The server doesn't tell the client that it needs to authenticate; the client does it without being prompted, and it indicates a single request. You send what kind of authentication, so signature or HMAC, and then which algorithm, like which signature algorithm or hash algorithm you're using. So initially we used the OIDs, and MT found that gross, so I grabbed something from the IANA registry, and I managed to get that wrong too.
K
But that's fixable; I got some good comments in the GitHub, and that's easily fixed. And then, you know, a username and the proof, base64 encoded. Next slide, please.
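The header layout being described might look something like the following sketch (the scheme name and the parameter keys `u`, `a`, and `p` are purely illustrative and not taken from the draft; only the general shape, a Structured-Fields-style value carrying the authentication kind, an algorithm identifier, a user ID, and a base64 proof, comes from the talk):

```python
import base64

def unprompted_auth_header(user_id: str, algorithm: int, proof: bytes) -> str:
    # Structured Fields delimit byte sequences with colons; the
    # algorithm identifier would be an integer from an IANA registry
    # rather than an OID. All names here are hypothetical.
    proof_b64 = base64.b64encode(proof).decode("ascii")
    return f'Signature u="{user_id}", a={algorithm}, p=:{proof_b64}:'
```

The client would send this unprompted on a single request, with no 401 challenge preceding it.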
K
I have a slide for that. It is, unless I messed it up...
K
I'll get back to that. We kind of discussed this with the client certs idea: similarly, this can't be transparently forwarded, because it pertains to the TLS connection, so the intermediary checks it and then tells upstream what the result was. That part we've declared out of scope. You could build something like the previous presentation if we wanted to, but I don't have a use case for that, so we decided it's out of scope of this draft for now, and it can be built separately. Next slide.
K
So, what we changed since last time, because we got some pretty good feedback when we presented this: we renamed the draft; we removed the OIDs; we added security considerations to discuss the issue that we just talked about; Jonathan joined as co-author to make sure that I stop shooting my toes off security-wise; and we switched to Structured Fields. Maybe I got it wrong again; I'm an HTTP enthusiast, not an expert, but I tried to switch to Structured Fields because I hear they're all the rage.
K
I did a bunch of editorial work to try to make the document better. Next slide, please.
K
So we have an independent implementation by the Guardian Project, so multiple entities are kind of interested in this. We're wondering: is the HTTPBIS working group interested in seeing this progress here? Should we take it elsewhere? What do people think? Is this completely insane? Is this a good idea? Do other people find it useful? Any thoughts, questions? This is my last slide, so come on up, Mike.
E
Mike Bishop. I do think it's useful. I feel like you're kind of re-implementing some of what's already in Exported Authenticators, so you might be able to just get one of those and then base64-encode it. But I don't know what the gap is on that; it might be something to explore.
K
So Exported Authenticators don't have that property, I think, from memory, and someone can correct me if I'm wrong: you need that kind of exchange, which kind of leaks that the server does this.
E
Yeah, okay, it has to be bound to a request then. Okay, all right; so building many of the same mechanics without binding to that request should be fine.
E
But more broadly, yes, I think this is of interest. It's a useful property, and there are already HTTP servers that will refuse to admit a resource exists unless you're authenticated; they just have some other endpoint that you auth to first. So this would be a nice improvement in security for them.
K
Thank you. Ben.
G
Hi, sorry, my network's a little unstable. So I definitely want this; I've even sort of been involved with deploying a system that attempts to achieve this property. But I still don't understand the use case that motivates this design.
G
Thanks for helping me understand a bigger piece of it; I'm closer, but the thing that I'm most confused about is this: if we assume that there is no indication that a given origin supports this, then the client presumably must be configured out of band with information to know that it can use this mechanism with this origin.
G
But any mechanism that could configure this client to know that about this origin could have just provided this client with a per-origin symmetric secret, a password, that the client would send unprompted (unprompted authentication, yes) to that server. But this would not authenticate the client, right? This would just reveal the client as being among the set of clients that knows this information about the server, and then the server can respond with a challenge, and we can go through standard challenge-response authentication. So why didn't you do that?
K
So it's the general question about symmetric versus asymmetric cryptography, and yeah, pretty much everything that's done with asymmetric you could do with symmetric and N keys. But then you kind of have to tie the list of potential origins to the keys, and you get a bad scaling problem, whereas here you can get the keys once and then, later over time, get the origins. So it gives you more flexibility. I mean, there's... yeah.
G
I'm not convinced that it actually provides that kind of efficiency improvement in this case, because again, you need to be provisioned out of band with this information about each origin that you could potentially contact. So the clients are already being provided with order-N information here; adding passwords on top of that doesn't increase the order of information...
G
...that's required to be shared with clients. And if you want this kind of shared cross-origin authentication of clients, where clients use a single credential across all of these origins, you can still do that; you just no longer have to do it within an unprompted context. For example, you can do it in a HOBA context, and then you don't have to worry about this thing of "I'm bound to the TLS session, I can't traverse intermediaries, I'm doing channel binding."
K
G
Well, so I'm telling you that you don't need it. Like the Guardian Project: I'm very familiar with their use case. The system I'm describing is their system; that's how it already works, that's how the bridge distribution system is already defined. So I think, look, if there's an advantage here, I could argue that it saves a round trip: it avoids the need to sort of pre-authenticate and then actually authenticate as a second step.
G
You can do it in one go. Maybe that's important in some use case, but I'm not actually aware, again, of a use case where that efficiency gain would matter. So I think that basically there's a more flexible, less complicated solution here. I think we should solve the problem, but I'd like to see this a little more strongly motivated for this design.
K
G
Well, okay, yes, but working group work, you know, has to ultimately proceed by consensus about this. So that's, you know, also for the working group to consider, whether this is the right approach.
J
Chris. It's not completely clear to me what is being done here that couldn't be done with Token Binding.
J
K
So from memory, it's been a while since I've looked into that. Well, I mean, one problem is it's more complicated machinery and it's at the TLS layer. Well, of course this is at the TLS layer too, but I don't know if it has the property of client-speaks-first, or of being directly tied to a request.
J
So the standard way of doing Token Binding does involve a negotiation at the TLS layer, but I think that could be omitted, and you'd get the property that you have here in effectively the same way, where the client would derive the EKM from the TLS connection and then create a signature of it and put it in the Token...
K
...Binding header. So I would have to double-check; from memory, there was a lot more involved in Token Binding and its interactions with the TLS layer that made it quite a more complicated beast, and it would be harder to just have a little token that you send. That's a reasonable question; let me do some research and I can give you a better answer, because it's been a while since I've looked into Token Binding.
A
That would be great, yeah. So, without making any commitments, or saying that yes, we'll adopt something, or that you're going to implement it: just a show of hands, online as well as in the room. I don't know how it works online, but we'll sort stuff out. Who's interested in continuing this discussion, in that there might be something here?
A
K
Not really. I mean, it would be nice, because we have a use case. I mean, you know, you can ship a thing without having a standard, but it would be a nice bow tie to the MASQUE story as well, to have all these things wrapped up.
A
Yeah, all right, thank you. We've got five minutes left, so unless we have any other business, I think we can close for today, and we'll see everyone on Friday. Thank you.