From YouTube: IETF114-HTTPBIS-20220728-1730
Description
HTTPBIS meeting session at IETF114
2022/07/28 1730
https://datatracker.ietf.org/meeting/114/proceedings/
A: That's fine! By the way, I don't really care about the microphone on the pink X, because the camera will look at you. Yeah — yep, I'll call you up. Yeah, exactly. And the cue for that is using the little phone thing to raise your hand, and then you get picked up in the client. So questions come from that mic, and the presenters have this mic.
A: All righty — we do have a tight agenda, so let's get started. Welcome to the meeting of HTTP at IETF 114. I think this is the first semi-in-person meeting at an actual IETF meeting we've had in a while. Thank you, everyone, for all of your attendance at the various virtual interims we've had. We've made a lot of great progress, but it's also good to see faces here and on Meetecho, and hopefully we continue to make good progress today.
A: Also, in the IETF we have codes of conduct and general rules of trying to be civil and friendly to each other, and this is a great group at doing that. So, you know, as we are going through our work today and hearing about some new work, let's make sure that we are welcoming to it and give good, constructive feedback. And then, for the people who are on site, masks are required — you do need a KN95-equivalent mask. If you don't have one of those, you can get them at the registration desk.
B: Sorry — if your experience of this group is not that it's a great group for that, if you've ever experienced harassment or you feel like it's not a great environment, please do talk to Tommy and me. We do want to make sure that it's a good experience for everyone, and if you're not comfortable doing that, there are other resources for you as well.
A: Absolutely, yeah — thank you. All right, so here's our agenda for today. We are currently in the administrivia. We have several active drafts that we are going to catch up on, along with a summary of a side meeting that occurred earlier this week; and then we have four newer proposals that are not things we have adopted into this working group, but that have been discussed on-list.
A: That seems reasonable to me — any objection, Mark? That's fine, yeah. If you're done going up and down the stairs, Eric — I think you will take notes. Thank you for taking notes; I appreciate it. Does anyone want to officially Jabber-scribe?
A: Thank you, David — great. Let us begin. Anything else, Mark, before we get into it? Great, all right. I'm going to switch slides to go to the slides for the summary of the side meeting on Alt-Svc. Mike, do you want to talk us through this? Oh — did you want to share it?
I: Let's try this. Okay — I'm not coughing. So: key takeaways from our side meeting yesterday.
I: Basically, Alt-Svc is great for being able to narrowly tailor information to one particular class of users, but a lot of what we're using it for right now is really about protocol capabilities: do I support h3? Do I support h2? Do I have multiple endpoints? All of that can go in the DNS, and SVCB — the HTTPS record — does a great job of that; we should just use it. And the main thing that the server knows when you're talking to it is whether it is not the right place for you to be talking.
I: So what we're really trying to cover is the case where the server thinks it's probably not the best endpoint for you to be using, but it's willing to continue serving your requests if you need it to. We want some degree of stickiness there: we want that recommendation to last until the network changes or the server config changes. Now, part of the problem we have with Alt-Svc today is that the client only kind of knows when the network changes, and the client does not know when the server changes.
I: Do we build something entirely new and adopt a draft there, or do we try and shove it into Alt-Svc as it currently exists? We could hijack an ALPN value to serve as a sentinel — something like use-this-one equals the hostname we care about — or we could be really crazy and just drop a hostname in the Alt-Svc field.
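A rough sketch of how a client might read entries like the sentinel idea above out of an Alt-Svc field value. The `use-this-one` token and the parsing shortcuts here are purely illustrative — real Alt-Svc parsing per RFC 7838 is more involved (quoting, percent-encoding, parameters):

```python
def parse_alt_svc(header):
    """Minimal Alt-Svc parser sketch: returns (protocol-id, alt-authority)
    pairs. Assumes well-formed input and ignores parameters like ma=."""
    entries = []
    for part in header.split(","):
        # Each entry looks like: proto="authority"; param=..., so take the
        # piece before the first ';' and split it on '='.
        first = part.split(";")[0].strip()
        proto, _, authority = first.partition("=")
        entries.append((proto.strip(), authority.strip().strip('"')))
    return entries
```

With input like `'h3=":443", use-this-one="other.example"'`, this yields pairs a sentinel-aware client could inspect — which is roughly the shape of the "hijack an ALPN value" option being floated.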
C: David Schinazi, SVCB enthusiast. I apologize for missing this side meeting — it happened to conflict with sleep, and the sleep working group had to be prioritized; sorry about that. It seems like what this is saying is that the use of QUIC is dependent on the HTTPS record now — is that a correct understanding? Sorry — on the HTTPS resource record in the DNS.
C: Which is how that works today on the web, right? Yeah. So, unfortunately, most OS APIs don't let you query HTTPS resource records. I know the Apple one does, because they're on top of things, but getaddrinfo doesn't, for example, and there's no POSIX equivalent that does this.
C: I couldn't do this for Chrome, because while we're trying to move all deployments of Chrome to use Chrome's custom DNS client, which does everything itself — that one knows how to query for those records, because we're also on top of things — we can't use that on all OSes, especially when there are OSes where things are complicated, like when there's a VPN.
C: Okay, screw it, this is too hard — we fall back to getaddrinfo, and then we would lose QUIC, so that would be a non-starter for us. I would say I would love to be part of this discussion, because I think that's an important feature for us. Even though I love the idea of putting everything in the DNS here, I don't think, short term, it's necessarily the right plan.
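The fallback David describes can be sketched as a small decision function. The record shape below is invented for illustration only; real HTTPS RR contents are defined by the SVCB/HTTPS record specification:

```python
def select_protocol(https_records):
    """Pick a transport given (hypothetical) HTTPS RR lookup results.

    https_records: list of dicts like {"alpn": ["h3", "h2"]}, or None when
    the platform API (e.g. plain getaddrinfo) cannot query HTTPS records.
    """
    if https_records is not None:
        for record in https_records:
            if "h3" in record.get("alpn", []):
                return "h3"  # QUIC advertised via the DNS record
        return "h2"
    # getaddrinfo fallback: addresses only, no ALPN hints, so QUIC
    # discovery is lost and we stay on TCP-based HTTP/2.
    return "h2"
```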
I: I think that's a really good point about compatibility in the short term. I think that almost sounds like an argument for being able to shoehorn it into the existing Alt-Svc, such that you could just say: I'm going to also include some legacy Alt-Svc entries, and I accept the problems if I have to use one of those; but if I'm on a platform where I can't query SVCB — yeah, that's what I've got to do.
G: Yeah — Martin Thomson, just briefly. I understand the implications of having to implement this on a vast number of platforms where DNS is, let's just say, sketchy. But I think the future is bigger than the past, and we really do want to move people onto HTTPS records for HTTP. So I'd be somewhat comfortable doing this on the understanding that some people would not be able to benefit from it — and that's always been the case for Alt-Svc anyway.
I: All right — thank you. So I think we're done with that piece. Yep. Do I share slides again? I don't have slides for that one. Basically, where we stand there is: I have, I believe, resolved the one open issue on the document. Yeah.
C: David Schinazi. Another option would be to do the working group last call — I think that would be useful, because it would get everyone to read it — and then park it there; don't send it to the IESG. I mean, that tells us that, barring any major news that comes about Alt-Svc, this will go forward, but it's not open season: the working group last call already happened, and this is pretty much done.
A: Great, okay. So, moving right along, I believe cookies is next — so, Steven, if you want to come up.
J: We've also closed out several more issues since then, working towards getting us down to zero in-scope issues. Oops — next slide, please. All right, I'll get this right. Thank you.
J: As for the current issue status, we have four open issues. I've split these into two parts. Currently in scope is issue 2185, "cookie octet reality check." The one-sentence summary is that the spec's current structure is confusing and prone to incorrect implementation — specifically, UAs that are accidentally implementing the more strict server syntax rather than the more permissive UA requirements.
J: A little while ago, the spec was updated such that SameSite needed to take into account redirect chains, with every site in the chain needing to be same-site. Both Chrome and Firefox attempted to implement this and saw a ton of sites break, so we've backed it out, and now we're trying to figure out what to do with it — it's under the "maybe we should defer these" heading; this is my opinion.
J: Finally, there are about 14 additional deferred issues that are not in scope for 6265bis any longer. Coming back to the in-scope issues, I just want to talk a little bit more about those. For 2185, one of the proposed changes is to merge the UA and server syntax — that is to say, allow servers to create cookies that adhere to the more permissive UA requirements rather than their existing, more strict requirements.
J: If the consensus is the former — merging the syntaxes — then I suggest that this issue be deferred, because that's a very significant redesign of the spec. If the consensus is the latter — the rephrasing — then I think that's probably in scope for 6265bis. And then for 2104, the redirect issue: we're currently considering this a blocking issue for 6265bis.
G: Yeah, just a simple comment on 1939 there. I think the WHATWG URL spec has a pretty simple set of rules for distinguishing between domain names and IP addresses.
G: I put what the regular expression looks like in the chat — it's pretty straightforward to explain. Even so, I would suggest that we just pull in a definition, either by reference or by writing it out; it's probably one paragraph.
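As a hedged illustration of the kind of rule Martin is pointing at: the WHATWG URL spec treats a host as an IPv4 address when its last dot-separated label "ends in a number" (decimal, or a 0x-prefixed hex literal). The regex below is a simplified sketch of that check, not the spec's exact grammar:

```python
import re

# A last label that is all digits, or a 0x/0X hex literal, marks the host
# as an IPv4 candidate under the WHATWG-style "ends in a number" check.
_NUMERIC_LABEL = re.compile(r"^(\d+|0[xX][0-9a-fA-F]*)$")

def treat_as_ipv4(host):
    # Trailing-dot hosts ("example.com.") are normalized before checking
    # the final label.
    labels = host.rstrip(".").split(".")
    return bool(labels[-1]) and bool(_NUMERIC_LABEL.match(labels[-1]))
```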
P: This is really just a clarifying question. When you say that you thought merging the syntax would be essentially a great deal of work compared to rephrasing — I was wondering: has anybody taken a run at it yet, so that you kind of know how hairy it is, or is this your impression from eyeballing it?
J: So, what I meant by "work" was not so much changing the spec itself; it's more that it has the potential for a lot of side effects, and we would need time to sit and see how that actually played out, rather than going ahead and moving into last call. I'd really feel better if we did that — if the spec had time to sort of wait and see, for servers to actually implement this new syntax, and what the results could be.
B: Okay, I'm a little confused, then, because the proposal that I made — full transparency — was to use the obs-text construct from HTTP core, which does exactly what's being talked about here. It effectively obsoletes a particular range of characters, so I don't think it would have any impact on implementations.
B: It is just an editorial change, and I'd be happy to do a PR for that — or at least attempt one — if that's the issue.
J: Yeah, I did see your change, Mark, and I'm happy to talk more offline. My big concern there was that we've had evidence in the past of implementers just kind of skimming the spec and implementing whatever looked right, and I'm not 100% confident that they would understand what the obsoleting means — they'd see a grammar and they would implement it.
B: My experience is that if the goal is to defend against people who don't read, there's very little we can do by writing things. I think that alignment with the core specs would have the nice property of reducing the amount of new stuff they have to understand.
J: Yeah — and I said this in the issue, but I like your idea more. I'm sorry I didn't list it in my alternate solutions, and that's what I'm open to discussing.
B: Sure. So we basically have one issue open on retrofit, which is the date format — issue 2162. The heart of this issue is basically — excuse me — whether we take this opportunity, since we're retrofitting the Date header, Last-Modified, and Expires, which all contain dates, to retrofit a new structured type in, so that they present in HTTP/1.1 — or in any kind of textual representation of the structured header — as something human-readable and human-friendly; or whether we keep the notion that we currently have in retrofit, of converting them into an integer and presenting them as an integer.
B: So, you know, seconds since the epoch — which most people have trouble reading. I see both sides of this argument. If I can try to characterize them: having a human-friendly presentation is better for developers doing debugging, and it's easier to make sure that it's presented correctly in tools; whereas an integer representation is, at least in the textual form, easier to parse, with less overhead to parse.
B: …is a nice thing to do. What I think it comes down to is that if you have tools presenting your messages, and they understand that a particular header has a particular type, then it doesn't matter whether we have a special type or not. They can say: oh look, I know this is Expires, and I know that that's a date; look, there's an integer, but I'll show the user a human-friendly date, to be nice about it.
B: The problem is that if they don't recognize that it's a date, then you just see the integer; whereas if we have a date type, then it can be automatic — you can have a tool that does the right thing, it's hidden from the user, and it happens for all dates, not just the ones the tool happens to know about. I think that's what it comes down to. There are some concerns about—
B: —if we don't ever have binary structured types — which, of course, we don't: we don't have anything adopted yet there, and we don't have any market adoption there yet — then, of course, we're stuck with the textual representation, which does have a little bit more parsing overhead, and a little bit more overhead on the wire.
B: We could just use the ISO format, but I think we do need to make a decision, because this is our one opportunity — if we do have a date type — to get it used for the headers we're covering here. There's also a discussion in HTTPAPI about headers that they're minting that might have dates in them.
B: That's where we're at; we've gone back and forth. I'd love to get a sense of the room of what people think. Otherwise, I think we just need to take it to the list and hammer it out — but I'm not quite sure how we get to consensus here.
G: That's how you tell. So I tend to think that the machine-readable thing — the integer — is probably where I would head, not just because I like the ease of processing and that sort of thing, but because even the dates aren't particularly helpful in a lot of cases anyway. I have to deal with time-zone changes and all that sort of thing; often it takes me 10 minutes to work that out, and the tools are better at that sort of thing anyway.
G: So I do like the idea of having an explicit date type — I think that part is good, and having an indicator there that a tool can pick up on is pretty valuable. If we do go for the other one, I don't really care that much; I mean, I'm not completely bent out of shape.
G: If we go for a profile of the ISO dates — it's in RFC 3339 — the whole date thing is complicated, and we don't need to do the full spectrum of options here, so it could be quite simple.
A: Great. I got in the queue just as an individual: I would agree with Martin that it seems to be simpler to do the integer — also if we're thinking forward to when we do want the binary representations of things.
B: Oh — so, just to be clear, right now the proposal is not to account for leap seconds, though? Yeah.
B: Okay, I'm happy to go with using an integer. I just want to make sure we're doing that aware that consumers like HTTPAPI will probably just go back to using string headers, not structured headers — as long as we're okay with that.
D: Yeah, I don't have strong feelings about this, but I do think that it is more developer-friendly to have a string-based representation that is vaguely human-readable. Yes, I understand that not everybody — especially me — enjoys doing the calculations and things like that, but it is much easier to, say, compare two values that are date strings — at least speaking anecdotally — as opposed to two very large integers, trying to figure out what the difference is between those two, again as a human and as a developer.
D: I'll also point out this notion of having it be an integer in the underlying data model while having a string representation.
D: For example, booleans: it's a question mark followed by a one or a zero, but underlying that is a single bit. And so, to me, it feels better to define it as an integer data model, but with a string representation with a clear format.
B: Sorry — flow control. We could do that. Personally, if we believe that binary structured headers are eventually going to be a thing — which I'm still on the fence about, but I like to live in hope — then that would give us a wire representation—
B: —that is the integer, which has those nice properties; but when you convert it to something that you need to show to humans, in Wireshark or whatever, then you get the nice human-readable properties. Which is, you know, what I'm getting from the people who are using HTTP rather than implementing HTTP.
H: Eric Kinnear, Apple. I would say that we provide some tools — and there are a lot of other tools that we don't provide — but at least many of the tools that the Apple ecosystem uses have lots of places where there's something that is "ugly," and I say that with quotes, for humans, but makes a lot of sense for a wire format. And we're pretty used to saying: hey, nobody wants to look at this massive integer and try to determine if that's yesterday or three years ago.
A: Okay. I mean, it does sound like no one would be unable to handle either case — it seems like there are no blocking opinions in either direction. I guess, if there is a blocking opinion in a direction, please speak up now; but these all seem to be preferences for what is easier, and different people have different perspectives on what is convenient for them.
B: One thing I might do is talk to HTTPAPI and show them the PR, and if they indicate that they would at least look at it for future fields, then that's good information; if they're not interested, then maybe there's less of a point.
A: It sounds like David Benjamin on the chat was strongly preferring the integer.
A: With an "@" in front of it — so, cool, okay.
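For concreteness, the two shapes under discussion — an integer of seconds since the epoch, shown here with the "@" marker just mentioned, versus a human-readable RFC 3339-style rendering that a type-aware tool could display — might look like this. This is a sketch of the idea as discussed, not final retrofit syntax:

```python
from datetime import datetime, timezone

def sf_date(epoch_seconds):
    # Integer-based date: seconds since the Unix epoch, with an "@" prefix
    # distinguishing the date type from a plain structured-field integer.
    return f"@{epoch_seconds}"

def display_date(epoch_seconds):
    # What a tool that recognizes the date type could show humans instead.
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
```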
A: Should we move on from retrofit, Mark? Alrighty.
A: Next up, we have signatures, and we do have slides. Justin, did you want to share your own slides? If you do, they should be pre-loaded, so you should just click the "share pre-loaded slides" button. Sure — and I will grant access.
D: All right — an update on message signatures. My name is Justin Richer; I think I saw Annabelle in the participants list as well, so she can type in as needed.
D: I'm not going to go into depth on how this all works, because we've presented this a bunch of times. Basically, it's a detached signature mechanism for HTTP messages, designed to work across HTTP versions and in all of the sort of weird, chopped-up ways that HTTP exists out in the wild. This is for applications that really can't rely on whole-message encapsulation, like you would get with something like OHAI or anything like that.
D: And then that gives you your signature output, which consists of both the list of things that got signed and the signature itself. You send both of those as message headers, and that is how you get a signed message. Now, the message component — we had to invent a bunch of terminology for this, and I'm going to be using it throughout the presentation.
D: Yes, Julian — fields, not headers; I'm sorry, I work in the sec area. I appreciate the keeping-me-honest. Yes, these are fields. Anyway.
D: This is an example message component which, in this specific example, is a dictionary-formatted structured field. The bits that make this up are the component name — which, in this case, is the name of the field dropped to lower case — and the component identifier, which is the component name plus any parameters that are attached to that particular instantiation of it.
D: Now, this is an example from a field — sort of a more esoteric example, to show what the parameters and stuff look like — but there are message components that are also based on the larger context of the message. So you can sign things like the method and target URI for a request, or you can sign the status code for a response — all sorts of stuff like that. All right: the signature process is that you take in the message, your key material, and the things that you're required to sign.
D: —and the verification process is that you take in the message and those parameters, and then you regenerate that same signature base in the new context that the verifier has. Since HTTP messages can be transformed in a bunch of expected ways on the way through, this is where this process really starts to shine: a signature can be robust across those transformations. All right.
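The "signature base" that both sides rebuild can be pictured as a line-oriented string of covered component identifiers and values, with the signature parameters appended last. This is an illustrative simplification of the draft's construction, not its exact serialization rules:

```python
def signature_base(components, params):
    """Build a signature-base string from covered components.

    components: ordered (identifier, value) pairs, with identifiers already
      serialized (lowercased, quoted), e.g. ('"@method"', 'GET').
    params: the serialized signature-params value.
    """
    lines = [f"{ident}: {value}" for ident, value in components]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)
```

Both signer and verifier derive this string locally from the message; as noted in the discussion, the base itself is never sent on the wire.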
D: So, what have we been up to? We've added a few more security and privacy considerations — how to deal with weird stuff like Set-Cookie, which doesn't behave like other HTTP fields. We've clarified how this relates to the Digest draft, which we'll be hearing about later today, I think. And mostly what we did was a lot of cleanup: clarifying those terms that I was just using, making sure we're using the right terms throughout, and expanding our examples.
D: The two functional changes are sort of additions for advanced cases. So: the req flag, marking something as coming from the request when you're signing a response — oh, thanks, Lucas! No!
D: So when you're signing a response message, it's always in the context of some type of request message. So you can actually sign parts of the request and include those in the response, and this is being used for non-repudiation in—
D: —deployments of this out in the wild. We also added the byte-sequence flag, mostly to deal with problematic fields like Set-Cookie, which don't follow the list syntax and could be used to leverage some very esoteric but still possible attacks against this. So when you're doing things like Set-Cookie, you basically binary-encode it and then wrap that into your signature that way. This has been being implemented all over the place.
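The byte-sequence wrapping can be pictured as base64-wrapping the raw field bytes into a structured-field byte sequence before they enter the signature base — a sketch of the idea, not the draft's normative procedure:

```python
import base64

def wrap_field_value(raw_bytes):
    # Wrap the raw field bytes as a structured-field byte sequence
    # (colon-delimited base64), so a list-unsafe field like Set-Cookie is
    # covered by the signature as one opaque, unambiguous value.
    return ":" + base64.b64encode(raw_bytes).decode("ascii") + ":"
```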
D: A lot of people are taking their updates of the old individual drafts — especially the Cavage draft that was floating around as an individual draft for years — and updating them against—
D: —what's new from the working group. This is just the list that I personally know of — I know for a fact that there are more out there, because I get people poking me about this from time to time still — but we're seeing people actually building and, importantly, using this, and that's been really, really good to see.
D: I've also been presenting this work all over the place. I've got a talk that includes Tom Holland and drag.
D: Unfortunately, that is not this talk today, but it is definitely an interesting way to talk to people about how weird HTTP actually is — so talk to me sometime if you want a pointer to that. The GNAP working group is making use of HTTP message signing as the primary signing and key-proofing mechanism inside of the GNAP spec.
D: So this is the GNAP spec, for those that don't know, and that is a direct reference to this work. In full disclosure, I'm also lead editor of the GNAP specification in that working group, but it's not a coincidence that these two things fit together well. An interesting one that I'm not directly involved in: the financial-grade API draft specification from the OpenID Foundation is also referencing this, using message signatures specifically for non-repudiation for API calls.
D: This is not token binding, specifically; it is, in fact, the API signing a response that says: no, really, this is the response to that particular request, and you can prove it based on this signature. So — a really interesting use case, a bit outside of what brought me into trying to define this stuff, but there we go. httpsig.org is still up and running; it's been updated a little bit. Please go feel free to play around with that.
D: I actually generated the example at the beginning of this presentation using httpsig.org, so the tooling in there is pretty fun. And the Python library that runs the actual parsing and crypto stuff — sorry, I'll answer the chat question in a moment — the Python library that runs this is actually now published on PyPI. I think "http-sig-py" is what I had to name it, and you can pull that down.
D: If you want — clarification question on how it works: the normalized fields are not sent across the wire; they have to be re-normalized by the verifier. What is sent — and I'll go back to this example—
D: —is that you get a list of these content identifiers, and these give you enough information to recreate the signature base, as long as those parts of the message that you're signing have not actually changed. So you do not actually send the signature base at all when you're doing this. Okay.
D: So, at this point the authors believe we're actually pretty ready for working group last call. The core has been stable for a very long time; the stuff that we've added recently has been mostly refinements, corner cases, and a lot of editorial work. We're seeing more and more implementations, and we're seeing implementations that interop with each other.
D: Other work is depending on this, and there's only a handful of issues in the issue tracker today that we think we can close pretty quickly — either without action, just before working group last call, if we can get some good feedback, or as part of working group last call. I was going to go through that handful of issues really quickly here.
D: Okay, fantastic — I refreshed the issue tracker too early in this call, then. So: there's a proposal for something called "signature context" to be added.
D: This is all written up under issue 2133. The editors have actually written pull request 2222, with a proposal to add an optional parameter with descriptions of its use, but not to require its use, as the original proposal required — or requested.
D: Please go read through that; the editors are currently leaning towards what we have in that PR as the resolution to this issue. For 2134, we're not exactly sure what to do with caching — whether we need more text in there. There are some weird states things can get into if you're getting signed responses from a cache: like, is that as meaningful? If you're responding to a signed request with a cached response, are there gotchas there that we should be aware of? We have some text in there.
D: We don't know if it's good enough — we are not the experts on this — so please help us out with that, even if it's just saying: hey, I read through things, and it looks okay. And then, finally, Lucas raised the issue of server push. We think that the way we're defining message context for deriving all of the content is clear enough that server-push messages in HTTP/2 and HTTP/3 should still be fine, but we just don't know.
D: We'd like to get other people to take a look at this. So we think that these last two might need a little bit of text — or might not; we just actually don't know. And that's all I've got. Like I said, we think that this is in pretty good shape and hopefully ready for working group last call.
C: Hi — David Schinazi. So, thanks for the clarification; that was super helpful. This worries me, because parsing HTTP headers — and/or fields, if you care — yes—
C: Sorry — is super hard. It doesn't look like it actually is, but the number of security vulnerabilities we've seen in that space is astounding: request smuggling, blah blah, yadda yadda. In this proposal, we're signing a normalized version and sending a non-normalized version, which sounds like a recipe for a time-of-check-to-time-of-use problem. So, what am I missing?
D: Yes — for the most part, you're signing the header value exactly as it is sent across the wire; the field value is sent exactly as-is across the wire. There are cases where you can opt into transforming that in specific ways: there's one where it says to use the strict structured-field serialization definition.
D: So if you know you have a structured field, you can say: do that. There's the binary-wrapping version, with the bs flag — oh, and you have to trim whitespace and do the obs-fold thing if you're in HTTP/1.1 — but other than that, it is just the defined value. For most developers, it's going to be: take the thing that my library gives me off the wire and just chuck that in, and that works. So we're not normalizing the values; we're normalizing how they're stacked into the signature base, and we don't send the signature base.
C: Yeah, but any kind of normalization can lead to this problem — if the unnormalized text triggers the vulnerability and the normalized one doesn't. Yep.
D: That is discussed extensively in the security considerations, and because you're not parsing the signature base to get values, it's not as scary as it seems on the surface.
S: So, can we just look at 2133, the signature context issue?
S: Basically, I think the optional is fine, because it is effectively equivalent to just having it mandatory — somebody could put null in, or nil, or whatever you want to use. But I would say that it's better to say mandatory, or output nil, because then at least you can be sure that all the libraries will implement it; if some libraries don't implement it, then it's not going to work — no one will be able to use it. So I prefer mandatory.
D: Okay. None of the libraries that I've seen are that strict about the parameters, and it's an extensible field set to begin with — so, point taken. But what we've tried to do, if you look at the PR, is instead say that if a specific application requires it, then that needs to be enumerated as part of the application of signatures.
R: So I don't know how much we can be expected to, you know, tolerate that. Obviously, we can write a spec that encourages proper implementation, but there's only so far you can go there.
A: Okay — Lucas.
T: Hello, hello. As the author of the server push issue: I don't think we need to face-plant on it; it's just more of an observation. In a prior life, I had a use case for this thing.
T: I'm happy to contribute some text if you think any is needed — I would have done that earlier, but I ran out of time. I think what I'm looking for from the authors here is: if a server generates a push request that doesn't validate the signature, what would the client that receives that thing do with it? But we can work on this offline; that's kind of the one question I have that I can't answer.
U: Am I audible? It sounds like I am — cool, hi. Yeah, so I guess, first, I want to echo what David said about the normalization thing. We've seen, in basically every cryptographic thing that uses signatures, that whenever you have this complex normalization process, there's a lot of room for trouble: if there's any property of the message that the normalization process drops — which, by construction, it kind of does—
U: —and that thing is ever read by any code downstream, that is a place for an attacker to change the interpretation of the message beyond what you intended. Even the obs-fold thing: if the downstream code used an API that was sensitive to where the headers were split — which, you know, they morally shouldn't be, but HTTP is a complicated format — that will result in a security vulnerability.
U
I can kind of understand why you went in this direction, because you want to sign a thing that got exploded into the HTTP serialization, and so maybe for some use cases you're kind of forced into this not-secure mode. But for a lot of other cases, where you can get away with it, it's much safer to have the thing you parse be exactly the thing you sign. So, for instance, maybe take your message, encode it in this binary HTTP format, and then sign just that.
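The "sign exactly what you parse" idea can be sketched as follows. This is a toy illustration, not any draft's actual format: the canonical JSON encoding and the use of HMAC as a stand-in for a real asymmetric signature are both assumptions made here for brevity.

```python
import hashlib
import hmac
import json

def serialize(message: dict) -> bytes:
    # Canonical, byte-exact encoding: the signed bytes ARE the parsed bytes,
    # so there is no separate normalization step for an attacker to abuse.
    return json.dumps(message, sort_keys=True, separators=(",", ":")).encode()

def sign(key: bytes, message: dict) -> tuple:
    wire = serialize(message)
    tag = hmac.new(key, wire, hashlib.sha256).digest()
    return wire, tag

def verify_and_parse(key: bytes, wire: bytes, tag: bytes) -> dict:
    # Verify over the exact received bytes, then parse those same bytes;
    # any unsigned modification of the wire bytes fails verification.
    expected = hmac.new(key, wire, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad signature")
    return json.loads(wire)
```

Because verification and parsing operate on the identical byte string, no property of the message can be dropped between the two steps.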
U
That was what we did early on, when I helped the signed exchange folks design some of their formats; that was one thing that we were trying to push for, that the thing that you sign is the thing that you parse. That way there's no room for other unsigned input to get in, and it's sort of related to where attackers can inject stuff.
U
So I noticed you mentioned that there are a lot of signature parameters that can change how the signature works; it looks like it impacts both the normalization process and just changes what is even signed. Is that just carried in a header that the sender fills in however they like, or.
U
U
R
U
U
A
To interject quickly, since we are over time on this.
U
Oh sorry, yeah, okay.
A
U
R
My main worries in particular: if you could point out where you are seeing significant normalization of component content, you know, component values, that would be appreciated. Because, as a spec author, one of the things we've tried to avoid is any normalization of values beyond what is already inherently part of HTTP, such as collapsing of multiple header fields with the same header field name.
D
A
Thank you, thanks for being excited about contributing to this document, and we look forward to your comments on the list and on GitHub. Alrighty, next up, Julian was going to talk to us about QUERY.
K
Hello, hello, hello. Okay, no slides; there's no progress since last time, sorry for that. Looking at the issues today, I think that's where we are, and what's left to be done depends a bit on what people expect from this specification.
K
If all they expect is something that is similar to POST without side effects, then we can probably say, with minor editorial tuning, we are done. If we are looking for something which introduces proper cacheability of query results, then more work needs to be done. And I believe every time we look at these issues in a working group session, we get into the details, and it's really not simple to define these things properly.
K
So I'll try to summarize this on the mailing list and then invite people to volunteer, and then we can try to find a time slot in the next few weeks to talk about this.
A
H
A
Alrighty, I guess we can move on then, and we got some.
A
O
O
O
To start an upload, you use a zero-length POST request to retrieve a unique URI that contains a server-generated token. Then you use PATCH to start an upload to that target URI. In the case that the upload is interrupted, you can use a HEAD request to retrieve the upload offset, and then use PATCH again to resume from that offset. tus is a great protocol, and we are using it as a starting point to build our new protocol.
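The flow just described (POST to create, PATCH to append, HEAD to learn the offset after an interruption) can be simulated in a few lines. This is a hypothetical in-memory sketch of the logic only, not the draft's wire format; all class and function names are illustrative.

```python
import secrets

class UploadServer:
    """Toy in-memory stand-in for a resumable-upload server (illustrative only)."""

    def __init__(self):
        self.uploads = {}

    def create(self) -> str:
        # POST analogue: mint a unique upload URI containing a server token.
        token = secrets.token_hex(8)
        self.uploads[token] = bytearray()
        return token

    def offset(self, token: str) -> int:
        # HEAD analogue: report how many bytes were durably received.
        return len(self.uploads[token])

    def append(self, token: str, offset: int, data: bytes) -> int:
        # PATCH analogue: only accept data starting at the current offset.
        if offset != len(self.uploads[token]):
            raise ValueError("offset mismatch")
        self.uploads[token].extend(data)
        return len(self.uploads[token])

def resume_upload(server: UploadServer, token: str, payload: bytes) -> None:
    # After an interruption, ask for the offset and send only the remainder.
    off = server.offset(token)
    server.append(token, off, payload[off:])
```

The offset check is what makes resumption safe: a retransmitted or out-of-order chunk is rejected rather than silently corrupting the upload.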
O
O
O
To recap: we are able to achieve the minimum number of round trips by using client-generated tokens. We also define a feature detection mechanism so that a regular upload can be upgraded to a resumable upload, to achieve our goal of resumable uploads everywhere.
O
This protocol can be implemented both on top of HTTP as an application-layer protocol and also within HTTP libraries themselves. This allows existing adopters that already depend on resumable upload to switch to this protocol, and also, eventually, everyone else can get it when the browsers and libraries themselves implement this for advanced users.
O
O
O
We want to discuss the exact feature detection mechanism to upgrade from a regular upload to a resumable upload. We are currently using a 104 status code to upgrade; we've also discussed using an HTTP SETTINGS frame and a DNS record. There are also other options that add an additional round trip, such as an OPTIONS request or a well-known URI.
O
E
A
Not working for you? I don't know what went wrong.
O
Yeah, oh, thanks. So the next question is: what about upload metadata? The current draft does not specify any way to send upload metadata.
O
O
However, from the mailing list discussions, I believe we should be using a kind of standard generic media type for the upload appending procedure. What would that be? I hope we can achieve some consensus.
O
A
Okay, thank you for the presentation. So, if you have comments, if you think this is an interesting problem to solve, you can come up; if you have questions or comments about the details, you can also come up.
N
Okay, all right. Alessandro Ghedini, Cloudflare. I have a question about the client-generated token. It seems like it might potentially be a problem for a server operator, where, you know, the server side needs to be able to guarantee that the token is not reused across different clients, and also the server might want to encode information in the token to avoid maintaining certain state. So it would be better,
N
I think, if there was an option for the server to generate the token, at the very least, or just make the token server-generated.
O
Yes, so the token currently, by default, is client-generated, 256-bit random data. However, if you own the client, you can use your own token; you can negotiate your own data to put in the token, that is.
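A minimal sketch of the default client-generated token described here, 256 bits of fresh randomness. The function name and the base64url encoding are assumptions for illustration; the draft's exact token format may differ.

```python
import secrets

def new_upload_token() -> str:
    # 256 bits (32 bytes) of cryptographic randomness, base64url-encoded
    # so it can be embedded in an upload URI.
    return secrets.token_urlsafe(32)
```

With 256 bits of entropy, collisions between independently generated tokens are negligible, which is what lets the client mint the token without a round trip to the server.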
N
Right, so I guess the problem arises where the server and the client are not, you know, operated or developed by the same entity. My understanding is this proposal is to create a protocol that is interoperable between different implementations.
N
So it seems an interesting problem. I think Lucas is involved in the design.
N
I guess we are interested in doing things that other people are interested in doing; there's not a lot of point in just us deploying some stuff that nobody's gonna use. But it seems like an interesting problem. At the very least, there are potential use cases as well, like internally, where you might want to transfer big files or stuff like that.
N
B
Mark, it's your turn. So, Alessandro's comment kind of touches on something I wanted to talk about at a little bit of a higher layer, which is that there's a trick to play here, where you want to design something and document something that's interoperable and that works out of the box, but you don't want to constrain the use of the protocol so much that you rule out, you know, other valid uses. So, for example, you know, the initial POST to get the token conveyed.
B
Maybe it's a PUT, because the client already knows where the file is going to be, for example. And so we have to be very careful about how we document it, and I'd like to have that discussion if we adopt. But then, maybe kind of putting my chair hat on: when we talk about proposals like this, I think it's super important for the proponents to understand that they are giving change control to the community, that, you know, it will be owned by the IETF, and that they don't have a special say.
B
But, of course, they're extremely welcome to participate in the process. Likewise, for people who are, you know, not the proponents: I was glad to see the words "starting point" used. You know, this is a starting point, it's not the end point; it'll go through the normal process, and so it's important that we all have that in mind.
B
When we talk about adoption, speaking personally, I think that, you know, there's enough interest around this, and enough proven interest in this over time thanks to efforts like tus, that it would be good for us to adopt work in this area, and I think this is a reasonable starting point.
C
Google. So, first off, thanks for this presentation, that was great; it was very clear. I think this is useful, and I do think this is a good starting point. Oh, and I have read the draft, and I think it is a good starting point. I think there will be some tricky questions to answer, like: should we have the tokens generated by the client or the server? Maybe we allow both, but I'd rather we do that in the working group.
C
So we build something that we can operate, that everyone likes. We might have some uses for this, so I'd say I strongly support adoption.
V
Alan Frindell, Meta. Yeah, thank you for your presentation and your draft. You know, we have had our own version of this resumable upload technology running for, I don't remember how many years, but many. And I remember this topic coming up at an HTTP workshop maybe three years ago or so. It's something that keeps coming up, and honestly we probably should have published a standard about how to do it a long time ago. So I'm also happy that we're here; I think we should.
V
A
Yeah, and just looking at the chat, I'm seeing some others from Cloudflare and Microsoft echoing the same thing of, like, we do something like this too, and it's always different. So being interoperable seems like a good thing to do, and it almost certainly will change. But as long as we are happy with it, it sounds like everyone's happy with, you know, letting the working group have some design team
A
that's going to actually figure out the right thing to do. And I think it'd be very useful if those people who have existing solutions in the space would also present those and explain them to everyone else, so we can compare and learn from each other, if we want to go ahead with the work.
H
A
All right, any other comments, thoughts? It sounds like this is something we can take to the list, saying: hey, do we want to work on the problem, start with this, and acknowledge that it will completely change, but that's good. Yeah.
A
A
All right, this is a document we talked about, I think, last time, where we're saying: hey, we have some use cases for geolocation, and we have some updates here. Let me reset the timer, one moment.
A
So Dave and I talked about this based on the feedback we got, and we revised the approach. So, first of all, what is this for? Clients, sometimes, not always, would like to get content that is relevant for their general location, like you want to know where a pizza near you is, or something else. And servers generally know your location based on your client IP address, doing a geolocation database lookup on your IP address. This is assuming that you did not use explicit JavaScript APIs to share your precise location.
A
So that's all happy. So what is the problem? I think mainly the problem that I've been experiencing, and I think others I've talked to are sympathetic to this, is that these databases are often wrong, or out of date, or disagree between each other.
A
There are cases in which different databases have different levels of supported specificity. A lot of them will turn things that are officially country-wide or statewide into specific cities, by choosing one at random or choosing something in the middle of a region, so you can get strange results.
A
This is particularly bad for cases where you have a privacy proxy service or some sort of VPN, when you have an IP address that is not just a very common entry in an ISP database. But it's also a problem for any new IPs, or IPs that change location frequently.
A
The IP maps you have for a lot of cellular networks are often really, really terribly located, and you end up just going to, like, where Comcast is located. Like, I've been placed in Philadelphia while I'm in California, and that makes no sense. And recognizing that the databases are mismatched or out of sync is often just a manual outreach process of users complaining, and then someone going and asking this server to refetch their database from this provider.
A
Last time, we talked about how, essentially, when we were deriving this geohash location, it was potentially safe to do this if that location was really just derived from the IP address that you were showing anyway, because it's not anything that's new information. But that led us to a simpler proposal, in which, if the problem really is that these databases are just wrong, or servers don't have the right copy:
A
the new proposal is to simply share the geo-IP database entry and associated feed with the server, so that the server can know: the client has this address, it thinks it got it from here, it thinks it means this. That is a hint; do with it what you will. And so the format for this is just a structured field string, which is literally the entry that's defined for geofeeds, and it can include a pointer to the feed that owns this particular IP.
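For reference, a geofeed entry (RFC 8805) is a CSV line of the form `prefix,country,region,city,postal_code`. A minimal parser might look like this; the function name and the dict layout are just for illustration, and the header that would carry the entry is not specified here.

```python
import csv
import io

def parse_geofeed_entry(line: str) -> dict:
    # RFC 8805 geofeed entry fields, in order; trailing fields may be empty.
    keys = ["ip_prefix", "country", "region", "city", "postal_code"]
    fields = next(csv.reader(io.StringIO(line)))
    return {k: (v or None) for k, v in zip(keys, fields)}
```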
A
This is, you know, potentially narrowly scoped today to cases where clients have a way to know what IP-to-geo mapping they have. This works very well for a VPN or some sort of privacy proxy, but one could also imagine services such that, when you receive your IP address from your carrier or from your ISP, they could let you know: hey,
A
you know, my carrier uses their carrier feed, and this is where they're putting you right now. And the client only has to share this if they want to, if they think it's useful, if they're opting in. On the server behavior: you receive this hint, and there are decisions about what you can do with it. If you really don't care, you're just trying to show pizza near someone, and you just want it to be convenient for them, you could just show them wherever they told you.
A
You could also easily filter this based on whether you know the feed is a trusted feed, and essentially say, if you had multiple options for how to map this IP address, that you would prefer the one that is actually authoritative for it. You could use it as a signal to re-fetch your feed, if it mismatches what you have and you haven't fetched your feed in a week, or a day, or some reasonable time, so that you're not refreshing constantly. And it's also potentially just a way to flag cases of new feeds coming online.
A
A
So that's the basic thing here. I just wanted to see if there's interest in working on this problem, if we think it belongs here; I'm not clear on if there's a specific other place that this belongs, but happy to hear people's thoughts.
N
Alessandro Ghedini, Cloudflare. There is definitely interest in working on the problem. I haven't really thought too much about the solution specifically, but I think this is probably a good starting point, and, yeah, I think we should work on this. I don't know if, you know, httpbis is the correct place, but I'll leave that to others to decide.
A
Okay, thank you. Matt.
F
Viasat, yeah. I agree that this is an interesting problem to work on. One piece that's of particular interest to me is how to think about this in terms of satellite services, where the relationship between, you know, the IP, if you will, on the terrestrial network doesn't necessarily align with the location. So your ISP knows, but it's hard in the reusable IP space.
F
It's kind of tricky to keep the databases up to date in a timely way, so, yeah, I'd definitely be interested in exploring this more. Thank you.
M
Brad Lassey, Google. I think this is definitely an area that we've run into, and it would be interesting to work on, so I support working on this.
G
I'm gonna disagree with everyone who's gotten up here so far; I don't think this is quite right. One of the reasons that geo-IP databases aren't such a problem for privacy is that they're very bad, and trying to fix that problem puts you exactly back in the same box you had with the geohash. Unfortunately, when you have good geolocation and people start moving around, you have the ability to start tracking them, even with the imprecise information that you have there; it depends on the populations that are in these areas.
A
Okay, just to quickly respond, I think the way I've been thinking about it is:
A
since this is reflecting the information that a well-updated server database would already have, it essentially means that the people who are really, really interested in tracking where you are will already know this information, and it's the people who are just not really paying attention, and have out-of-date things, that are hurting the user.
A
B
So I'm going to interject real quick; we're running low on time for this session, but I think we gained.
E
B
So I'm gonna close the queue soon. Martin, could you just say whether you think that those issues can be addressed, whether we could adopt this and try and work it out, or whether you think we should not adopt anything in this space?
G
I'm not going to say this is impossible, but we spent close to 10 years on this problem in this organization in the past, and came to some conclusions: essentially, it's very, very hard to give away any geolocation information without giving it all away. That is probably, in this case, somewhat less of a case, because you've got this intermediate sort of step where you're going via IP addresses, and that has some inherent problems. And maybe those problems are enough to keep people private.
G
A
C
Yeah, David Schinazi, just jumping in to respond as co-author. So I definitely think that MT understands this space much better than I do, and that this is a hard space. And I would say that if we are unable to figure out a tight privacy box, then we shouldn't publish this, but I think it makes sense to adopt, and for us to work and see if we can solve this. One potential example idea.
C
Q
A
Q
E
Necessarily convinced that this approach is useful; I think applicability may be an important part of it. But it does seem like, I've had trouble coming up with people who would use this, on both the client and server side, who aren't already at a level of maturity to take a totally different approach, or to leverage some of the existing stuff, like getting these feeds and the feed information out of the way.
M
Thank you. Brad Lassey. I think, working on it here, we get the opinions of a bunch of folks who are certainly focused on privacy, and hopefully we can improve things over the status quo, such as potentially changing
M
the accuracy based on population density. But the other bit is, I would suggest not thinking about it as folks who have opted to use a privacy proxy, but as making the ability to use a privacy proxy less painful, such that more folks will be able to adopt it, and it will improve privacy overall.
A
Okay, thank you. All right, in the interest of time, let's move on; that was good feedback. Transport Auth. Did you want to share your slides? Do you want me to? I can do it.
C
Hi everyone, David Schinazi, today an HTTP enthusiast. So I'm going to talk about a document that was first brought to the IETF back at the Bangkok one in 2018, but that's been completely rewritten. And so now I have David Oliver, who's joined me as co-author, and we're working together on this. And please ignore the title of transport authentication; it no longer authenticates the transport, we just haven't come up with a better one. Next slide, please.
C
So what is our motivation? We want the client to authenticate itself to the server. It's like: okay, great, that's HTTP authentication, that already exists. On top of that, we want to use asymmetric cryptography. It's like: yep, that also already exists, HOBA, for example. But we have yet another requirement.
C
C
So why isn't this already done? The fundamental property of asymmetric cryptography, as used for something like this, is that you're using a signature, and you have to sign something. That can be many things, but conceptually it's a unique nonce, because you want freshness: you don't want someone to be able to replay that authentication.
E
C
be signed, and you can go: okay, yep, you've signed this, and ideally you bind it to the right things. The problem with that is, in most of the scenarios that do that, you start off and say: hey, I want to use this authentication. The server says: all right, here's a nonce. Then you say: okay, I've signed it, and then the server says: go ahead. But when the server has sent you this "here's a nonce", it's just leaked that the specific request requires authentication, and so, boom, you lose. Next slide, please!
C
So what do we do? The idea actually came from a conversation with Chris Wood back then, of using a TLS key exporter. Yes, we're not trying to reinvent token binding, I promise, but, yeah, it does have that in common. The insight there is that the TLS handshake contains fresh random data from both endpoints, and a key exporter
C
pretty much creates pseudorandom numbers from both of those random bits. And so, conceptually, we use a key exporter not to create a key, but just to create a nonce that the server had input to, and then you sign that nonce, so that you don't even need to transmit it; the server can derive it as well, and you solve the problem of sending your nonce. So, Jonathan, are you queuing up for later? Do you have a question on this slide?
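The exporter-as-nonce idea can be sketched like this. Real TLS exporters are defined by RFC 5705 and RFC 8446 section 7.5; here the exporter is faked with an HMAC over an assumed shared secret, purely to show that both endpoints can derive the same nonce without it ever appearing on the wire.

```python
import hashlib
import hmac

def exporter(shared_secret: bytes, label: bytes, length: int = 32) -> bytes:
    # Stand-in for a TLS key exporter: a PRF over secret material that both
    # endpoints already hold, so each side can compute the value locally.
    out = hmac.new(shared_secret, b"EXPORTER " + label, hashlib.sha256).digest()
    return out[:length]

# The client derives the nonce and signs it; the server re-derives the same
# nonce from its end of the handshake and verifies the signature. The nonce
# itself is never transmitted, so the server never has to announce that
# authentication is in use for a given request.
```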
S
C
That's a good point; I'm gonna have to think about that. And this is exactly why we're bringing this to this room, because there are a bunch of smart people who can tell me when I mess up the crypto. MT, what else did I get wrong? No?
E
G
C
S
C
All right, cool. Next slide, please. So here's the way the wire encoding looks today. You send the signature, you send the algorithm that you used for the signature, and there's a username in there, because we figured that could be useful for the server to look up in its database of public keys. And, yeah, whatever; like, I'm sure we can completely change that if people care. Next slide, please. So, a note about intermediaries: obviously, this is tied to the underlying TLS handshake.
C
So it works over HTTPS with TLS or QUIC, but it doesn't work through intermediaries, and so we have a section about that. That's something that's kind of commonly done today, where the intermediary, if it's a reverse proxy, can be responsible for checking the authentication, and then, in a way that's trusted between itself and the server behind it, can say: yep, it's this user, and I've checked the authentication. And we have it in such a way that, if the intermediary accidentally just forwards these along instead of working on them, the other one won't be able to validate things.
C
It will break; all is good, unless I messed that up too, but probably that's good. All right, next slide. So we have another independent implementation by the Guardian Project that Dave works on, and so we're talking about having this in various open source projects. We figure it makes sense for us to have a place to discuss this; that would be nice, even just from the conversation today.
C
I'm realizing that I already learned something, and, like, probably saved a few toes in the process. So we'd love to hear: are folks interested in this? Do you also think this is useful? Is this the right place to talk about this or not? What are all your thoughts? And I'll open up the mic.
B
Thank you. Mark. So, speaking just as an individual, I think this is an interesting area of work; I think it's possibly important, even. I would be so much more comfortable if the draft, and the initial discussion in the working group, were positioned in terms of what properties it has, rather than what the solution is. That word, transport, is not helping you right now.
K
C
N
C
But no, absolutely happy to. I just wrote this the day of the draft deadline and didn't have the time to figure out a better name, and I figured everyone would have opinions on that. So, all right, thanks. Alessandro.
N
Alessandro Ghedini, Cloudflare. This sort of reminds me of the HTTP/2 CERTIFICATE frame. Any thoughts on that?
C
So, I haven't implemented the CERTIFICATE frame myself, but this one felt like a much easier lift to get to work.
N
So there's two main differences, I guess. One is that's a frame and this is not a frame, so there are different use cases that having a frame covers, like, you know, POST request authentication or something, if you want that. Then there's that the CERTIFICATE frame uses certificates, like X.509 certificates; there's probably ways to use raw public keys for that as well.
N
A certificate-frame-type solution is more broadly useful, not just for client authentication, but, you know.
C
So my concern with using a frame is that, unfortunately, on the internet today, HTTP/2 servers just explode if you send them a frame that they don't know about, which they're supposed to ignore. I get that here it's not absolutely critical, because you're sending this to a server that you trust, but, like, I don't love that. Also, it prevents being able to use it on H1. And when it comes to certificates, if I can stay as far away from X.509 as I can, I would very much prefer that.
N
C
Awesome, thank you. Alex.
L
Alex Tranowski, Google. So I have two comments here, the first of which is that this absolutely breaks in the presence of intermediaries. Like, if you have a reverse proxy, the exporter will be running on the reverse proxy and not the target, so this is only generally applicable if we can fix that.
L
I didn't get there, but, nonetheless, that already means that, I think, this is a footgun waiting to happen in the current formulation. Which is what gets me to the second part, which is that this authenticates the channel, not the session, which means that we have additional cryptography problems that we need to think about when you go to H2 or H3.
L
As you may know, I designed something recently that used TLS exporters for Google internally, to do a binding between the presented X.509 certificate and a Google-internal certificate, and I think that there is certainly a place where TLS channel exporters are a fit. But I think we need to think about whether we're trying to authenticate the channel or a session here before we can make progress.
C
Sure, I mean, that's something that I'm happy to discuss. Yeah, all right. Well, thanks everyone. Question for the chairs: what would you expect from the authors in terms of next steps?
A
A
Okie dokie, metadata.
E
N
A
W
Example usage for this would be logging or other diagnostics. It could be communicating RTT as seen from an endpoint, or CPU usage, that could then be used for load balancing by the other end, or other internal use cases. So this is not part of the actual HTTP message itself, but it's about it; that's why it's metadata.
W
So we don't anticipate a lot of compression gain from using the dynamic table, and we did restrict the use of HPACK, and would restrict the use of QPACK, to not use the dynamic table at all. We are just using HPACK for convenience, because it's something that we have at hand, and we get maybe a little gain with the Huffman encoding.
W
W
There can be multiple of them; there's no limit on the number of metadata blocks that can be communicated. And also there are no restrictions on the values that each character can take in the key or the value of the list of key-value pairs in the metadata block. Notice I'm talking about metadata blocks and not frames, because in H2 there's a frame size limit, and we of course obey that, so a single metadata block can be fragmented across multiple HTTP/2 METADATA frames.
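Fragmenting one metadata block across several frames, as described above, is just chunking by the peer's maximum frame size. A rough sketch (16384 is HTTP/2's default SETTINGS_MAX_FRAME_SIZE; the function name is invented for illustration):

```python
def fragment_metadata_block(block: bytes, max_frame_payload: int = 16384) -> list:
    # Split a metadata block into frame-sized payloads; the receiver
    # concatenates them in order to recover the original block.
    return [block[i:i + max_frame_payload]
            for i in range(0, len(block), max_frame_payload)]
```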
W
Also, we allow metadata blocks to be sent on stream 0 for HTTP/2, or the control stream for HTTP/3, to convey information that pertains to the entire connection instead of a single request/response exchange.
A
A
An opaque blob that you can just transfer through here is concerning and raises a few alarms. It seems to me that, you know, the mechanism that we have that allows you to do this, we already have frames; essentially, it's just like a frame inside of a frame, to some degree. I completely understand how it makes very good sense within proprietary deployments, where you're just trying to say: let's stuff things in and see what makes sense.
A
What is useful, what I would be interested in hearing, is: of the things that you do inside metadata that have become useful in Google, would there be specific frame types that you would like to define out of those? You gave some examples of what you would want to put in there. Nothing stops us from defining lots more frame types, and so, if they are useful frames, we could define those instead of a generic metadata where you just stuff another registry inside of it.
M
G
I was wondering if you'd get a response. So, Martin Thomson. Thanks for the presentation, Bence; it's always great getting an insight into how the internals of these systems work. But the only thing that I heard from your presentation was that Google has a bunch of proprietary extensions to HTTP/2 and HTTP/3.
G
G
If there are things that you are doing here that you think are useful for someone to use in other settings, I would suggest, like Tommy just said, defining a frame for the exchange of that information, whether it be on stream zero, or, I should say, a control stream, or a request stream. That would be really interesting; so, timestamps and details of CPU utilization, or whatever it is.
G
That would be interesting to have a discussion about, but an entirely generic bucket, I don't think, is especially interesting in terms of standardization.
W
W
The way I think about this is that the purpose of this exercise, of writing up this draft and bringing it here and having this conversation in this working group, is to kind of gauge interest, and see if anyone outside Google can think of: hey,
W
we did this thing, or we meant to do this thing, and using a METADATA frame or a more specific frame type would be helpful. And if no one jumps up and says "we really want this", that's important information for us as a working group, I mean, and a perfectly reasonable outcome. Bence, do you have anything to add?
U
Thanks again for the useful feedback on the mailing list.
A
All right, we have just a minute left on the clock: Lucas, Alan and David. Are you okay?
T
So, hello, yeah. Thanks for bringing this to the mailing list; I appreciate those walls of text I put on there, I think it's a good chat. I am aware of use cases of people trying to do stuff by carrying sidecar data alongside requests; it's really useful, and they struggled trying to puzzle out how to do this, maybe even just using their own CONNECT method and hacking it, and creating, like, a weird chunking format
T
thing, where actually H2 and H3 framing would have been a much, much neater solution, and I think capsules kind of go towards that. I'm not advocating we should use capsules; there's, like, a whole spectrum of what to do here. But on the most generic thing, I'll echo that I just think it opens up the kind of thing with developers who aren't us, where they'll see it and think: oh, I can start sending headers and do whatever I like, and it's going to work somehow.
V
Alan Frindell. I'll just sort of offer maybe a counterpoint to what Martin and Tommy pushed back on, the generic nature, in that HTTP header fields and trailer fields are already generic.
V
That's the API. That's one of the things that makes HTTP really flexible: you can just stick your own headers in there, and you don't need to change implementations to do so. So having a mechanism like this does provide some value, and I think there are cases we've had internally, also; oftentimes communicating sort of like timing data, maybe, or other metadata or stats in the middle, like while we're transmitting a long-running response or something, has come up. I believe I've even told somebody once to use push.
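As a sketch of the point Alan is making, header and trailer fields are already a generic extension point: an endpoint can attach metadata, for example timing stats that are only known after a long-running response finishes, without any protocol change. The example below serializes a minimal HTTP/1.1 chunked response whose stats travel in a trailer field; the field value is invented for illustration (Server-Timing itself is a registered field).

```python
# Sketch, not from the discussion: timing stats delivered as a trailer
# field after a chunked body. The "app;dur=..." value is illustrative.
def chunked_response_with_trailer(body: bytes, timing_ms: int) -> bytes:
    """Serialize a minimal HTTP/1.1 chunked response; the timing stat
    arrives in a trailer field, i.e. only after the body is produced."""
    head = (
        b"HTTP/1.1 200 OK\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"Trailer: Server-Timing\r\n"
        b"\r\n"
    )
    chunk = b"%x\r\n%s\r\n" % (len(body), body)
    # The zero-length chunk terminates the body; trailer fields follow it.
    trailer = b"0\r\nServer-Timing: app;dur=%d\r\n\r\n" % timing_ms
    return head + chunk + trailer


msg = chunked_response_with_trailer(b"hello", 42)
```

The limitation being debated in the room is that trailers come only once, at the end of the message, whereas a frame-based mechanism could interleave metadata mid-response.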
C
It's Schinazi; I'll be very quick. No, this is HTTP. Sorry, I'm the one who encouraged Bence Béky to come and talk to the IETF, because they were working on this, and we're going to use it for very real things at Google. We're going to be putting it in, like, probably, I can't commit to future plans, we're going to probably be putting it into the Envoy HTTP proxy, which is very widely used outside of Google. And so my thought was: let's bring it to the IETF.
C
Let's see. If people tell us no one else has a use for this, then, you know, we just ask for the code point and we're done. But if other people think this is useful or can improve it, we're very willing to take feedback and to work with the IETF. So we thought it would be interesting, and that's why we're super willing to hear any feedback and see what other use cases people have.
B
So I inserted myself briefly, using my awesome privileges as chair, just to respond to something that Alan said, which is, you know, if we do want to add something like this, I think we need to look at it as a change in the developer-visible signature of HTTP. You know, right now we've got these,
B
you know, request/response messages, headers, body. We could add a new construct to all of that, and we've tried that before with things like server push. But if we do do that, it needs to be something that's really deliberate and, I think, really well thought out, because we've also, frankly, failed a lot when we've tried to change the wire signature of HTTP, or the developer signature of HTTP, with even things like pipelining.
A
B
It sounds like we need more discussion on it, and maybe use cases.
B
E
A
All right, thank you all. Thank you, remote people, and Mark for joining in, and have a good rest of your...