From YouTube: IETF111-WEBTRANS-20210730-1900
Description
WEBTRANS meeting session at IETF111
2021/07/30 1900
https://datatracker.ietf.org/meeting/111/proceedings/
B
You're in here, so you've probably done the datatracker login; the blue sheets are automatic. You can join the session jabber room if you want to. Please use headphones or an echo-cancelling speakerphone, and please state your full name before speaking, so we can accurately record it in the minutes. You enter the queue by pressing on the hand with the slash, and you leave it by pressing on the full hand. You need to enable your audio to be heard.
B
People are still learning that: you unmute by pressing on the slashed microphone, and you disable it by clicking on the mic. It's your choice whether to enable video; it's optional and handled separately.
A
Quick reminder: I'd just like to highlight two important aspects of the note well. Most of you have been here all week and have already seen it, but in case you haven't: one is the IETF's IPR policy. If you say anything here, it obliges you to disclose patents you're personally aware of; read the full details if you're not aware of this. The other is our code of conduct; we have been lucky to have a very well-running group of respectful people at WebTransport.
B
Thank you. We are looking for a notetaker for the meeting. Do we have volunteers? We'd like you to take notes in the CodiMD which we've provided. Anybody willing to volunteer to do it?
B
Oh, Mike, okay; thanks, Mike. And a jabber scribe: anybody willing to volunteer for that?
A
It's called jabber, but it's hooked into MeetEcho; there's a chat there. So if you could just monitor it, and if anyone asks for something to be relayed at the microphone, you could come and say it. You don't need to install any jabber client; just keeping an eye on the MeetEcho chat for folks asking to be relayed would be super helpful.
B
Thank you so much. Okay, so here's the agenda: we have a few preliminaries, then Will will give us the W3C update, Victor will talk about WebTransport over HTTP/3, we'll have Eric talking about WebTransport over H2, and then we'll wrap up and summarize. Any additions to the agenda, or mods?
E
Okay, good afternoon. Will Law, Akamai, co-chair of the W3C group along with Jan-Ivar from Mozilla. So, just some updates from the last time we presented this, on May 20th: a couple of key decisions and PRs. We have defined a milestone for minimum viable ship. We do have vendors indicating they're hoping to ship towards the end of the year, and while this doesn't specifically drive our timetable, we feel that a minimum viable collection of issues is good to go. We defined a secure context for all the API surface.
E
You'll only be able to access WebTransport under HTTPS. Definition of backpressure control attributes; defining a cleanup procedure around eventing; removing the .state and onstatechange attributes; also custom methods and promises for stopSending and resetStream; and, very recently, over the last week, progress on a new WebTransportError. The details of all of these are available on our GitHub page, if anyone has further questions. Next slide, please.
E
There's a demo server that runs a WebTransport echo server, and I believe this is going to be the basis for the web platform tests that will be developed for the W3C. There's a small sample over to the right to show how it's accessed and how you get your data back.
E
We need to start promoting the accessibility and availability of these, because it's only when you have a server and client combination that a lot of the real testing can begin, and also some of the use cases can be more fully exploited. Chrome has signaled a Q4 release of WebTransport over H3; there's a link there to Chrome Status updates, and also, on the Blink side, an intent-to-experiment has a reasonably long thread. Mozilla are working on an implementation in Firefox, but have not provided dates to date.
E
There is a current flurry of video work, specifically ingest, which just came out of that meeting, but also other next-gen video protocols built directly on top of QUIC. I list three here that are available.
E
We have a slide coming up on that, but they would inherit benefits such as automatic H2 fallback, which might be complex when you have to roll your own; you're getting the same bi- and uni-directional streams and datagram support that you would get from raw QUIC, with minimal code changes to projects; and you're going to inherit a very large environment of browser implementations, which is really useful in terms of reach and also testing and conformance.
E
I noticed none of those prior projects mentioned WebTransport in their discussions today, so there's some mutual work to do here to encourage them to move over to the WebTransport camp. Next slide, please.
B
Yeah, I just copied this over from the presentation a few minutes ago, and also from AVTCORE. And I guess the question I'd have, and I don't think now's the time to follow up: is there anything we need to do in WebTransport, in the API for example, to surface some of the stats that are being used in RTP over QUIC, or potentially in the ACK area, which is currently not there? There's no real info in the API.
B
So that's more of a question for this and other implementations: is there something missing in the API, and if so, can people speak up and, you know, maybe potentially address it? But I think this was a case where things were very tightly integrated, and the info isn't being surfaced in the API currently.
E
Right. And just two more slides here from the W3C; two requests to the IETF group. Firstly, to help clarify the prioritization of datagrams versus streams, and also streams versus streams: is this going to be the responsibility of the user agent, or will there be core protocol assistance here? We have two issues open on this for more background, and it would certainly help drive this forward; the initial implementations are going to have to address this question.
E
That's the first one. Next slide, please. On the second one: just assistance on defining the available error code spaces that the group feels need to be covered, as well as reserved error codes. Again, we have an issue open on this that defines the exact questions, but we'd like some clarity from the IETF group, and I think Victor, Bernard, and Yutaka can all provide details on that. Those would be our two primary blockers at the moment. With that, Bernard, back to you. Thank you.
B
Okay, so we'd like to give the microphone to Victor to present WebTransport over HTTP/3.
A
So I would say, Bernard, if you stop presenting the slides, then Victor will request them, and that way he can control the slides directly.
F
Okay. Fortunately, I tagged all the issues that we need to discuss beforehand, so we can move straight into those. So, issue number 10: I just wanted to make sure that we can close it. It's the one where we...
F
Oh, okay, sure thing.
F
Okay, so since the last IETF we actually released a new draft, draft-01. It has fixed three issues. One, we've decided that https is the correct scheme for our protocol. Two, we have agreed that we can use type-value frames, without a length, for the WEBTRANSPORT_STREAM frame, which is used to signal that a bidirectional stream is a WebTransport stream. And three, we agreed to lift all of the requirements on waiting for the handshake to finish before sending streams, which means you can now send stream data for a WebTransport session...
F
...before you establish the connection; I'm sorry, before you establish the CONNECT request. And similarly, the server can send without waiting to hear back from the client.
F
That way you don't have to wait for it, and you can start your connection faster; but it also means you now have to buffer your data streams, to associate them with requests later, and there is text about how to do that and how to time out streams. We've actually implemented all of that in Chrome, so draft-01 is the one currently available in Chrome. And at the interim we discussed some issues, and there are PRs that everyone is welcome to look at.
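To make the buffering idea above concrete: streams that arrive before the CONNECT request carrying their session ID are held, and reset if that request never shows up. The following is a rough sketch only; the class, method names, and timeout policy are illustrative, not taken from the draft.

```python
import time

class EarlyStreamBuffer:
    """Holds WebTransport streams that arrive before their CONNECT request."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.pending = {}   # session_id -> list of (arrival_time, stream)
        self.sessions = {}  # session_id -> established session object

    def on_connect(self, session_id, session):
        # CONNECT arrived: adopt any streams buffered for this session.
        self.sessions[session_id] = session
        for _, stream in self.pending.pop(session_id, []):
            session.add_stream(stream)

    def on_stream(self, session_id, stream, now=None):
        now = time.monotonic() if now is None else now
        if session_id in self.sessions:
            self.sessions[session_id].add_stream(stream)
        else:
            # No CONNECT yet: buffer the stream and associate it later.
            self.pending.setdefault(session_id, []).append((now, stream))

    def expire(self, now=None):
        # Reset streams whose CONNECT never arrived within the timeout.
        now = time.monotonic() if now is None else now
        for sid in list(self.pending):
            fresh = [(t, s) for t, s in self.pending[sid] if now - t < self.timeout]
            for t, s in self.pending[sid]:
                if now - t >= self.timeout:
                    s.reset()
            if fresh:
                self.pending[sid] = fresh
            else:
                del self.pending[sid]
```

As Victor notes right after this, an implementation is also free to skip the buffering entirely and simply reset such early streams.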
F
People are welcome to not do that buffering; I'm answering Martin in jabber here. People are welcome to not do that buffering; it would just result in them resetting those streams.
F
The first issue is issue number 10, and that's the one where we asked whether we want to have consistent stream IDs between the client and the server. I'm saying that at some point we decided that the answer is decidedly no, but I could not find that discussion. So I wanted to ask if people are okay with me just closing it, because we decided no in the past.
B
Okay. For the meeting minutes, please indicate that the issue has been closed.
A
Victor, just mention on the issue that that's what we plan on doing. We're going to give everyone a week, in case they weren't able to attend, and then a week from now we'll close it.
F
This is IETF 111, correct? Okay, good. Let's go to the next issue. So, let's start with the easier one, RESET_STREAM. For reset streams, we had the problem that it is not always possible to associate them with a session, and the solution is: I wrote a PR that solves both this problem and the reset stream error code problem. And the basic idea for that PR is... where is the PR?
F
Here is the PR. So the proposal is basically that we are going to allocate 256 RESET_STREAM error codes, where 256 is a number that's small enough that we can fit it into both the H3 and the H2 error spaces. And the idea is...
F
There is an allocation for those error codes, which is somewhat complicated because we have to skip the reserved code points, but there is some nice pseudocode that shows how to map them.
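As a sketch of what such a mapping can look like: the small application error code is embedded into a contiguous block of the HTTP/3 error code space, skipping one reserved (GREASE) code point after every 0x1e usable values. The FIRST constant below is a placeholder, not the value chosen in the PR, and the sketch assumes the block is aligned so that the skipped slots land exactly on reserved points.

```python
# Map a small WebTransport application error code into a block of the
# HTTP/3 error code space, skipping reserved (GREASE) points, which in
# HTTP/3 recur every 0x1f values.

FIRST = 0x52e4a40fa8db  # placeholder base of the allocated block


def to_h3_code(app_code: int) -> int:
    assert 0 <= app_code < 256
    # Each run of 0x1e usable values is followed by one reserved value,
    # so shift by one extra slot per completed run.
    return FIRST + app_code + app_code // 0x1e


def from_h3_code(h3_code: int) -> int:
    shifted = h3_code - FIRST
    # Values hitting a reserved slot never map back to an app code.
    assert shifted % 0x1f != 0x1e, "reserved code point, not a mapped value"
    return shifted - shifted // 0x1f
```

The round trip is exact for all 256 codes, and no mapped value ever lands on one of the skipped slots.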
F
But the idea is that, by default, the WebTransport application error codes are best-effort for resetting streams, because if you have pooling, it is possible the stream gets reset before you know which session it belongs to. The only case in which we can actually provide reliable reset stream codes, and I think we should, is when you have no intermediaries and there is only one session; then you know it's a WebTransport reset stream error code, so you can just deliver it to the application. That's the proposal.
F
Okay. Do people... Martin?
F
All right, and I would like the minutes to reflect that as the opinion, because there are actually two issues I would have to close. But I'd like to move on to the next issue for now, which is negotiating pooling support. We had a long discussion about this in the past, and basically the only thing people seem to agree on is that we could do something like a MAX_WEBTRANSPORT_SESSIONS option on the issue.
F
So I would like to hear whether people think that's an adequate solution, or whether we should wait, or go with some alternative; because from the discussion we had on the issue, it sounds like MAX_WEBTRANSPORT_SESSIONS is basically the right answer.
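For concreteness, a limit like MAX_WEBTRANSPORT_SESSIONS would presumably travel as an HTTP/3 SETTINGS parameter: a pair of variable-length integers (identifier, then value) in the SETTINGS frame payload. The sketch below uses a placeholder identifier value; the real codepoint, if this is adopted, would be assigned in the draft.

```python
def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer encoding (RFC 9000, section 16):
    # the two high bits of the first byte give the total length.
    if v < 0x40:
        return bytes([v])
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | 0xC000000000000000).to_bytes(8, "big")


SETTINGS_MAX_WEBTRANSPORT_SESSIONS = 0x2b60  # placeholder identifier


def encode_setting(identifier: int, value: int) -> bytes:
    # One (identifier, value) entry of an HTTP/3 SETTINGS frame payload.
    return encode_varint(identifier) + encode_varint(value)


# A server advertising "at most one WebTransport session per connection":
payload = encode_setting(SETTINGS_MAX_WEBTRANSPORT_SESSIONS, 1)
```

A server like the one Luke describes below could simply advertise a value of 1 and reject further sessions.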
H
Yeah, I'm a little... I mean, I think I maybe even suggested that as an option, but it seems like the most complicated option, and I guess I'm comparing it to MASQUE, where CONNECT-UDP opens a similar kind of session but isn't throttled in any way, right? So, and maybe I'm comparing apples to oranges, or maybe not, but I mean: how critical is it that we limit this?
F
Thank you, Luke.
D
I'm one of those people that doesn't want to have multiple WebTransport sessions. The reason why is that we're using connection-level stats to decide how much data to send over each session. So right now, if somebody tries to make a new WebTransport session, if you try to make a second one on the connection, we just say no. And it'd be great...
D
...if there's some way to just tell the browser: dial a new QUIC connection, don't even bother doing this, no extra round trip asking if it's allowed to, or...
F
...or not. All right, it sounds from what I hear like people basically agree that MAX_WEBTRANSPORT_SESSIONS is adequate. So how about I just write a pull request for that, and we decide that that particular issue is solved? Does anyone object to that course of action? Alan?
H
I'm not sure I object, but I want to just clarify, because the issue, I was just rereading it quickly, talks about more than just a parameter MAX_WEBTRANSPORT_SESSIONS, which is the maximum number of concurrent allowed sessions; versus something that was more like, I'm going to use a dirty word here, HTTP/3 push, with a push ID, where you would say something like a max WT session ID and then have to manage this more explicitly. So I just want to clarify what the proposal is.
F
Yes, you're muted. Okay, not muted.
D
Yeah, for us it's simultaneous: we're using the congestion control statistics of the QUIC connection. So if nothing else is using it, then we're fine having a new one later.
H
I'm having a hard time formulating the exact idea in my head, but: are we saying the client is the one that would always know how many sessions are open? But what happens if the client has closed a session and then opens a new one, but that closing and opening is reordered, and so the server says, "no, you can't do that, you already have a session open"?
F
All right, I think we can move on. It sounds like we should figure out more details on this issue before we can close it, but let's move on to the next one; the next ones that I actually have tagged.
F
The second one is more interesting. It is: when you have a stream that is not a WebTransport stream, can you reset it if someone tries to open a data stream for it? The reason it's interesting is because there could be various race conditions. So I want to hear what people say about, one, closing for all invalid such streams, all session IDs that are just a priori invalid; and two, should we do anything more in terms of validation?
H
Yeah, I think for one, I think it's fine; it's obvious. If you're trying to create a session on a stream that is not client-initiated bidirectional, that should probably be such an error; a session ID error seems fine, or, sorry, a connection error, maybe that's better than a session error. For two, I agree there are race conditions where you may not know, but you can't maybe do anything with the session ID; but for the case you're worried about, when the request arrives...
H
...if it's a GET instead of an extended CONNECT, and then later either a stream or a datagram tries to refer to that, you can tell that this is an open stream but it's not a WebTransport session. So that's probably also a fatal error.
H
If they were reordered and the stream arrived first, you kind of have to buffer it anyway; but maybe you're thinking, how do you distinguish? That's more about the buffering question anyway. I think I'm trying to say that we can do a little bit more than just one, but maybe I don't have a clear idea of exactly how much more.
F
Yeah, I think we can do two, but one thing that worries me about two is that the conditions under which you can do it are very specific, and if you get one of the edge cases wrong, you will reset the connection for valid reasons. I don't want to require this from everyone, because this is just so easy to get wrong. Martin?
G
So if you think about number one, I think it's totally reasonable to require it. All I heard from number two is that maybe we can't require people to generate a connection-level error in this case, and we simply just have to allow them to create one. There are a lot of cases where this will be totally recognizable, where you've already had the GET or something like that, and then that's fine.
F
So: MUST close for one, and MAY close in the other case. Does that sound right?
F
Sure. Let's move on to the next issue, which is...
F
...one we discussed last time; I actually wrote a pull request. I wanted to confirm that people are still okay with this; we discussed it last time, but I want to make sure that people know and look at the pull request. Basically, in order to negotiate versions via SETTINGS, the server has to wait until it receives SETTINGS from the client before processing any WebTransport requests.
F
So that's not something I feel we need to have another discussion on; you're welcome to comment on the PR and the issue. The next one is one we already discussed, in tandem with the other reset stream issues: that we allocate 256 RESET_STREAM error codes. You're welcome to take a look at the pull request and comment on it, and comment on the issue...
F
...if you think our approach should be different. So I'll move on to the next issue, which is actually really interesting: error codes for closing the session.
F
The rough idea is that in QUIC we have something like CONNECTION_CLOSE, which gives you an error code and an error string, and it would be nice to have that in WebTransport over HTTP/3 as well.
F
That is probably a separate capsule, because a normal connection close just implies that everything else goes away, and that is fine for the error case, the case when you have a connection error. For go-away, there is a separate issue, which I think we might also get to discuss today if we have enough time.
F
Yeah, if you have specific comments, like the fact that I accidentally specified the number of bytes instead of bits, you're welcome to comment on the pull request. Lucas?
F
Can you hear me? ... We can hear you.
I
Good, yeah. I took a brief look at this PR; then I saw it was using capsules, and I got depressed from the other day with the capsule stuff. I think the content of the capsule is the important thing here, and I saw David made some comments on the PR about bits and so on, but the one that I would have also made is about the application error message. I was trying to understand: is this close-session more similar to the QUIC CONNECTION_CLOSE frame, or is it more similar to the stream reset? And I wanted to understand the motivation for that error message: who's asking for it, who's going to use it, and what for?
F
The idea was to provide facilities similar to CONNECTION_CLOSE in QUIC, and the reason is that sometimes your application protocol on top of WebTransport would want to signal an error, but it would also want to close the connection; this provides a way to signal it, and then let the WebTransport stack deal with actually terminating the session.
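As a rough illustration of what such a close-session capsule could look like on the wire: a QUIC-style varint capsule type and length, followed by a 32-bit application error code and a UTF-8 reason string. The capsule type value and the exact field layout below are placeholders, not what the PR specifies.

```python
import struct

def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer encoding (RFC 9000, section 16).
    if v < 0x40:
        return struct.pack(">B", v)
    if v < 0x4000:
        return struct.pack(">H", v | 0x4000)
    if v < 0x40000000:
        return struct.pack(">I", v | 0x80000000)
    return struct.pack(">Q", v | 0xC000000000000000)

def decode_varint(buf: bytes, off: int = 0):
    first = buf[off]
    length = 1 << (first >> 6)      # two high bits give the length
    v = first & 0x3F
    for i in range(1, length):
        v = (v << 8) | buf[off + i]
    return v, off + length

CLOSE_SESSION_TYPE = 0x2843  # placeholder capsule type

def encode_close_session(error_code: int, reason: str) -> bytes:
    # Capsule = varint type, varint length, then the payload:
    # a 32-bit application error code and a UTF-8 reason string.
    payload = struct.pack(">I", error_code) + reason.encode("utf-8")
    return encode_varint(CLOSE_SESSION_TYPE) + encode_varint(len(payload)) + payload

def decode_close_session(buf: bytes):
    ctype, off = decode_varint(buf)
    assert ctype == CLOSE_SESSION_TYPE
    length, off = decode_varint(buf, off)
    payload = buf[off:off + length]
    (error_code,) = struct.unpack(">I", payload[:4])
    return error_code, payload[4:].decode("utf-8")
```

The application hands the error code and reason to the WebTransport stack, which sends the capsule and then terminates the session itself, matching the division of labor Victor describes.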
A
Sorry, waiting for the microphone to start. I just wanted to quickly clarify one thing that Lucas was saying about capsules: in MASQUE we're in the process of changing the encoding of capsules, but I haven't heard anything along the lines of us removing them. So the fact that this PR uses capsules is perfectly fine, at least from that perspective, and we're going to work on exactly how the encoding for them ends up.
F
Thank you for the clarification. People are welcome to read this PR more closely. I'm saying that unless anyone has objections to the general idea of having a capsule for that, I would like to move on to the next issue. So: does anyone want to speak against the general idea of having CLOSE_WEBTRANSPORT_SESSION?
F
This is another issue we discussed at the previous meeting, and I wrote a PR; I just wanted to point out that this PR exists and everyone should take a look. WebTransport currently requires status code 200 for creating a connection, and the general HTTP/3 internet draft requires you to accept any of the 2xx status codes, so 200 to 299.
F
Oh, Martin; the answer to your question on jabber is that the current requirement is to send the FIN after the capsule, and everything else is an error. So there is some disagreement in the discussion. All right, I've pointed out that this exists. The next one is an issue that should have a PR, but it doesn't.
F
I wanted to reconfirm that we're still okay with it, because otherwise I will go and write this PR and list it with the other issues. It's that we want to make sure that unidirectional WebTransport streams are framed the same way as bidirectional ones. Which is to say: currently we have this, and the idea is to make it a stream of frames that would use the existing frame for bidirectional streams, as that would bring it into alignment with everything else in HTTP/3.
F
We discussed this at the previous meeting. I just want to point out that there will be a PR, and unless anyone objects, that PR will go into the next version of the draft. So if you object, please comment on this issue.
F
What makes this specifically tricky is that in some cases WebTransport may be requested as non-pooled, dedicated connections. In general, if your connections are pooled, you don't care as much and you can allow a few streams; but if they aren't, that means you can spam handshakes and create too many connections from one browser. And the question I would like to ask is even less about what text we should write, because I think the text here is fairly obvious.
F
Yes, Martin, that's... that's effectively what I'm asking: who should limit connections, and...
F
"Everyone" is a possible answer, but the question here is: every client, presumably? It is less relevant on servers, because servers don't create outgoing connections. The question would be: if we decide that every client should limit outgoing connections...
F
...we could write that in the draft. But if we decide that this is not a problem with most clients, and only a problem with clients that run inside browsers, which means they're untrusted, that's a different issue. So that's primarily the question I'm interested in having answered here.
F
It doesn't look like people have any opinions; or, as Martin says, it seems like this is for further study, possibly. It sounds like no one wants to say anything at the mic, so I will write those proposals, pick a default one, and if no one objects I will go with that. But it doesn't sound like anyone wants to say anything... oh, Martin, yeah.
G
...work out how you would even think about this problem. But it seems to me like saying nothing would be what I would start with here, and servers that do have this problem will need to manage it. Browsers will probably have to manage this as well: you probably don't want to dedicate infinite resources to a single site, or allow sites to create multiple frames and thousands of connections. But these are things that each can independently work through. At most...
G
At this point, I think, yeah, David said "may throttle"; I think just some security-considerations text, or something along those lines, that basically says connections aren't free, consider limiting them either at the server or the client; that would be fine.
G
That's probably true, but at the same time it's not something that we can necessarily provide in APIs or anything like that. We just have to deny access to these things arbitrarily, probably, from the perspective of the people using the API.
F
I agree with that, so it sounds like this needs further discussion on the issue. So I'll move on to the next issue, which is definitely the most fun issue, and it's titled "What should we do with datagram context IDs?" For some context: the HTTP/3 datagram draft provides two ways to use datagrams associated with a specific stream. One is that you send a capsule called REGISTER_DATAGRAM_NO_CONTEXT, and you have no context whatsoever.
F
You still have to send that capsule so the peer knows that that's how you're using it, but otherwise you don't have any context; you don't care about contexts after that. And the second one is...
A
David; okay, sorry, always waiting for the microphone thing to get over this MeetEcho bug. So, just to clarify on the wording, speaking not as WebTransport chair but as the author of the HTTP/3 datagram document...
A
The mindset in that document isn't that there's, you know, context or no context. It's that there's always a context, but when you register "no context," you still kind of negotiate the potential format for that context; it's just that you have one for the entire connection. You no longer have a context ID that allows you to multiplex multiple contexts. And one of the important distinctions there is that saying "I register no context" and just using that is, from the higher-level perspective...
A
...the exact same thing as saying "I register context 2, for example, with that format" and only using that one; they're completely semantically the same. It's just that in the on-the-wire encoding you end up with an extra varint that gets sent around. And so, at least in the mindset of the current HTTP datagrams document, the way I would see this playing out is for WebTransport to say: well, we currently only use one context.
G
So David's suggestion here is reasonable if you think that extensions of that shape are possible, and this is probably something that's hard to know at this point. I mean, I can't imagine any way in which we could use the API that we have to build multiple contexts, and so that makes me a little bit cautious about using this mechanism.
G
I would probably lean towards saying that no use of the context mechanism is made by this, and then leave it at that. You don't actually have to send either of these frames. To you, David.
A
The way the HTTP datagrams draft is currently written, the client has to send at least one of these register-context frames before it starts using datagrams, because that's how it communicates to the server whether or not there is a context ID inside the datagram frame.
A
There are some uses there; I mean, yeah, we could definitely take that to MASQUE, but one example that is specific to WebTransport could be if we want to do priorities at some point down the line, where you would say: the usual context being defined in this spec is the regular default priority, but later down the road you could say, okay, I'm registering context 14, and that one is for background datagrams that should be prioritized lower.
A
Can you clarify: when you say API, do you mean the W3C API, or the API between HTTP datagrams and WebTransport?
A
Absolutely. And so saying we're only going to use a single context there is the right way to go, because you're right, it's not going to show up in the W3C API. The only real thing that I kind of objected to was option one that Victor mentioned here, which was to say that WebTransport servers could implement only half, or a partial implementation, of the HTTP datagrams document. That was what was making me concerned, because it could prevent us from doing extensions later.
F
I mean, this is the main issue I wanted to discuss; that's probably the trickiest one. Eric Kinnear?
J
Yeah, I was just curious, actually, Martin Thomson, kind of about some of the reasoning for why the flow ID isn't useful. Are you saying, effectively, that it's not that priorities or something else isn't an interesting use case, but that any of those use cases would have to come along with some signaling at the application layer? Which means that at that point you might as well just accept the fact that we're going to have this kind of duplicative...
J
...you know, "this particular word, or this particular section of the datagram, is the ID for whatever." Does tying that to the application, and to the thing that it actually uses, mean that it's not worth having something at a lower layer that is common? Or is it on a totally different tack?
G
So there will be applications that need contexts and there are applications that won't, and this is one of those applications that I think doesn't, and I'm sort of wondering why there's even a generic mechanism. I think there's probably some value in having some signaling that says "yes, I'm using datagrams," but that's the signaling we want to use; not "here's this generic mechanism" that will sort of get in the way in a lot of cases. And it goes back to the datagram debate about having context IDs at the datagram layer. ... Gotcha.
A
So, speaking as the WebTransport chair, I'm going to say that this is a part of the HTTP datagrams document. So let's take this to MASQUE; let's discuss it on the HTTP datagrams draft. And now, speaking kind of personally, I would say that WebTransport should not allow half-implementing the HTTP datagrams document.
A
So if you think that there is a part of it that shouldn't be in it, let's take that discussion to MASQUE and see how we want HTTP datagrams to look; and then WebTransport will use that and implement all the mandatory-to-implement parts of HTTP datagrams. That can still be changed; it's still an active discussion in MASQUE. But maybe we put a pin in this for today's session, given the time, and we'll discuss it in MASQUE with all the MASQUE enthusiasts.
F
I will say that I personally actually agree with that, and option one was more of an attempt to do that given what H3 datagram says; the best way to rephrase option one would be just "we don't need contexts." That is a valid opinion, and I'm personally more inclined to agree that WebTransport doesn't need contexts in general, but that is in fact better suited for the MASQUE working group.
F
Now, I think we discussed almost all of the issues except the one with go-away, which I left out; it's complicated. How much time do we have left? I assume we don't have any for this presentation.
A
Yes, we are out of time.
F
Okay, a couple of finishing remarks before I go, which are basically, roughly: please go to the issues, please read all of the pull requests. I will ask the chairs if we can have a consensus call for all of those, but there are PRs, and I strongly encourage people to look at them. That's it for me.
J
...that depends on what WebTransport is. In order to do this, we built WebTransport over H2 kind of very similarly to H3, because we want to have many of the same capabilities; and the point was raised that there are some interesting layering questions that that brings up which aren't necessarily present in the way we do H3. So I tried to draw a picture. It may or may not be helpful, but I'm trying to find a good way to visualize what this difference is. And see...
J
We have bidirectional streams that you can open, and there are new HTTP/3 frames that are created and potentially last until the end of the stream, for, you know, a WebTransport stream, which I've abbreviated to WTStream here; and you can also send HTTP/3 datagrams, which is what we were just talking about. Those end up getting mapped onto QUIC datagrams, QUIC bidirectional streams, and QUIC unidirectional streams. Underneath that I've put separate UDP packets for each one, because part of the reason that we do this, part of the reason that we don't just send all of the different WebTransport streams over a single QUIC stream, is that we want that independence.
J
And then, you know, we can do a couple of fun options and things on that to make uni streams work and all that sort of thing; basically, "hey, start this half-closed" and everything's happy, now you have a unidirectional stream. But conceptually, what we've done here is, to use the language that Martin did in his email, we've effectively integrated this into H2. We've added new frames to H2, which means that if you want to support this, you need to change your HTTP/2 implementation.
J
Looking
at
this,
what
we
actually
ended
up
doing
was
defining
four
new
h2
frames,
and
so
we
did
that
with
wt
datagram
stream.
Those
are
kind
of
the
main
ones
and
then
there's
a
reset
stream
and
a
stop
sending
and
that
kind
of
goes
along
with
the
way
that
we
want
to
be
able
to.
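As a concrete illustration of what "adding new frames to h2" means on the wire: any HTTP/2 extension frame, whatever its type, reuses the fixed 9-octet frame header from RFC 7540 §4.1. The sketch below is illustrative only; the WT_STREAM type code here is a made-up placeholder, not the value the draft registers.

```python
import struct

# RFC 7540 section 4.1: every HTTP/2 frame starts with a fixed 9-octet header:
# 24-bit payload length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id.
WT_STREAM_TYPE = 0xF0  # hypothetical extension frame type, not a registered value

def encode_h2_frame(frame_type: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    if len(payload) > 2**24 - 1:
        raise ValueError("payload exceeds the 24-bit length field")
    header = struct.pack(">I", len(payload))[1:]        # 24-bit length
    header += struct.pack(">BB", frame_type, flags)     # type, flags
    header += struct.pack(">I", stream_id & 0x7FFFFFFF) # reserved bit zeroed
    return header + payload

def decode_h2_frame_header(data: bytes):
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

frame = encode_h2_frame(WT_STREAM_TYPE, 0, 3, b"hello")
print(decode_h2_frame_header(frame))  # (5, 240, 0, 3)
```

An unmodified HTTP/2 stack can parse this header but will discard the unknown frame type, which is exactly why supporting the new frames requires changing the implementation.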
J
Right now, when we talk about that being integrated, we're saying we're adding new frames to h2, and you have to update your h2 implementation to be able to handle those new frame types. Which is good: we have an extension mechanism for HTTP/2, and this would be a great way to use that extension mechanism.
J
You
know
really
exercise
that,
and
that
would
be
great,
but
one
of
the
alternatives
that
was
brought
up
is:
should
web
transport,
when
it's
running
over
h2,
be
layered
on
top
of
h2
so
effectively.
What
that
would
mean
is
that
I
can
take
any
off
the
shelf,
http
2
stack
or
even
potentially
http,
one
which
we'll
talk
about
in
a
little
bit
to
implement
web
transport,
and
that
effectively
means
do
we,
you
know
send,
do
we
send
our
extended
connect.
J
Do
we
say
hey
I'm
connecting
for
web
transport
that
gets
everybody's
happy
now
I've
got
a
http
2
stream.
Do
I
then
mux
and
demux
all
of
my
different
web
transport
streams
onto
that
single
http,
2
stream,
and
from
there
you
can
kind
of
break
things
out
into
a
kind
of
native
mapping.
If
you
wanted
to
have
separate
h2
streams
per
web
transport
stream,
you
could
do
that,
but
not
if
they're,
server,
initiated
or
you'd
have
to
have
the
server
signal
about
that
or
not
if
they're,
unidirectional
and
datagrams
are
fun
and
exciting.
J
J
I think the actual comparison ends up looking kind of like this. For WebTransport datagrams, if you're effectively using QUIC with all of the semantics that already work underneath h3, you end up bringing over a datagram frame, and you bring over stream frames; you just use stream frames on top of that h2 stream that you already have. We need to bring over reset stream and stop sending. That's great; it can be very clean, and you've got all the same semantics that you were expecting from WebTransport.
J
Over h3 it generally works exactly the same way, and then the other frames that you need to bring over are mostly flow control: MAX_DATA, MAX_STREAM_DATA, MAX_STREAMS, all that sort of thing. But conceptually that's something that we were previously getting from h2. So I think that's one of the major differentiators between these two approaches: there's not actually that much more that you need to bring over.
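The MAX_DATA-style credit accounting being discussed can be sketched in a few lines. This is a minimal illustration of the general receive-side pattern, not any stack's actual API; the half-window update rule is an assumption chosen for the example.

```python
class ReceiveFlowController:
    """Receive-side credit accounting in the style of QUIC's MAX_DATA:
    the receiver advertises an absolute byte limit and bumps it as the
    application consumes data. Illustrative sketch only."""

    def __init__(self, window: int):
        self.window = window        # target window size
        self.max_data = window      # absolute limit advertised to the peer
        self.received = 0           # bytes accepted from the peer
        self.consumed = 0           # bytes read by the application

    def on_data(self, n: int) -> None:
        # The peer must never send past the advertised limit.
        if self.received + n > self.max_data:
            raise ValueError("FLOW_CONTROL_ERROR: peer exceeded advertised limit")
        self.received += n

    def on_consume(self, n: int):
        """Returns a new MAX_DATA value to send, or None if no update is due."""
        self.consumed += n
        # Re-advertise once less than half the window remains.
        if self.max_data - self.consumed < self.window // 2:
            self.max_data = self.consumed + self.window
            return self.max_data
        return None

fc = ReceiveFlowController(1000)
fc.on_data(600)
print(fc.on_consume(400))  # None: plenty of window remains
print(fc.on_consume(200))  # 1600: time to send a MAX_DATA update
```

The point in the discussion is that this machinery already exists inside every h2 implementation (as WINDOW_UPDATE), so layering on top means rebuilding it.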
J
J
With that in mind, issue number 25 in GitHub is currently about datagrams: how we send datagrams over h2, and where the HTTP/3 datagram and all the capsule work is going in MASQUE. The proposal here is that we should use whatever approach to doing this comes out of whatever MASQUE decides, and since the MASQUE meeting on Monday,
J
that effectively means a layered approach, where the body of the CONNECT stream that you use is a sequence of capsules that end up being conveyed in DATA frames, and you define a datagram capsule that carries one of these datagrams. So you can still transport all of these things, but you're doing it using a capsule rather than defining a new frame.
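To illustrate the layered shape being described — capsules as type/length/value units carried inside the CONNECT stream's DATA frames — here is a rough sketch using QUIC variable-length integers from RFC 9000 §16. The DATAGRAM_CAPSULE code point is a placeholder; the real value is whatever MASQUE registers.

```python
def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer (RFC 9000 section 16).
    if v < 2**6:  return v.to_bytes(1, "big")
    if v < 2**14: return (v | (1 << 14)).to_bytes(2, "big")
    if v < 2**30: return (v | (2 << 30)).to_bytes(4, "big")
    if v < 2**62: return (v | (3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def decode_varint(data: bytes, offset: int = 0):
    length = 1 << (data[offset] >> 6)        # top 2 bits give the width
    value = int.from_bytes(data[offset:offset + length], "big")
    value &= (1 << (8 * length - 2)) - 1     # mask off the length prefix
    return value, offset + length

DATAGRAM_CAPSULE = 0x00  # placeholder code point for the example

def encode_capsule(ctype: int, payload: bytes) -> bytes:
    # Capsule = type varint, length varint, value.
    return encode_varint(ctype) + encode_varint(len(payload)) + payload

def decode_capsule(data: bytes, offset: int = 0):
    ctype, offset = decode_varint(data, offset)
    length, offset = decode_varint(data, offset)
    return ctype, data[offset:offset + length], offset + length
```

Because the capsule sequence is just bytes in the message body, any intermediary that forwards the body unmodified forwards the capsules too, which is the end-to-end property discussed later.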
J
That then brings up an interesting question about streams. We could have the capsule over the message body, going in the DATA frames, be used as a fallback when you don't have something natively.
J
So this is where we start talking about: well, if we're doing this kind of QUIC mapping over this h2 stream, do we have the ability to split out some of these concepts to whatever the native underlying transport's construct is? In theory, if I have HTTP/3 datagrams that are going over h3 and I want to send a WebTransport datagram, that's great: we've got a native construct all the way down to the wire for a datagram.
J
J
We've already coped with a lot of the implications of that, in terms of head-of-line blocking and that sort of thing, no matter what we do with WebTransport over h2. But if we take that same philosophy and apply it to streams: HTTP/2 provides native streams. So one of the big questions here is, should WebTransport streams use them?
J
Speaking of h1, there are a bunch of questions that we're raising here, and then we've got some time for discussion at the end. Do we think that it's interesting to support h1? Is that a requirement? Is that something that we have use for? If we're taking the approach of augmenting h2 with new native h2 frames, rather than layering on top of h2, then obviously that doesn't directly translate over to h1; but if we have something that is layered on top of h2, then it's not super hard.
J
The second question here is basically: is proxying through a generic h2 intermediary a requirement? That comes into play with a lot of the questions around capsules being end-to-end, and how we handle all of these different situations where, when you have an intermediary, the underlying transport might change; so I could be doing h3 and breaking out my datagrams to actual datagrams.
B
I just wanted to comment that I think it's important to keep in mind where things are deployed when answering these questions. My experience is in the enterprise: if you're talking about HTTP/1, you're talking about a pretty old box which might not be upgradable to do almost anything new, which is why my personal view is that HTTP/1 is not a requirement. But I'd like to hear what others think.
J
So, looking at the differences between these kinds of things, one area of difference is the end-to-endness versus the hop-by-hop-ness. A native stream is going to be hop-by-hop. So if we're using a native stream, or we've broken things out to use the native capabilities of the underlying transport, all of those things are inherently hop-by-hop, similar to how WebTransport over h3 uses QUIC streams.
J
When we look at unidirectionality, h2 does not natively come with unidirectional streams, so we've added those as an extension, which needs a couple of minor tweaks, and that snowballs into a couple of other considerations you have to deal with; that's where we ended up with some of the additional frames. So that's where we look at things like the unidirectional ability to reset a bidirectional stream and half-close it.
J
That's another thing that's added as an extension, so I don't think the current draft is claiming that everything works natively one-to-one with the way h2 works, which is a fallout from the fact that we did not keep h3 and h2 looking exactly the same; we're effectively bringing a lot of the h3 shape of things over to h2.
J
There are a couple of interesting questions here. There's flow control and stream limits: if that already pretty much does what we want for WebTransport, then that's great, because we don't have to bring over all the new frames and everything to do flow control, and we don't have to re-implement flow control, because our h2 implementation already has flow control that generally works.
J
Obviously, the corollary to that is: if the flow control and limits that we have for h2 don't match what we want, then we have to build them anyway. Going along with that, there's a question here about session state management. If you're layered on top of h2 and each new h2 stream that you open is effectively a new WebTransport session, then we've got great stream limits built into h2 that manage that state and control
J
how many of those you can open. That, of course, gets more complicated if you then break out to use native h2 streams and all that sort of thing.
J
So I think, when we talk about session state management, we're going to have to do some work no matter what. The question is, when we look at the pros and cons of these: one option is, if we build WebTransport over that single stream and we put new things on top, that potentially requires more effort than you would otherwise need, because you have to do new stream management, you have to bring over flow control, you have to implement all the flow control, and you have to do all of the mux and demux of everything.
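The mux/demux work being described — carrying several WebTransport streams over one reliable byte stream — can be sketched minimally. The framing here (fixed-width stream id and length) is invented for brevity; the actual drafts use variable-length integers and carry more state, such as FIN and reset signals.

```python
import io
import struct

def mux_frame(stream_id: int, data: bytes) -> bytes:
    """Hypothetical framing: 4-byte stream id + 2-byte length + data."""
    return struct.pack(">IH", stream_id, len(data)) + data

def demux(byte_stream: io.BytesIO) -> dict:
    """Reassemble per-stream data from the shared byte stream."""
    streams: dict = {}
    while True:
        header = byte_stream.read(6)
        if len(header) < 6:
            break  # end of input
        stream_id, length = struct.unpack(">IH", header)
        streams[stream_id] = streams.get(stream_id, b"") + byte_stream.read(length)
    return streams

# Frames from two WebTransport streams interleaved on one byte stream.
wire = mux_frame(0, b"he") + mux_frame(4, b"world") + mux_frame(0, b"llo")
print(demux(io.BytesIO(wire)))  # {0: b'hello', 4: b'world'}
```

Even this toy version shows the cost: the layered design reimplements stream bookkeeping that an h2 stack would otherwise handle natively.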
J
That is really interesting, in part because it works over h1, which may or may not be a goal; that's one of the things that we'd like to settle on, ideally in the next day or a couple of weeks. It's end-to-end by design, because I've got this stream that I've opened to the other endpoint, and whatever I'm muxing and demuxing over
J
it is up to me how much I choose to split out to whatever the native transport is doing. It could also be reusable in other contexts where QUIC is not available. We were talking at the very beginning of this WebTransport session, and we were discussing some of the other people who wanted to do this.
J
Personally — and this is my own personal opinion — there is a difference in the result of the equation if the question being asked is: for WebTransport alone, should I rebuild much of QUIC, and what QUIC provides, over an h2 stream and then never use it again, when I could just take my existing h2 that has most of it, strap four extra frames on the side, and generally be in good shape? Versus: I've
J
got these four or five different things for which I'm going to need a fallback when QUIC isn't available, and if I build this mapping of effectively QUIC over TLS, TCP, and so on, I can use it for WebTransport and I can also use it for other things. So this is where we start talking about layering. Do we do what we talked about at the beginning of the meeting and ask all of them to go and build everything that they're doing on top of WebTransport?
J
Or do we say, "hey, let's build this other thing that lives somewhere, and build WebTransport and all of those other use cases on top of it"? And we'll talk about CONNECT in a little bit. So that is the key question, I think, for right now: what do we think? What do we want to do? We'd love some input.
G
Interesting, it works. So I'm not sure that this is quite the framing that I would have used, but it's pretty close. What I found interesting was the discussion we had in MASQUE about this, where we effectively decided, rather than layering on top of a version-specific protocol — so we're not strictly layered on top of HTTP/3 — to layer on top of HTTP, and then we opportunistically use the native capabilities of the transport when they are available.
G
G
The tuning of it is not, but it is relatively simple to get the flow control piece right; it's not that much code in Rust, and I can't imagine it being too much more code in languages that are less good. So I'd like to concentrate on that as the framing for this one, which is: are we building on top of HTTP, where we get some end-to-end guarantees and it will work across any version?
G
Whether or not we care about that, I think, becomes almost irrelevant at that point, and then we can talk about opportunistically using those capabilities. That might result in us re-examining how we structure documents and whatnot, but those people who write specifications are the least important in any of these considerations anyway. So I prefer to say it that way.
J
I think that makes a lot of sense. The key question that I'm hearing there is effectively: is it more work or more painful to redo flow control? Speaking as somebody who has had a bunch of various bugs and kinks to work out in QUIC flow control as well as h2, and vice versa: can we do that everywhere, on top of HTTP?
A
Yeah, thanks. So I have a fairly strong opinion that I would really like to support this over HTTP/1.
A
I have experience with protocols that are designed to tightly integrate with HTTP/2, and there have been some challenges there — in particular, things like the hard dependency on trailers — that make it much more difficult to deploy in a general-purpose proxy, and so on and so forth.
A
So I would say, based on my experience, anything that smells of that I would be very hesitant to do. But also, I think, going over h1 for an application like WebTransport,
A
if you're going over TCP anyway, is fine; I would anticipate no real, meaningful performance difference. The other thing is, I think the QUIC-over-reliable-stream, or QUIC over TCP, or whatever you want to call this — I think there's a there there. We've kind of tossed this around before, and weirdly enough, I just dug up a design doc I wrote a year and a half ago about this that was called "QUIC over TCP," and I don't know if it's —
A
it says it's half-baked, so I suspect that it is. But I think this has real use cases that could be very valuable. The final point I was going to add is: yes, I have an HTTP/2 stack, so it seems really convenient to use that; but at this point, doesn't everyone already have an HTTP/3 stack as well? So sure, I don't have to rewrite the flow controller, because I can just use the one from my HTTP/3 stack.
A
If HTTP/3 wasn't already so widely deployed, I would say, oh, maybe this is onerous; but if we can reuse a lot of the functionality, I think it puts us in a better place, and certainly much, much better for multi-layered proxies.
J
Thanks, that makes a lot of sense. If you were doing QUIC over TCP, would you also do h3 over TCP?
A
Well, no, not the way I have it documented now; that would be unnecessary. It would just be the QUIC transport layer without — you remove the acknowledgements, and I can't remember, there are a few other bits that you get to remove, but it's mostly the ACKs, I think.
J
F
Yeah, I also find the idea of not trying to completely modify the HTTP/2 state machine to what we want more appealing. But my old answer to that was that we should just use the HTTP/2 state machine and port that to WebTransport over HTTP/3, and I'm less enthusiastic about that now than I was before.
F
One thing I want to point out is that if you go back to the very old presentations on what WebTransport should be, it had four protocols: it always had the QUIC transport, the h3 transport, the h2 transport, and the thing that I never created, which was called the fallback transport, and which is basically this proposal of independent framing. But instead of running directly over a CONNECT stream,
F
it used WebSocket messages. There were multiple reasons we wanted that, one of which was that WebSocket already exists and is what everyone knows how to deal with, and you could even deploy it in browsers today; but that never really advanced, because there didn't seem to be a lot of practical interest.
F
When I say practical interest: all the people who wanted a TCP fallback were more enthusiastic about HTTP/2. But now that we're looking into all of the details of how to layer it, that option sounds progressively more and more appealing.
J
That's interesting, because I think there's a difference between what we're talking about here and the h3 transport versus QUIC transport, or h2 transport versus fallback transport, questions. The difference being, my understanding at the time is we said: hey, we want to build WebTransport on top of HTTP, because otherwise we're going to be bringing our own — I need, effectively, structured metadata that I can send as part of these messages, and I want to define where it goes and how that all works.
J
So I think what we're talking about right now is effectively providing this on top of HTTP, rather than removing the HTTP parts of it. But yes, I do think the fact that those exist and are probably viable means that this is also probably viable, for many of the same reasons.
F
What I wanted — well, when you say headers, do you mean headers for streams inside WebTransport, or outside of it? Because the one thing that the older HTTP/3 transport proposal had is that it had headers not only for the CONNECT stream but also for every data stream, and we never ended up implementing that, because it's just a lot of complexity, especially given that headers have semantics that intermediaries and servers can misinterpret, et cetera.
F
I agree that we should not remove HTTP, but I want to point out that this idea existed before, and it was not discarded because we thought it was bad; it was discarded because there was no interest in it. It makes a lot of sense.
F
D
I just wanted to speak up as one of the video-over-QUIC folks too. Generally, what we have is some legacy TCP systems — either HTTP/1.1 or h2, or RTMP on the video side — that all suffer from head-of-line blocking.
D
I think the main compelling use case for WebTransport is that we want to get rid of that, and we get that for 95 percent of people, right, if UDP is enabled; and it really comes down to what's the path of least resistance to get support for that remaining five percent of people that are stuck on TCP. So I think it's pretty important to make any solution relatively simple, ideally layered on something existing. I don't want to rewrite my HTTP/2 server.
D
It just seems like that's a lot of effort for very little benefit, so that's my take: as much as we can, simplify this TCP layer — maybe go over WebSockets or something — just to layer it on top of existing software.
J
That makes sense. I think a lot of the question that we're facing right now is about how much effort it is to do it one way versus the other, right? How hard is it to bolt a little bit into your existing h2 server? Or, the alternative there being: hey, I'd like to build this entirely complete new thing, but it's on top of that h2 server, so maybe I don't even have to crack it open at all, which could be very attractive.
J
That also brings up potentially a set of additional use cases: other things over QUIC that would like a fallback like this, and that would potentially like not to have to define a fallback for each one of those things.
J
H
Yeah, I guess one thing I wanted to hear more about is the aptness of the resource management: is h2's built-in resource management even apt for what the resource management will be for WebTransport? I thought we would spend more time talking about what we'd do in h3,
H
our primary transport, which would help inform this discussion somewhat: if what h2 has baked in isn't even adequate, there's even less attraction in retaining it at all. I've obviously heard a number of people — I may be the only person who enjoys tinkering with my h2 stack occasionally, but I hear a lot of people saying they don't want to touch it at all, and that's real. And the way Martin has framed it too, about upgrading to native capabilities
H
where available: building this over h1, or over any generic in-order byte stream — basically QUIC over a single byte stream — could be a place to start. Then there could potentially be, if somebody wanted it, an h2 version that uses some native h2 streams, because they are apt. That could be an option, but I don't know; it sounds like you hear a lot more appetite for people wanting to implement QUIC streams over a single byte stream than for tinkering with h2 stacks.
F
F
There are a lot of scenarios where you would just want to use TCP, like inside data centers, where your TCP stack is heavily integrated with the OS and optimized; or if you're, say, a cloud service which does load balancing for your cloud users, you would like it to handle terminating QUIC for you and then connect onward with something that's probably going to be HTTP/1 or 2, because that's what people tend to use for the load-balancer-to-server case.
F
J
Thanks, yeah, that's really good to keep in mind. So I'm hearing a lot of support for the idea of effectively layering this on top of h2 rather than inside of h2. As a practical consideration here: are there any folks who would be willing to implement that if we started writing it up?
J
Yes, that's a good point: it's on top of HTTP, rather than HTTP/2 specifically. So I think an easy next step is to write that up and get it going. Are there folks who would be willing to implement, so that we can start doing that? Alan?
H
Yeah, I think the first step would be to produce a document that describes what such a thing would look like. I know Martin sort of took an early stab at it, but I think we probably need to fill in the details there. Once such a document was available, around the next hackathon I would consider doing it, and I think, as someone who has already implemented something very similar to what the existing h2 draft is —
H
it would be illustrative to me to have implemented both systems. Although I definitely hear in the room that I don't think anybody cares if it's easier to do over h2: nobody wants to touch it, no matter what, even if it was a one-line change. So maybe the effort level is just not a factor, and the other advantages — being able to go through an intermediary
H
that's never seen this extension, and being able to work over HTTP/1.1 — are both much more compelling, so it seems like that's probably the direction we want to go.
J
That's interesting. For context, we've shipped a previous iteration of one of these things, and it ended up being a four- or five-line change in our h2 stack to make it work for both the client and the server. But that was not bringing over a bunch of the additional pieces.
H
H
J
Cool, it sounds like we're getting — and Mike, is that something you'd like to say at the microphone? Because I think it's something I was also wondering.
C
So basically, I was just observing that, given that h2 is an in-order, reliable byte stream no matter how you spell it, I would not expect a performance difference from using native capabilities versus just doing it as capsules inside the body of the CONNECT; whereas in h3, by moving it to different streams and datagram frames, you actually are getting performance differences — concrete differences in behavior in the transport layer. So, yeah, I don't think that seemed to justify changes in h2.
G
If you use independent frames, then all of your frames will have a fixed five-byte overhead, which might be bigger than the overhead you have for an individual capsule, because the capsule encoding is more efficient. So there's a little bit of a difference there, but it's not enough to worry about, I don't think. To Alan's point on the resource management thing:
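For a sense of the size difference being discussed, the per-capsule header cost can be computed directly from the QUIC varint width rules (RFC 9000 §16); the type code 0x00 is just an example value, and small payloads get a smaller length field than large ones.

```python
def varint_len(v: int) -> int:
    # Width in bytes of a QUIC variable-length integer (RFC 9000 section 16).
    if v < 2**6:  return 1
    if v < 2**14: return 2
    if v < 2**30: return 4
    return 8

def capsule_overhead(payload_len: int, capsule_type: int = 0x00) -> int:
    # Capsule header = type varint + length varint.
    return varint_len(capsule_type) + varint_len(payload_len)

for size in (10, 100, 1200):
    print(size, capsule_overhead(size))
# 10 -> 2, 100 -> 3, 1200 -> 3 bytes of capsule header
```

So a small datagram pays only two or three header bytes as a capsule, versus a fixed per-frame cost with independent frames, which is the "encoding is more efficient" point above.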
G
I've sort of come to realize that I don't think this is going to inform our decision very much, because of the way that we've approached the resource management issue generally, which is to say that it looks like we're just going to dump the simplest possible resource management on top of this. In h3, we're going to have the streams sharing with all of the HTTP requests and we'll just sort of let people deal with it — that's where it sort of ended up, and that's not terrible.
G
H
Yeah, I just wanted to talk about performance for a second: it's not so much the wire overhead, particularly when you're thinking about in-DC use cases. The performance profile of the applications that can use this can look very different, and things like the fact that you'll have two hash table lookups — one for the outer stream and then one in your list of inner streams —
H
that kind of thing will definitely show up when you're doing high-RPS RPC use cases, or other types of DC use cases that may be leveraging this. So there could be a performance impact; but again, my read is no one cares — everybody is just done with touching h2.
J
That's a performance difference that is mitigated by breaking it out to use the h2 native capabilities, right? Not necessarily a reason to go bolt it into h2 itself.
H
A
I was thinking, on the performance question: if we think 95-plus percent of people are hoping to use the QUIC-based option, how much do we care about performance, as long as it's acceptable?
A
I agree that the more layers you have, the harder it is to optimize performance, but that's actually already a problem for us with HTTP/2 today. I think we've actually gotten to the point where our h2 stack is typically slower than our h3 stack, because the h3 stack is more tightly integrated, and our h2 stack goes over TLS and has an extra record layer and all of that stuff. So I guess I feel like,
A
if you want performance, probably WebTransport over h3 is actually the ideal option; you should just not even support hq in ALPN, and, as probably all of you know, that's what YouTube video servers do.
J
Which is actually an interesting point, because when you talk about the 95 and five percent of users, I think the data center use case was one of those where we weren't saying it's going to be some small subset of users; it's "within my data center I want to do this always." But at that point I can also completely control it and choose to do it over h1, if I'm concerned there.
J
J
We have one more thing to chat about briefly, which is another kind of existential question: CONNECT. As we've been talking about how we do these different things, how we integrate with MASQUE, and how we handle all of that,
J
we said, well, there's significant overlap in the participant group between these two working groups, but it's a little odd that we ended up using different approaches to how we handle CONNECT. WebTransport is using extended CONNECT with a protocol value of "webtransport," while in MASQUE, connect-udp is a new method rather than extended CONNECT with a protocol value of "udp," and similar for IP; and I think both working groups so far have kind of laid out their reasoning.
J
So I don't know that we necessarily need to make a huge change, but I think it's worth bringing up and worth talking about, and I think we should make sure that we've written down our rationale for why these ended up different. Why are we not using the same approach, if we've said this is the way to extend or do more things with CONNECT?
J
What is the way that we should do this? Is this something that we should talk about in HTTPBIS? What's the right way to do this? Because it seems a little weird that we did two different ones. If we have strong reasoning for it, great, but we should at least write that strong reasoning down. Thoughts and feelings?
J
G
Based on the discussion we've had — and I don't know if Kazuho is here, but Kazuho had a couple of good points about CONNECT versus new methods — based on that discussion, I'm leaning slightly more towards extended CONNECT, using the :protocol field to indicate which one of the protocols we're talking about.
G
The only thing that I genuinely care about is that we're able to use an https resource URL that identifies the resource that we're talking to, rather than the thing that you're connecting the IP tunnel or the UDP tunnel or the TCP tunnel to, which is the old CONNECT semantics. As long as we have that, which all of these options provide, then I'm happy. So I think we should just pick one, and we should probably pick one for all of the things that we're doing and only use that one.
A
David: yeah, so I agree with what he was saying. I'm going to bring this to the MASQUE list to say, hey, it seems like there are these arguments, and if folks agree there, then we can probably have MASQUE unify on what WebTransport is doing and call it a day. That sounds like the simplest option to me, but I'll bring it over to the MASQUE list and see what folks think over there.
F
The initial reason we used extended CONNECT is that it's consistent with what WebSockets over h2 does, and extended CONNECT is a good way of representing resources which are real resources but do not necessarily fit into the regular HTTP semantics, which is exactly what WebTransport is. So I believe that, at least for WebTransport, we should stick to extended CONNECT. I have no opinion on what the MASQUE working group would want to do.
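The extended CONNECT request being preferred here looks roughly like the following header block, per RFC 8441: unlike classic CONNECT, it carries :scheme and :path, so the request names an https resource rather than a host:port tunnel target. The authority and path values below are made up for the example.

```python
def extended_connect_headers(protocol: str, authority: str, path: str) -> list:
    """Pseudo-header block for an RFC 8441 extended CONNECT request.
    The :protocol pseudo-header names the tunneled protocol, e.g.
    "websocket" (RFC 8441) or "webtransport"."""
    return [
        (":method", "CONNECT"),
        (":protocol", protocol),
        (":scheme", "https"),
        (":authority", authority),
        (":path", path),
    ]

headers = extended_connect_headers("webtransport", "example.com", "/session")
print(headers[1])  # (':protocol', 'webtransport')
```

Classic CONNECT, by contrast, forbids :scheme and :path and puts only host:port in :authority, which is exactly the "old CONNECT semantics" distinction drawn above.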
I
Lucas. Oh yeah, having played around with some of the stuff for MASQUE and a bit of WebTransport, I just call these things CONNECT streams anyway, regardless of the method that they use, and that's the easiest way to explain it to anyone else who isn't in these meetings: it's just a CONNECT thing. So I would be in support of the CONNECT method with the pseudo-header.
G
So I know that certain people have something of an allergic reaction to extended CONNECT and RFC 8441, and those people aren't here. So I would suggest we take this to the HTTPBIS working group as well, and we'll make sure that those people have their input on the discussion. I think we're headed in the right direction, but I want to make sure that we don't create any surprises there.
J
A
Yeah, I'm jumping in as chair; I wanted to confirm that we'll definitely be discussing this on all three lists, to make sure everyone's aware and we get input from the community at large.
C
F
A
RFC 8441 is the one that defined this, and it's standards track. Excellent. Okay, thanks for checking; that would have been kind of a sad thing to notice in an IESG review.
A
Right, right. How do I — there we go, slides again.
A
All right, so it sounds like we made some really good progress today: we fixed a bunch of issues on h3. Sorry we didn't get around to that GOAWAY one; from Victor it sounded like it would require more discussion than would fit into this meeting anyway. And on the topic of what we want to do around TCP, we're getting a sense from the room that folks are unifying on building on top of HTTP semantics.
A
So we're going to confirm that on the list; but since we have some time left, if anyone disagrees with this direction of building over HTTP semantics and then having HTTP-version-specific optimizations, like the datagram frame in h3, could they come speak up now? Otherwise we're going to assume consensus in the room and validate it on the list over the next couple of weeks. So I'm going to pause here for a second.
A
A
Yep, so we're going to assume consensus in the room, and I guess that's a wrap. Anything you want to add?
B
A
My gut feeling would be: let's have the WebTransport-over-TCP enthusiasts huddle, in terms of seeing what they want to propose, and then discuss that on the list; and then, based on how that goes, we can see if we want to have another interim. Similarly for the WebTransport over HTTP/3 issues: if anything gets stuck, we can have an interim to discuss it, but let's see how things go on the list first. Alan, you had something you wanted to say?
H
Yeah, not so much an interim, but I wonder if maybe holding a virtual interop sometime between now and the next meeting would be useful. It seems like there are a few people with h3 implementations, and just setting aside a time where people would have a target to work on might help move that ball forward.
A
That sounds like a good idea, yeah. Cool, we'll also try to set that up as well. Great; I think in that case we are done with seven minutes to spare, hooray. Thanks, everyone, for coming, and we'll see you on the list and on GitHub pretty soon. Thank you. Bye.