From YouTube: IETF114 WEBTRANS 20220726 1400
A
All right, I think we can get started. Welcome to the WebTransport working group at IETF 114.
A
It's really nice to see so many of you in person, more than last time, and way more than two years ago when we, you know, had the first session of this working group in the pandemic. Next slide, please, Bernard.
A
We're not doing paper blue sheets anymore; there's a QR code to simplify stuff. Otherwise, it's accessible from the IETF agenda page. The full Meetecho allows you to have access to the chat; otherwise the Meetecho lite, which this QR code gives you, offers the opportunity to join the queue and to join the blue sheets, but without the rest of the interface. It works better than the other one on phones, for example. All right, next slide, please!
A
So here's a reminder on the buttons. You know, please stay muted unless you want us to hear your keyboard, and there's a hand icon for joining the queue. If you're remote, having the video on when you're speaking is pretty nice; it helps us understand you better, but it's not required.
A
The Note Well. Some of you may know this pretty well, but it's worth taking a minute to discuss. What the IETF does is covered by our Note Well, and if you're here it means you actually said you read it on the page, but everyone is used to clicking things without actually reading them. So let me take a minute on one of the parts of it.
A
Anything you say at an IETF meeting, or on the GitHub issues, or on the mailing list, is considered an IETF contribution, and that triggers the IETF policy on intellectual property, patents and all that. So if you don't know what that is, you should take a look, because if you're aware of a patent it means you have to disclose it, and that could become complicated: the lawyers at your company could be upset at you if you don't do it right. So just make sure you read all this. Next slide, please.
A
The Note Well also covers the IETF code of conduct and anti-harassment policy. I want to take a minute to underscore that we've never had a problem in this working group. Everyone has always been working together nicely, so let's just keep doing that, because everything's more fun when everyone's nice. And if you see anything that you think is not great, we have procedures in place for reporting it: come talk to me or to the ombudsteam, and we'll make sure it gets handled the best way we can. Next slide, please.
A
So as a quick reminder, the IETF has a strict mask policy for all working group sessions. If you're attending in person, you have to wear a mask unless you are presenting, or at the chair table (which is far away from everyone) and currently speaking. Also note that masks need to be certified with these certifications or something equivalent, which means that most cloth masks and surgical masks actually don't qualify; those don't really do a good job of preventing transmission of the latest variants. So, just FYI, we have free KN95 masks at the front desks.
A
If you need one, or you forgot yours, they come in all sorts of cool colors, too. Okay, and if you're talking at the microphone, you don't need to take your mask off. Just get close to the microphone; it works really well. Next slide, please!
A
All right, here are some more links. We're going to need a Jabber scribe and a note taker. Can we have a volunteer for Jabber scribe? All right, thank you, Jake! Can we have a volunteer for note taker, the fun part? Now I get to awkwardly stare at people in the room and remotely, and we're not going to start the session until we have a volunteer, because notes are very important.
A
Oh, thank you, Jake, that is amazing. All right, can someone do Jabber scribe? That's a very limited bit now. Thank you, Alan. All right, thank you both!
A
Oh, yes, we are now using Zulip, so you can either join through the Meetecho chat or through the Zulip client; both of them seem to work and are bridged together. And if you want Alan to say something (for example, let's say you're remote and you don't have audio), just type "mic:" followed by something, and Alan will jump in the mic queue and say what you said at the microphone. Awesome, thanks, both of you. Next slide.
A
Next slide, please. Jan-Ivar, are you there? We see you and we can't hear you. Oh, say something again.
G
Hi, it's me. Can you hear me now? Yes, we can, go ahead. All right, thank you. So I'm Jan-Ivar, co-chair with Will Law for the W3C specification for WebTransport. I'm here to give you a progress update on changes since March 24th. We published another working draft; the latest version is June 23rd of this year. And we have a charter extension underway for an additional year, because the current charter expires September 22nd.
G
So if you have input on that, it's still not too late to provide it. And then the next bullet has a typo: it should say "more realistic timetable" for the year, because we presented an earlier, optimistic timetable that was not anywhere near realistic.
G
So currently this is what we're aiming for: end of September for Candidate Recommendation, which requires stability in the API. And then by end of the year, our goalpost at the moment is a Proposed Recommendation, which would require two independent implementations per our charter. That would put us in line for a call for review in February and, ideally, if everything works, publication as a Recommendation by the next AC meeting in April-ish. All right, so we've defined some milestones.
G
And so here are some decisions since our last presentation in March. We have added per-stream stats: that means per outgoing, incoming and duplex stream, not datagrams. These are bytesWritten, bytesSent and bytesAcknowledged, which are not total network byte counters.
G
However,
they're
they're
mostly
concerned
with
the
bytes
application
bytes
that
are
written
to
the
stream
and
how
much
of
that
has
been
sent
and
how
much
of
that
has
been
acknowledged,
so
bytes
acknowledged
will
always
be
less
or
equal
to
by
sent,
which
will
always
be
less
or
equal
to
bytes
written
and
then
for
datagrams.
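The ordering just described can be captured as a small invariant check. This is a sketch: the stats object shape (bytesWritten, bytesSent, bytesAcknowledged) follows the discussion above, not a finalized W3C interface.

```javascript
// Per-stream stats are application-byte counters, so they must satisfy:
//   bytesAcknowledged <= bytesSent <= bytesWritten
function checkStreamStatsInvariant(stats) {
  return (
    stats.bytesAcknowledged <= stats.bytesSent &&
    stats.bytesSent <= stats.bytesWritten
  );
}

// Example: 1000 bytes written by the app, 800 sent, 600 acknowledged so far.
const ok = checkStreamStatsInvariant({
  bytesWritten: 1000,
  bytesSent: 800,
  bytesAcknowledged: 600,
});
```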
G
And then, for datagrams, we reduced the priority algorithm to non-normative guidance, because we found some mistakes in it. We still haven't gotten any further on specifying that algorithm in detail, so that's left to implementation.
G
We now support a requireUnreliable boolean, which defaults to false. This is so applications can, in the future, specify whether they want to require UDP; by default they will get fallback to HTTP/2. And we added another read-only property for that, so that you can tell what you're looking at. Another issue was: is connection pooling off the right default? And yes, so allowPooling still defaults to false. Next slide.
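As a sketch of the defaults just mentioned: the option names requireUnreliable and allowPooling are taken from the discussion, and the real constructor dictionary may differ.

```javascript
// Hypothetical option resolution for a WebTransport-style constructor:
// both flags default to false, so by default a session may fall back to
// HTTP/2 and is not pooled with other connections.
function resolveTransportOptions(options = {}) {
  return {
    requireUnreliable: options.requireUnreliable ?? false,
    allowPooling: options.allowPooling ?? false,
  };
}
```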
G
All right, so current issues of debate. We have three remaining issues; we've been circling around the same remaining issues, and this is one of them: people want to send media, and that doesn't always work so great with the default congestion control in QUIC. So we have agreement to provide some constructor-level configuration API surface that would allow an application to specify its preference for the type of congestion control to be used.
G
Now, we know that that's not necessarily available anywhere yet. However, we hope that we can get the API ready, and have hashed out all the API decisions here, to get us to Candidate Recommendation; we can then subsequently mark this as a feature at risk if implementations fail to materialize prior to Proposed Recommendation. So discussions around shape remain; we have two proposals with two directions.
G
A second issue is datagrams versus streams and relative prioritization. In the discussion of having a prioritization API, attention seems to center on ordering instead of bandwidth allocation (that's an observation from the chairs); ordering requires strict, not weighted, levels.
G
We think we need to expose at least eight resettable levels to match what browsers are doing in most cases. This would allow JavaScript to down-prioritize ongoing streams by basically saying: on this stream that I've sent before, set its priority now to a lower level. That should give enough granularity to solve most problems with some effort. The assumption there is that you are going to have JavaScript involved in the send loop, being very active and responding to changing conditions in your connection.
G
So this is a low-level version. Now, the alternative would be to provide something more upfront, where you can specify fixed levels, and that has been specifically requested for Warp; Chrome has volunteered to investigate whether that is practical. Having an int32 number of levels would provide JavaScript with more ability to declare all the priority levels up front: I want to send at that order, and I don't need to change it later. Next slide.
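To make the first shape concrete, here is a toy strict-priority picker over resettable levels; this is purely illustrative and is not how any browser actually schedules stream data.

```javascript
// queue: array of { stream, priority } entries; higher number sends first,
// FIFO within a level. Re-setting an entry's priority later is how
// JavaScript would down-prioritize an ongoing stream.
function pickNext(queue) {
  let best = null;
  for (const item of queue) {
    if (best === null || item.priority > best.priority) best = item;
  }
  return best;
}
```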
H
Sorry, hi, I'm Jake Holland. I wanted to ask about the congestion control API: are you also going to set the server-side congestion control with this API? No?
G
No, I should have clarified. We assume servers will take care of their own congestion control, and clients in that case will just receive what they receive. So, apologies, this would only be for ingestion, basically for clients sending media to servers. That's the missing gap that we're trying to specify. Thanks for that question.
A
Thanks. And, just for everyone, I think it's really important to get some IETF involvement in these issues from the W3C.
A
So I just pasted a link to the GitHub in the chat here, and I think the congestion control one is a perfect example. There might be some people in the room who have opinions on whether latency or throughput matters more, and there might be more of them in this room than there are at the W3C.
A
So please go and comment on those issues. That's what we're asking for here, because that's the kind of thing where the W3C has to figure out an API for it, but we have the congestion control experience at the IETF. This is the kind of cross-pollination that we love to see. And I see Bernard in the queue.
I
Yeah, I just wanted to mention something, Jan-Ivar, which has come up at this meeting, which is the idea of some of the L4S stuff. In that situation you can have algorithms that are really about both latency and throughput, like Prague.
G
Yes, there was a detail the API slide didn't show, which is that, for the second proposal, when you expose the name you can also expose other attributes of each congestion controller, such as what the aim of it is, and you can have enums for several of these properties, if you will. But thanks, David, that's a good question, and it's good to highlight that these aren't set in stone in any way.
G
This is just early discussion, and if it provokes you to participate, that is excellent. We could definitely use some more input from more people, and that would probably help move this discussion along.
F
Apple here. As David hinted, there are many people with opinions about what matters more, throughput or latency; they spent four wonderful days discussing this last year.
F
It
may
be
not
useful
to
allow
the
application
to
set
the
name
of
the
construction
control,
because
by
careful
tweaking
of
the
parameters,
one
can
cause
neurina
behave
like
cubic
and
cubic
behave
like
tahoe
and,
what's
not,
may
be
much
more
productive
to
have
the
application
express
its
its
goal.
F
Do
I
want
to
be
do
I
do
I
need
the
real
damage
latency,
how
sensitive
mi2
delays?
How
sensitive
am
I
to
so
good
spikes,
etc?
A
Thank
you
omar.
Can
I
ask
you
to
like
kind
of
take
what
you
said
and
put
it
in
that
issue.
I
think
that's
really
good
feedback
I'll
post
a
link
to
that
specific
issue
on
in
the
chat.
Thank
you
so
much
stewart
thank.
J
I saw David was talking about throughput and latency and looking meaningfully in my direction, so I felt compelled to say something. I worry here that there's a tendency to overcomplicate things. As we've seen this week at the hackathon, with the L4S work and other things that have been going on in the industry this year, it's possible to have low latency and high throughput.
J
At
the
same
time,
it's
not
an
either
or
choice.
Priorities
become
very
problematic
because
somebody's
got
to
decide
what
the
relative
priorities
are
and-
and
if
you
have
enough
bandwidth
for
everything,
then
it
doesn't
matter.
Every
flow
gets
what
it
needs
and
if
you
don't
have
enough
bandwidth
for
what
you
need,
then
it
becomes
extremely
tricky
to
figure
out.
What
is
the
right
way
to
resolve
that?
J
Do
you
have
a
strict
priority
where
you
have
you
have
total
starvation
for
the
lower
priority
things,
or
do
you
have
some
relative
priority?
This
is
all
very
complicated,
but
the
good
news
is
with
l4s
and
similar
technologies.
The
whole
problem
goes
away.
You
you
open
multiple
streams
and
they
each
get
a
nominal
fair
share
of
the
capacity
when
it's
scarce.
When
when
bandwidth
is
abundant,
then
everything
gets
what
it
needs.
J
So
I
guess
the
summary
is
that
let's
not
over
complicate
this
with
mechanism
that
is,
is
really
hard
to
understand,
and
even
the
people
at
the
ietf
who
are
congestion,
control
experts
find
this
hard
to
understand.
So
the
average
web
developer
is
probably
just
going
to
twiddle
knobs
randomly
without
even
understanding
the
implications
of
what
they're
doing.
A
Thank
you
stuart.
Can
I
ask
the
same
thing
and
just
and
also
to
everyone,
to
also
add
that
on
the
github
issue
for
the
w3c,
thank
you
alex.
K
Hi
everyone
I'm
alex
schneichowski,
I
work
at
google,
and
one
of
the
things
I
wanted
to
mention
is
that
I
was
a
little
bit
surprised
when
I
saw
this
slide,
because
I
remember
when
we
were
deploying
bbr
on
the
youtube
cdn,
and
one
of
the
concerns
that
we
had
was
that
we
actually
saw
people
complaining
about
bbr's.
Initial
lack
of
fairness
with
all
the
other
congestion
controllers,
and
one
of
the
things
that
I
worry
about
here
is
that,
even
if
you
do
something
nice
like
saying,
you
know,
aim
low
latency
versus
aim
throughput.
L
Hi
luke
from
twitch
here
so
first
thing:
the
congestion
control,
it's
something
that
is,
I
think,
the
low
latency
hint
is
pretty
important.
One
of
the
things
with
warp
that
we
struggle
with
is
cue
management
and
just
trying
to
have
this
buffer
in
the
socket
that
needs
to
be
sent
and
buffer.
Bloat
is
an
issue
like
if
there's
500,
milliseconds
of
rtt.
It's
like
there's
no
point
prioritizing
anything
like
you,
just
everything's
gonna
be
ordered
over
the
wire.
L
So
just
a
way
of
you
know
saying
congestion.
Control
like
keep
the
rtt
down
is
important
and
for
the
next
slide
just
I
think
there's
two
little
things
that
come
down
to
it.
One
is
like
you
said
mentioned:
ordering
is
important.
It's
not
clear
if
the
eight
levels
the
ordering
is
mainly
like
is
priority
two
always
lower
than
priority.
Three,
and
exactly
like
you
mentioned
as
well.
L
You
need
at
least
enough
levels
as
there
are
active
streams
and
eight
is
kind
of
low,
but
for
warp
it
would
be
fine
honestly,
but
if
you
start
doing
stuff
like
per
frame
priorities,
then
eight
is
just
gonna
be
artificially
low.
It's
almost
like
a
flow
control
limit
of
eight
hard-coded
there's
just
not
much
you
can
do,
but
all
end
of
the
day
it
all
comes
down
to
buffer
management.
L
Just
a
way
of
saying
we
want
this
data
to
be
sent
over
the
wire
first
and
nothing
else
can
get
in
front
of
it.
Thank
you.
M
Yeah, Donald here. I get the impression that what is actually meant by throughput versus low latency is CUBIC versus GCC, which personally I'd be fine with. Obviously that's short term; longer term, the congestion control people will come up with something more clever. But in the short term those are the two algorithms that are actually deployed in Chrome, and I suspect the idea is to switch out the one for the other.
A
Thanks. Yeah, to relay a conversation that happened away from the mic: the GCC you mentioned is Google's congestion control, not everyone's second favorite compiler. Victor, you're next.
C
To
the
mic,
please,
okay,
I
just
wanted
to
say
that
there
is
practical
tradeoff
between
throughput
and
latency,
in
the
sense
that
there
is
some
level
of
fundamental
uncertainty,
of
what
your
benefits
and
any
attempt
to
probe
it
would
result
in
building
up
secure.
So
that
is
one
of
the
fundamental
tuning
properties
that
pretty
much
every
congestion
control
scheme
has
to
overcome.
So
from
that
perspective,
setting
latency
targets
make
sense.
N
Ian
sweat,
google
yeah.
I
would
also
prefer
a
objective-based
approach,
whether
it's
latency
or
throughput,
I
mean
even
two
levels
is
vastly
preferable
so
like
there
are
times
or
we
actually
have
deployments
where
we're
using
bbr
b1,
but
we
have
it
tuned
to
be
much
lower
latency
and
it's
not
as
good
as
like.
You
know,
a
real-time
congestion
control,
but
it
does
prevent
buffer
bloat
and
so
for
a
given
dash
controller,
as
did
before
you
can
commonly
tune
parameters
to
like
provide
output.
That's
much
more
similar
to
one
of
the
other.
N
I
don't
really
know
what
we're
going
to
do
with
cubic
in
this
situation
cubic
seems
like
always
the
wrong
option
as
a
concession
controller,
but
it's
becoming
a
proposed
standard
and
it's
what
we
got.
So
I
don't
let's.
Let's
hope
that
no
one
actually
ships
cubic
by
default
here,
but.
O
Thanks
colin
fascinating
to
hear
the
only
thing
we're
standardizing
sucks
the
I
wanted
to
actually
jump
back.
A
bunch
too.
There
are
some
comments
about
users
of
this
at
the
api
level.
Will
just
be
confused
with
this
and
not
how
know
how
to
set
these
things,
and
that's
that's
unquestionably,
true
in
some
cases,
with
all
these
things,
I'm
not
arguing
against
that,
but
I
think
that
is
the
wrong
thing
to
design.
For
that.
O
The
thing
is
we
have
to
realize
that
whatever
levels
of
controls
here,
we
give
limit
what
the
applications
that
literally
billions
of
users
use
like
zoom
webex
these
other
things
that
are
using
huge
numbers
of
minutes.
They
do
know
how
to
set
this
stuff.
Okay,
they
have
some
very
good
people
at
all
of
those
companies
are
doing
broad
webrtc
products
and
if
you
don't
give
them
the
controls
to
be
able
to
set
things
up
the
way
they
need,
whether
it's
twitch
or
somebody
else.
O
They
just
can't
use
this
and
they
will
will
just
abandon
the
web
stuff
and
go
use
thick
apps,
which
is
it
was
the
problem,
so
we
have
to
design
for
the
use
cases
that
represent
large
numbers
of
users
of
end
users
on
the
internet,
not
designed,
for
you
know
an
average
web
developer.
Who
may
not
understand
this
stuff,
so
I
think
that
we
should
design
for
giving
lots
of
control
of
what's
going
on
at
this
api
level
and
I
think
that's
a
different
direction
than
we
have
traditionally
gone
on
javascript
level
apis.
B
Cool. Tommy Pauly, Apple. So, to Cullen's point, I'm sympathetic that you want to be able to have fine-grained control, particularly if you're doing something like option B, where you want to give, like, "here's the specific name of my congestion control algorithm", and I'm sure the people who know what they're doing will want to take advantage of that.
B
But is there a reason that we're specifying the properties of the congestion controller we want, as opposed to specifying the properties of the traffic we're sending? To say, you know, "I am doing real-time, latency-sensitive, interactive audio", or "I'm just doing streaming of video", or "I'm doing more bulk data transfer", and that way the system can choose the right congestion controller, but also potentially other things. That's the model that we've seen in other APIs, like in TAPS and such, when they expose it,
B
with these other names like low latency and throughput. So if we're not going to give it a specific name, can we describe the traffic instead of the congestion controller properties?
P
Mo Zanaty, Cisco. Regarding Cullen's point, I think if you look at the WebRTC example, we started off, you know, thinking this stuff is way too complex for the average, you know, JavaScript person: don't give them control, the browser has to do the RTP, the browser just does the codecs.
P
So
I
think
we
should
realize
that
the
application
innovators
are
faster
than
the
browser
vendors
and
we
need
to
bias
some
of
our
designs
to
that
one
specific
thing
on
the
prioritization,
though,
when
you
start
talking
about
abstract
levels,
you
know
one
two,
three,
four,
five,
six,
whatever
like
someone
said,
those
numbers
don't
mean
much.
Unless
you
know
what
the
actual
prioritization
method
is,
whether
it's
you
know
strict
priority
or
what
the
queuing
discipline
is
and
all
that
one
of
the
things
people
may
want
to
consider
is
in
rmcat.
P
There
was
a
proposal
called
nada,
it's
actually
an
rfc
now,
but
it's
an
experimental
congestion
control
and
one
of
the
interesting
things
about
it
is.
It
has
weighted
fairness,
and
so,
rather
than
expressing
priority
in
terms
of
you
know,
abstract
numbers
they
are
weights
and
they
are
weights
relative
to
what
a
default
unprioritized
stream
would
be.
So
you
have
an
atom.
Almost
you
know,
as
if
you
have
an
atom
stream
that,
if
you
don't
do
anything,
that's
what
you
get.
P
But
if
you
want
to
have
a
priority,
you
you
specify
a
weight.
So
if
you
want
to
be
a
three
three
times,
heavy
stream
or
a
half
heavy
stream,
so
that
weight
is
in
is
in
terms
of
an
absolute
thing.
It's
in
terms
of
it's
relative
to
an
absolute
thing,
which
is
the
default
stream
that
you
would
get
if
you
didn't
do
any
prioritization.
So
I
think
that
may
be
a
useful
concept
to
look
at
when
you
look
at
doing
the
prioritization
apis.
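The weighted-fairness idea from NADA (RFC 8698) is easy to state in code: each stream's share of the bottleneck is its weight relative to a default stream of weight 1. A sketch for illustration, not part of any proposed API:

```javascript
// Split `capacity` among streams in proportion to their weights.
// A weight of 1 is the default, unprioritized stream; 3 means
// "three times as heavy", 0.5 means "half as heavy".
function weightedShares(weights, capacity) {
  const total = weights.reduce((sum, w) => sum + w, 0);
  return weights.map((w) => (capacity * w) / total);
}

// Three streams weighted 3, 1 and 0.5 splitting 9 Mbps get 6, 2 and 1 Mbps.
const shares = weightedShares([3, 1, 0.5], 9);
```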
Q
Eric Kinnear, Apple. If we go back to the congestion control stuff and continue to talk about it: one of the challenges that we've seen is in trying to express something like "low latency" and "throughput". I'm usually one of the first people to jump up and say no, no, describe what you want, the properties of what you're looking for, rather than, you know, hard-coding CUBIC and assuming that's just going to get you what you want.
Q
But
I
think
we've
alluded
a
bit
in
this
discussion
to
the
fact
that,
like
you
might
have
something
that
gives
you
both
low
latency
and
throughput,
and
so
we've
almost
we've
had
real
trouble,
trying
to
specify
something
that
actually
makes
sense
in
real
life
for
people.
It's
almost
like
you
want
the
inverse
of
that
of.
Q
Let
me
tell
you
the
thing
I
am
most
willing
to
compromise
on,
because
we
haven't
talked
about
like
power
here,
but
that's
another
consideration
that
you
might
be
taking
into
account,
especially
if
you're
you
know
trying
to
upload
something
to
do
ingestion
of
media
in
the
background,
while
the
user
goes
off
and
does
something
else,
and
so,
once
you
start
saying,
oh
well,
I'm
most
willing
to
compromise
on
latency
because
I'm
interested
in
everything
else
being
better.
Q
That
starts
to
get
really
messy
and
kind
of
gross,
so
I
would
almost
support
what
tommy
was
saying
of
either.
Let's
go
all
the
way
up
to
the
top
and
like
describe
what
we're
doing,
rather
than
some
intermediate
property
that
we
think
will
accomplish
that
goal
or,
and
maybe
both
also
give
people
a
direct
ability
to
just
say
nope.
Like
I'm
advanced,
I
know
what
I'm
doing,
I'm
working
on
developing
conjunction
controls
and,
like
I
know
I
want
bbr
v2.
Let's
do
it.
That's
going
to
be
exactly
what
I
want.
A
That
just
poking
the
congestion
control
there
would
be
very
successful
in
an
itf
meeting,
and
it
was
so
thanks
everyone
for
the
really
good
discussion.
I'll
repeat
my
point
about
please
adding
that
on
the
w3c
github.
This
is
really
good
input
for
them
and
that's
something
that
they
can
act
on.
So
thank
you
all
right,
yanivar
keep
going.
G
Oh yes, thank you. Can you hear me? So, yes, thanks again, and yes, I think we should say the W3C will probably be perfectly happy to specify whatever you guys come up with, and we're very open to your input. So thank you. The last slide: the third issue under debate is to expose some stats to enable JavaScript to build more RTP-like real-time protocols for client-to-server audio and video.
G
So
the
previous
discussion
was
all
about
giving
javascript
control
knobs
for
what
the
browser
can
do
about
it
and
there's
some
some
who
are
trying
to
hand
off
this
wholesale
to
javascript
as
well,
and
you
know
somewhere
in
the
middle.
You
want
to
control
all
of
this.
So
this
it's
a
separate
issue
that
we're
tracking
it's
assumed
to
be
about
datagrams
only
or
at
least
at
the
connection
level.
Only
so
this
is
again
open
for
discussion.
G
So we've asked the question: what kind of stats would JavaScript need in order to build its own congestion control algorithm, for example? There's RFC 8888, which suggests latest RTT, packet departure and packet arrival times (which I assume is arrival on the server, right?), and then ECN; maybe an ACK info would be sufficient. We've also reached out to David Balderson (I hope I got your name right) for some experimental data on implementing RTP over WebTransport with BBRv2, or maybe BBRv2 plus SCReAM.
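As a rough illustration of what JavaScript could do with stats like latest RTT, here is a made-up delay-based rate rule; the thresholds and structure are assumptions for illustration, not RFC 8888, BBRv2, or SCReAM.

```javascript
// Back off when queuing delay (latestRtt - minRtt) grows past a quarter
// of the base RTT, otherwise probe gently upward. Rates in kbps, RTTs in ms.
function nextRate(currentRate, minRtt, latestRtt) {
  const queuingDelay = latestRtt - minRtt;
  if (queuingDelay > minRtt * 0.25) return currentRate * 0.85; // drain the queue
  return currentRate * 1.05; // probe for more bandwidth
}
```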
G
And the reason why this is only for datagrams, or only at the connection level, is that the JavaScript APIs for outgoing and incoming streams do not operate at the packet level. So the questions we still have are: are packets and datagrams sufficiently analogous for an RTP-like implementation? And, you know, this is again an exploratory issue, so questions are welcome, or input welcome, I should say. I think that's my last slide.
A
All right, thank you, Jan-Ivar. The only...
Q
How's that going to do? Yes? No? Good? Bad? Sweet, all right, cool. I'm Eric Kinnear from Apple, and if we can get the next slide, please: we're going to talk a bit about the capsule design team that we started at IETF 113. The main question was: what the heck should we do about capsules? Like, should we use them? Should we not use them?
Q
We
had
an
existing
h2
spec
for
how
we
do
web
transport
over
h2
and
we
were
defining
all
of
these
new
h2
frames
that
we
wanted
to
use
to
make
a
kind
of
a
baby
quick
that
you
run
over
an
h2
stream,
and
we
said
some
of
these
could
also
look
very
very
similar
to
what
we're
using
in
h3,
where
we
defined
a
couple
of
different
capsules
as
well.
Q
And
if
we
go
to
the
next
slide,
we
can
see
like
we
had
a
datagram
capsule,
which
is
coming
from
hp
datagrams,
and
we
want
to
use
that
to
send
it
on
an
h2
stream.
Just
as
much
as
we
want
to
send
that
in
h3,
there's
also
a
closed
web
transport
session
capsule
in
h3,
and
if
we
go
to
the
next
slide,
we
can
see
we
have
a
whole
pile
of
them
for
h2.
So
the
obvious
crossover
here
is
something
like
datagram.
Q
We
also
had
padding
reset
stream,
stop
sending
actual
stream
capsules
and
then
flow
control
which
we've
stuck
onto
one
line
here,
but
is
some
combination
of
max
data,
max
streams,
max
stream
data
and
then
blocked
variants
for
all
of
those
things.
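All of these share the capsule framing from RFC 9297: a type, a length, and a value, with type and length encoded as QUIC variable-length integers. A toy parser, restricted to one-byte varints (values 0 through 63) for illustration:

```javascript
// Parse one capsule from `bytes` starting at `offset`. Real varints can be
// 1, 2, 4 or 8 bytes long; this sketch assumes the one-byte form (top bits 00).
function parseCapsule(bytes, offset = 0) {
  const type = bytes[offset];
  const length = bytes[offset + 1];
  const value = bytes.slice(offset + 2, offset + 2 + length);
  return { type, length, value, next: offset + 2 + length };
}
```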
Q
We
had
looked
at
this
slide
in
113
as
kind
of
the
precursor
to
spinning
up
this
conversation,
so
I
wanted
to
just
look
at
it
again,
so
this
is
the
full
list
of
all
the
different
things
that
we
defined
for
h2
and
if
you
go
to
the
next
slide,
we
had
kind
of
tentatively
talked
about
hey,
there's
this
datagram
one
and
it's
shared
with
h3,
and
so
that
would
be
cool
if
these
things
shared-
and
we
didn't
just
define
two
of
the
same
thing
and
that's
kind
of
what
got
us
talking
about.
Q
Should
we
be
sharing
everything
else?
How
does
the
rest
of
this
work?
What
is
the
role?
If
I
can,
I
send
a
wt
stream
capsule
on
an
h3
stream,
and
is
that
cool?
Does
that
give
us
awesome
version
independence?
Does
that
destroy
everything
and
make
things
go
down
in
flames?
So
next
slide?
Please,
we
also
opened
the
can
of
worms
that
is
flow
control.
Q
That's
what
we
started
with
in
terms
of
problems
for
everything
and,
like
I
said
on
the
slide
here,
we'll
talk
a
little
bit
more
about
flow
control
and
some
of
that
stuff
later,
but
the
opportunity
arose
to
say:
hey.
We
have
all
of
these
different,
like
stream,
max
stream
data
blocked
capsule.
Should
we
send
that
on
h3
and
what
does
that
mean?
Q
So
that's
kind
of
what
we
set
out
to
solve
where
we
are
right
now
is
we
have
a
pull
request
against
h2
and
a
pull
request
against
h3
that
we
will
send
the
links
out
to
on
the
mailing
list
and
ask
for
a
bunch
of
input
and
review.
I'm
going
to
summarize
very
quickly
in
slide
form
what
those
do,
because
that's
often
a
lot
more
grockable
than
reading
a
bunch
of
diff
from
what
it
used
to
look
like.
Q
But
I
chose
to
use
explode
here,
so
we're
going
to
continue
with
that
one,
and
we
looked
at
some
pros
of
why
we
would
want
that.
It's
really
attractive
from
a
symmetry
perspective,
to
have
this
single
conceptual
model
that
looks
kind
of
like
a
miniature
version
of
quick
that
you
can
run
on
any
http
exchange
that
you
have
anywhere.
You
could
potentially
get
h1
support
out
of
this
for
free,
you
just
you
know,
doesn't
matter
it's
completely
transport
agnostic.
This
is
just
how
I
send
web
transport
streams.
Q
This is also kind of fun, because if we reuse all this stuff, things that we do in the future for different extensions, if we add new capsules, those automatically work for h3 and they automatically work for h2. And if there's a "but" coming, it's on the next slide: there's a way longer list of cons, which are mainly and primarily that we care the most about h3 for WebTransport, and h3 is the one where you have the most native feature usage already, right? So, like, datagrams actually go in h3 datagrams. Now you'd have to be able to handle all of those capsules arriving on the same stream.
Q
Even in the common case, you have to be like: how is it coming in? What do I do? Are you allowed to switch partway through? Like, what if some go over a single h3 stream, but others you choose to split out into their own h3 streams? And can I restrict that if I'm not willing to give you some of those resources, and what does that do to our stream limits for flow control, which we're going to talk about in a second?
Q
So you are not sending a datagram capsule on your h3 stream; it is an actual datagram, and that persists throughout all of h3. Everything that h3 can split out, which is most everything, looks just like it does today. There's no debate over "oh, but it came in this other way, am I supposed to handle it some weird different way, and can I signal to the other person about it?" So no weirdness there; it just uses a native feature if it can.
B
So, Tommy Pauly again. I mean, the datagram support is indicated via a transport parameter in QUIC, so you need that, and that might not be there, right? And you have the h3-level setting.
B
So
I
mean
either
you're
going
to
say
you
don't
like
web
transfers
just
going
to
break
in
those
cases
or
you
need
to
tweak
the
language
where
you
say
that
you
must
use
datagram
in
h3
to
say
you
must
use
datagram
if
you
have
the
transport
parameter,
but
if,
for
some
reason
the
other
side
didn't
do
the
transfer
parameter,
you
know
already
that
it
can't
do
it.
So
then
you
must
use
the
capsule
version
of
it.
But
I
think
saying
that
you
have
to
support.
Capsules
is
fine,
because
you
always
can
do
that.
E
In this context, it does mean that, if someone's going to try to deploy this and they're using intermediaries in their deployment, they're going to need to ensure that when the front end receives one of these things and says "yes, it's okay", the connection onward is dealt with somehow, whether that means translation or whether it means full-on support for the same sort of feature set. I think that's just something we can write down and explain.
E
E
E
I
think
you
want
to
have
all
all
the
prerequisites
in
also
signaled
in
that
process
and
if
they
don't
appear,
then
something's
broken,
and
you
fail
so
that
you,
you
can
build
software,
that's
rational
with
with
all
of
these
things,
so
you
don't
have
to
okay,
so
I've
got
a
layer,
that's
dealing
with
capsules.
E
Q
E
I
understand
that
to
be
a
problem
for
some
people,
but
I
think
this
whole
idea
of
implicit,
signaling,
that's
tied
to
other
things
is
com
is
problematic
when
it
crosses
layers,
it
may
be
appropriate
at
the
layer
in
which
it
was
done
for
that
one,
because
it
was
all
tied
into
the
same
negotiation.
I
don't
like
that,
but
that's
where
we
ended
up
and
yeah.
I
guess
something.
A
F
S
I
would
like
to
point
out
that
there
is
a
use
case
of
web
transport
that
doesn't
require
datagrams
a
somewhat
primitive
use
case,
but
you
can
imagine
just
doing
the
stuff
you
did
on
on
websocket
via
web
transport
now
and
benefit
from
streams
and
never
sent
a
single
datagram.
So
I
don't
see
why
datagram
support
needs
to
be
a
requirement.
Q
That
that
was
going
to
be
my
next
question
was:
is
there
anybody
who's
planning
on
deploying
this
that
doesn't
have
datagrams
and
doesn't
want
them
and
would
rather
have
code
that
handles
datagram
capsules
coming
in
on
an
h3
stream,
because
that's
kind
of
your
alternative
right,
so
you're
still
going
to
have
to
write
code?
That
has
the
letters,
data
and
gram
in
them?
It's
just
now.
You
have
to
have
an
if
statement
and
deal
with
it
in
multiple
places.
S
P
E
Martin,
I
noticed
the
queue
has
just
gotten
long,
so
I
think
what
we're
looking
for
here
is
interoperability,
and
if
we
have
people
that
want
to
use
the
protocol
without
with
a
sort
of
I'd
like
to
pick
and
choose
the
the
pieces
that
I'd
like
we
end
up
in
a
situation
where
we
don't
have
interoperability
in
those
cases,
if
you
have
a
deployment
that
wants
to
use
something
that
looks
a
little
bit
like
web
transport,
but
doesn't
have
datagrams
in
it.
E
That
is
possible
as
a
proprietary
protocol,
but
building
something
that
doesn't
have
datagrams
in
it
and
specifically
designing
to
allow
for
that.
Possibility
does
complicate
how
we
build
this
thing,
and
I
think
it's
a
complication
that
we
don't
necessarily
want
here.
Implementing,
datagrams
and
or
implementing
the
possibility
of
receiving
datagrams
from
someone
is
relatively
simple
to
do,
and
even
if
you
don't
plan
to
use
them,
and
and
all
you
do
is
throw
them
away,
then
that's
probably
something
that
that
you
could
possibly
do
in
that
context.
And
then
you
would
get
interoperability.
E
However,
building
something
that
says
well
datagrams
are
optional,
makes
it
very
much
more
difficult
for
those
of
us
who
are
building
to
this
sort
of
thing
and
have
to
talk
to
arbitrary
servers,
and
then
we
have
to
deal
with
the
possibility
that
maybe
datagrams
aren't
present.
We
have
to
think
about
how
to
move
things
on
capsules
and
all
sorts
of
other
things.
So
I
think
that's
nice,
but
I
don't
want
to
go
there.
A
Yeah,
David Schinazi, no hats, well, MASQUE enthusiast hat,
so
just
to
add
in
the
http
datagrams
document
we
say
that,
like
you,
you
must
support
receiving
datagrams
on
inside
capsules,
so
that's
kind
of
a
requirement.
Here
I
mean
at
the
end
of
the
day,
if
you
already
have
a
capsule
parser,
which
you
need
because
of
the
closed
web
transport
session.
Capsule
like
having
that
call
the
I
received
a
datagram
frame
function
is
pretty
trivial,
so
I
wouldn't
worry
about
that
too
much.
I
think
this
boils
down
to
do.
A
We
want
to
say
you
must
send
them
over
datagrams
if
they're
available
or
you
must
support
datagrams
like
at
the
end
of
the
day.
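The point just made about reuse can be sketched in code: a single capsule parser whose DATAGRAM branch feeds the same handler that native QUIC datagrams call. This is an illustrative sketch only; the capsule type code points below are placeholders, not the registered values.

```python
# Hypothetical capsule type code points, for illustration only.
CLOSE_WEBTRANSPORT_SESSION = 0x2843
DATAGRAM = 0x00

def read_varint(buf, pos):
    """Decode a QUIC-style variable-length integer at buf[pos]."""
    first = buf[pos]
    length = 1 << (first >> 6)          # 1, 2, 4, or 8 bytes
    value = first & 0x3F
    for b in buf[pos + 1:pos + length]:
        value = (value << 8) | b
    return value, pos + length

def parse_capsules(buf, on_datagram, on_close):
    """Walk a capsule stream; DATAGRAM payloads go to the same
    callback that native QUIC datagrams would invoke."""
    pos = 0
    while pos < len(buf):
        ctype, pos = read_varint(buf, pos)
        clen, pos = read_varint(buf, pos)
        payload = buf[pos:pos + clen]
        pos += clen
        if ctype == DATAGRAM:
            on_datagram(payload)   # same entry point as native datagrams
        elif ctype == CLOSE_WEBTRANSPORT_SESSION:
            on_close(payload)
        # unknown capsule types are skipped
```

Since a capsule parser is already required for the close-session capsule, wiring its DATAGRAM branch to the existing datagram handler is, as noted above, a small amount of code.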
L
Hi,
it's
luke,
so
I've
deployed
a
QUIC
stack
without
datagram
support,
you're
right,
it's
really
easy
and
it's
it's
kind
of
trivial
to
just
throw
them
away.
I
think
the
only
concern
is
maybe
capabilities
on
the
w3c
side.
I
think
there
was
a
slide
there
saying
datagrams
are
reliable
or
unreliable,
and
it's
kind
of
hard
to
tell
if
a
server
actually
supports
these
unreliable
datagrams.
If
it
just
lies,
it
just
says
I
support
them.
L
I
need
to
say
this
to
get
web
transport,
but
then
you
actually
try
to
send
them
and
it
doesn't
work.
So
there
might
still
be
a
use
case
there
to
say
that
the
I
yeah
I
don't
know.
I
can't
really
think
of
any
reason
why
you
can't
just
lie
about
it,
but
I
definitely
would
like
to
avoid
having
to
implement
anything
complicated
with
datagrams
there's,
just
no
reason
to
use
them
in
most
use
cases.
I
think.
Q
Or
we
say
you
don't
get
web
transport
if
you
don't
have
datagrams
and
it's
easier
to
implement
datagrams
and
throw
them
on
the
floor
than
it
is
to
consider
both,
and
I
think
that
second,
one
is
the
thing
that
we're
proposing
right
now,
but
that
would
be
a
great
thing
to
chime
in
with
on
the
actual
pull
request
for
the
stuff,
or
we
can
also
make
an
issue
if
we
want
to
have
a
continued
back
and
forth
but
like
if
you
have
a
a
implementation
where
it
really
would
be
a
burden,
it
would
be
good
to
talk
that
through
all
right,
capsule
protocol
stuff
is
nice
and
easy.
Q
The
first
one
is
fairly
easy,
so
we've
talked
about
having
a
setting
to
limit
the
number
of
sessions
that
you
can
have
and
if
we
go
to
the
next
slide,
this
ends
up
being
fairly
ergonomic.
It's
pretty
straightforward.
Q
We
say
instead
of
sending
a
flag
that
says,
settings
enable
web
transport,
you
just
send
settings
web
transport
max
sessions
and
if
you
set
it
to
zero
no
web
transport
for
you
today,
but
if
you
set
it
to
one,
you
have
web
transport,
you
have
no
pooling,
you
get
one
session
and
if
you
set
it
to
more
than
one
now,
you
have.
However
many
you
asked
for
so
completely
reasonable.
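The zero/one/many semantics just described can be stated as a tiny helper. The setting name is taken from the discussion; no numeric setting identifier is assumed here.

```python
def webtransport_capacity(max_sessions: int) -> str:
    """Interpret a received WEBTRANSPORT_MAX_SESSIONS setting value,
    following the zero / one / more-than-one cases described above."""
    if max_sessions == 0:
        return "no WebTransport on this connection"
    if max_sessions == 1:
        return "one session, no pooling"
    return f"up to {max_sessions} pooled sessions"
```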
Q
Q
But
if
you
have
multiple
web
transport
sessions
and
each
of
those
are
you
using
native
h3
streams,
it
would
be
very,
very
easy
for
my
first
session
to
use
the
entire
budget-
and
my
next
session
says
I'd
like
to
open
a
new
web
transport
stream.
And
the
answer
is
nice try.
So
this
is
a
way
for
within
a
web
transport
session,
that,
in
a
context
that
understands
that,
as
opposed
to
h3,
which
just
sees
lots
and
lots
of
very
equal
streams,
you
can
just
say
max
streams
and
use
that
same
capsule.
Q
S
Q
Yes,
so
there's
a
fun
caveat,
which
is
that,
if
essentially,
I
think
to
paraphrase
in
h3,
because
things
can
come
in
out
of
order
if
you've
potentially
closed
a
stream
before
the
stream
is
considered
to
be
opened,
you
lose
that
you
essentially
leak
that
credit,
and
our
answer
is
essentially
don't
do
that
and
we
don't
think
it
actually
is
going
to
kill
anything.
Q
Q
The
and
the
reason
for
that
is
because
you
can't
necessarily
tell
that
that
was
associated
with
that
session,
because
the
stream
is
gone
and
you're
not
gonna
when
the
the
frames
arrive
you're
having
a
bad
day,
and
so
I
think
we
we
discussed
wordsmithing
some
text
around
how
the
capsule
kind
of
has
to
be
paired
to
try
to
make
that
less
possible.
But
I
don't
know
that
we
ever
got
it
to
a
zero
percent chance.
K
K
Yes,
because
I'm
wondering
like
do
you
have
to
have
a
guarantee
that,
like
all
of
the
sessions,
some
max
streams
must
be
less
than
or
equal
to
the
connection
level
one
or
is
it
possible
in
a
valid
deployment
to
have
the
sum
exceed
and
like
yeah
have
a
higher
limit
and
like
does
that
mean
that
a
particular
session
does
not
have
the
guarantee
that
it
can
get
all
of
its
allowable
maximum
streams?.
Q
The
the
the
latter,
so
it's
it's
very
similar
to
what
happens
for,
for
you,
know,
connection
level,
data
limit
versus
stream
level.
Data
limit
is,
if
you
wanted
to
say,
hey,
you
know,
I'm
willing
to
use
I'm
willing
to
give
anybody
10
streams.
Q
I
could
say
you
know,
I'm
only
allowing
the
whole
connection
to
have
10
more
streams,
but
any
of
you
could
take
those
10
or
I
could
say
no,
no,
you
know
the
first.
Some
of
you
only
get
five
right.
S
S
The
the
problem
I
see
is
that
there's
a
race
condition
here,
like
let's
say
I
you
give
me
10
streams.
I
close.
I
have
this
10
streams
open.
I
close
five
of
them
and
open
five
new
streams
and
now
the
fin
bits
for
the
closed
streams
get
reordered.
Then
you
will
think
that
I
open
15
streams
and
you
will
give
me
a
protocol
violation.
I
would
assume.
Q
The
closing
before
being
opened,
is
essentially
like,
so
if
a
stream
gets
reset
before
you
knew
it
existed.
So,
if
I
say
hey,
I'm
resetting
the
stream
and
you
go.
Excuse
me
what
stream
very
often
in
at
least
several
implementations.
You
basically
say:
okay,
cool.
This
stream
is
dead
and
when
things
then
show
up
for
it
later,
you
basically
just
completely
discard
everything
to
do
with
it,
which,
without
careful
wording,
it
means
that
you're
also
discarding
the
information
that
told
you
which
web
transport
session
you
should
have
built
for
that
stream.
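The credit-leak hazard just described can be made concrete with a small accounting sketch (names and structure are illustrative, not from the draft): if a reset arrives for a stream the receiver never saw opened, the stream-to-session mapping is already gone, so the per-session stream credit cannot be returned.

```python
class SessionStreamAccounting:
    """Per-session stream-credit bookkeeping, showing how credit can
    leak when a reset arrives before the stream was known to exist."""

    def __init__(self, max_streams: int):
        self.max_streams = max_streams
        self.used = 0
        self.stream_to_session = {}   # stream_id -> session_id
        self.leaked = 0

    def on_stream_open(self, stream_id, session_id):
        if self.used >= self.max_streams:
            raise RuntimeError("session stream limit exceeded")
        self.used += 1
        self.stream_to_session[stream_id] = session_id

    def on_stream_closed(self, stream_id):
        session = self.stream_to_session.pop(stream_id, None)
        if session is None:
            # Reset-before-open: we never learned which session owned
            # this stream, so there is no one to refund the credit to.
            self.leaked += 1
        else:
            self.used -= 1
```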
T
Alan Frindell from Meta,
so
yeah,
I
think
what
martin
said
about
the
streams
that
are
closed,
gracefully
with
a
fin
bit.
Don't
have
this
problem,
but
the
ones
that
reset
yeah
could-
and
I
think
there
may
be
a
separate
issue
that
we'll
talk
about
in
the
h3
section,
but
I
think
the
leaking
it
is
bad
and
that
we
probably
need
a
reset
capsule,
which
would
be
reliable
to
make
sure
that
that
doesn't
happen.
T
So
you
would
in
h3
you
would
have
you
would
reset
the
stream
and
also
send
the
application
level
message
which
is
sort
of
like
the
way
things
work
in
qpack
and
that
would
go
on.
E
So
I'm
not
even
sure
that
we
need
this
capability.
Honestly,
there
is
always
the
possibility
that
you
can
have
the
the
bad
session
completely
overwhelmed
the
capacity
of
the
connection,
for
instance,
if
I
as
a
as
a
bad
website
or
just
one
that
didn't
know
what
they
were
doing,
were
to
create
multiple
web
transport
sessions
and
use,
lots
and
lots
of
streams
on
them,
it's
possible
that
you
could
exceed
the
available
streams
that
are
there
for
that
connection.
E
Maybe
you
forgot
to
close
them
right,
and
it
could
be
very
simple
like
that,
but
that's
the
sort
of
thing
that
we'll
have
to
deal
with
anyway,
because
the
number
of
streams
within
a
session
times
the
number
of
sessions
could
well
exceed
the
number
of
streams
the
entire
connection
could
could
have
anyway,
at
which
point
you
have
a
connection
that
is
entirely
consumed
by
all
of
the
web
transport
stuff,
and
you
have
no
means
of
doing
other
things
on
that
connection
like
making
a
simple
http
request,
for
instance,
or
making
a
new
session,
or
what
have
you?
E
Q
Well-
and
there
is
a
line
to
be
drawn
there,
so
sneak
peek.
The
next
slide
is
going
to
be
taking
all
of
the
data
limits
and
throwing
them
in
the
we'll
deal
with
this
later.
If
we
decide
we
actually
need
it.
It's
a
real
problem
bucket.
So
we
could
choose
to
do
that
for
this.
The
thing
you
say
about
you
know
hey.
I
want
to
send
non-web
transport
requests
on
this
h3
connection.
E
So
it
it's
always
going
to
be
the
case
that
you
can
exceed
it
unless
you
have
reserved
a
few
streams
for
the
purposes
of
making
other
requests,
which
a
browser
is
quite
capable
of
doing.
If
that's
what
we
want
to
do,
but
that's
a
lot
of
that's
going
to
depend
on
what
the
server
is
willing
to
allow
for.
So
if
the
server
only
gives
us
a
budget
of
three
streams,
then
we
don't
have
a
lot
of
options
available
to
us.
E
So
some
of
this
is
going
to
come
down
to
just
having
sensible
practices
on
on
servers
and
conveniently
the
people
who
are
writing
the
code
to
consume.
The
streams
also
have
some
degree
of
control
over
how
the
server
is
going
to
be
operating
here.
C
Yeah
one
observation
I
wanted
to
make
is
partially
that,
because
this
is
some
of
this
is
limiting.
So
the
situation
is
like
when
the
client
opens
too
many
streams,
so
the
browser
can
send
a
http
request.
Well,
browser
can
control
the
number
of
open
streams
on
top
of
what's
imposed
by
http
connections.
That is
to
say
the
browser
might
decide
that
you
only
get
32
streams
from
this
connection,
and
there
is
no
need
to
support
this
in
protocol
because
it's
all
local
to
the
browser.
Q
The
reason
you
would
need
to
support
that
in
the
protocol
is,
if
you
needed
to
have
them
explicitly,
communicate
about
it,
and
especially
if
you
wanted
to
let
the
application
have
input
onto
into
whether
or
not
that's
happening.
C
T
Alan
frindell,
I
think
the
concern
that
I
have
with
letting
the
browsers
just
decide
like
we're
going
to
reserve
some
streams
and
it
doesn't
need
to
be
communicated.
Is
that
then
servers
have
to
deal
with
browsers
that
have
different
limits,
or
maybe
you
decide
that
you're
not
going
to
have
the
limits
or
some
browsers
do
or
don't
so
that
it's
sort
of
inconvenient
just
better
to
be
able
to
like
have
some
guarantees,
but
I
think
also
to
victor's
point.
T
I
seem
to
remember
maybe
something
like
this
along
with
websockets,
where
chrome
has
some
limit
for
like
how
many
websocket
streams
you
can
have
in
an
h2
session
which
is
kind
of
similar.
So
I
don't
know
it
would
having
some
explicit
way
to
communicate
with
what
the
limits
are.
I
think
would
be
good
right.
S
Q
All
right,
this
is
the
part
where
we
declare
bankruptcy
for
bytes,
and
we
say
if
the
conversation
we
just
had
seems
kind
of
twisted
and
a
little
bit
complicated
when
you
start
having
the
same
type
of
conversation
but
for
byte
limits.
It
gets
way
worse.
Q
So
we're
going
to
say
that
at
least
for
those
bytes
you
have
the
ability
for
any
h3
stream,
and
obviously
a
lot
of
this
also
applies
to
h2,
but
specifically
for
h3
for
any
h3
stream.
You
already
have
flow
control.
You
already
can
use
it.
You
already
screw
it
up.
Sometimes,
let's
not
make
it
any
more
complicated.
Q
You
can
reserve
practically
pretty
much
what
you
would
actually
need,
and
so,
if
we
discover
a
need
for
some
additional
signaling
beyond
what
you
can
already
do
in
h3
across
you
know,
connection
and
then
stream
limits
and
to
the
point
of
you
know,
can
the
sum
of
the
stream
limits
be
larger
than
the
connection
like
absolutely
yes,
and
that
can
lead
to
all
sorts
of
interesting
strategies,
we're
not
going
to
add
any
additional
complexity
there.
Q
If
we
end
up
needing
some
kind
of
thing
that
we'd
actually
want
to
signal
about
that,
we
can
certainly
add
it
later.
It's
not
super
hard
to
add
capsules
and
extend
things
by
making
that
work,
but
we're
gonna
propose
not
doing
that.
So
the
number
of
streams
we
said
was:
this
is
a
thing
that
you
cannot
necessarily
do
otherwise
and
it'd
be
interesting.
Q
If
we
can
chime
in
with
some
some
clear
text
on
how
we
would
explain
that
browsers,
should
you
know,
do
a
sensible
default
there
or
do
concurrent
stream
count
or
pick
some
other
strategy.
That
would
be
excellent,
but
we
said
it
was
worth
biting
off
a
little
bit
of
this
complexity
for
streams,
because
that's
something
that
you
don't
necessarily
have
good
control
over
otherwise,
but
for
bytes
you
have
lots
of
knobs.
We
have
yet
to
prove
that
we
can
use
those
knobs
successfully
in
every
case.
Q
Excellent
next
slide.
Please
intermediaries
make
the
entire
conversation.
We
just
had
a
lot
more
complicated.
That
is
also
potentially
a
reason
to
have
a
little
bit
more
explicit
signaling
if
we
need
to
distribute
some
of
that.
So
this
is
a
place
where
we
have
a
split
between
a
way
to
conceptualize.
What's
going
on
and
the
thing
you
actually
need
to
do
when
you
write
your
code
so
conceptually
the
proposal
here
is
that
and
it's
less
of
a
proposal
and
more
of
a
reality,
flow
control
is
terminated
at
an
intermediary.
Q
So
when
I
have
my
h3
connection
that
is
terminated
by
somebody,
who's
then
going
to
talk
upstream
of
that
via
h3
or
h2.
A
lot
of
those
flow
control
limits
are
actually
terminated,
especially
if
they're
translating
between
h3
and
h2,
but
even
if
they're,
just
sending
h3
to
h3
or
h2
to
h2
that
intermediary
could
choose
to
allow
someone
to
send
it
more
than
it
was.
Then
it's
allowed
to
send
upstream
and
vice
versa.
Q
S
Q
So
these
lovely
fast
forward,
symbols
that
you
see
are
actually
double-ended
arrows
between
these
different
boxes.
Q
And
if
we
skip
to
the
next
slide,
since
I
think
building
this
in
in
segments
is
not
necessarily
going
to
help
much,
we
got
more
numbers
here
that
refers
to
a
thing,
that's
on
the
left
somewhere
conceptually.
What
this
is
saying
is
that
if
you're
an
intermediary
and
somebody's
saying
hey,
you
can
send
me
100
bytes,
you
probably
want
to
be
very
careful
before
you
tell
the
person
sending
you
stuff
that
they
can
send
you
more
than
that
100
bytes,
which
should
be
pretty
straightforward.
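One conservative reading of the "be very careful" rule above, as a sketch rather than normative text: an intermediary that terminates flow control should never advertise more credit downstream than it can absorb itself, for instance the smaller of its own buffer space and whatever credit its upstream has granted.

```python
def safe_advertised_window(local_buffer_bytes: int,
                           upstream_credit_bytes: int) -> int:
    """Credit an intermediary can safely offer the sender: anything it
    accepts it must be able to hold, whether or not the upstream hop
    is currently willing to take it."""
    return min(local_buffer_bytes, upstream_credit_bytes)
```

This is per-hop accounting, matching the later point in the discussion that flow control here is a hop-by-hop resource question, not an end-to-end signal.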
Q
Did
you
just
join
the
queue
yeah?
Let's
do
it.
S
How
does
that
work?
Because
the
client
establishes
the
connection
to
the
intermediary
first
and
during
the
QUIC
handshake
you
communicate
these
limits.
The
initial.
Q
Limits?
Yes,
yes
right,
so
you
need
some
sensible
set
of
initial
limits,
but
essentially
as
you're
going
to
increase
those
limits.
You
need
to
be
careful,
but
yes
you,
you
could
be
stuck
in
a
situation
where,
when
you
establish
your
upstream
connection,
it
says
my
initial
limit
is
50
and
you'd.
Already
advertised
100
and
you're
gonna
have
to
deal
with
that.
E
So,
let's
try
to
be
a
little
bit
more
pragmatic
about
this
sort
of
thing.
This
is
going
to
be
a
gateway
sitting
in
front
of
a
bunch
of
servers
and
a
lot
of
cases.
The
gateway's
going
to
know
something
about
those
servers.
Now,
whether
that's
based
on
the
fact
that
it's
already
talked
to
those
servers
in
the
past
or
because
they're
actually
operated
by
the
same
people,
and
they
run
off
the
same
configuration
as
largely
a
material.
E
E
I
was
just
thinking,
there's
interesting
complications
here
when
you
talk
about
having QUIC on both sides of the intermediary, and when you have QUIC and TCP on different sides. With QUIC:
If
you
get
an
out
of
order
piece
of
stream
information,
you
just
forward
it
on
and
sort
of,
say.
Oh,
this
is
just
stuff
that
you'll
need
to
deal
with
in
the
future.
That's easy. With TCP, with head-of-line blocking, you have to wait for everything; you have to buffer things up.
E
So
ultimately
the
intermediary
can't
sort
of
blindly
forward
those
things
on
in
the
in
the
tcp
context,
because
it
does
need
to
have
all
of
the
space
that
that
it
advertises
available
for
buffering.
Otherwise
it
could
end
up
in
a
situation
where
it
it
has
data
that
it
said
it
could
take,
but
it
couldn't
right.
Yeah.
Q
And
I,
I
think,
that's
a
really
good
point
and
kind
of
underscores
the
the
idea
that
conceptually
you
are
terminating
that
flow
control.
You
you
are
responsible
for
whatever
you
choose
to
advertise
and
the
fact
that
in
many
cases
it
is
fairly
straightforward
to
send
that
through
is
okay
but
the
underlying
reality.
You
like
you,
can't
just
completely
ignore
that.
L
Hi
luke
from
twitch,
so
flow
control
is
usually
based
on.
Like
I
have
limited
ram,
I
think
the
assumption
here
the
intermediary
is
like
we've,
just
got
big
beefy
servers
and
they
can
have
as
much
ram
as
the
client
of
the
server,
but
I
mean
exactly
like
it
sounds
like
it's
a
poor
decision
to
just
forward
flow
control
if
you're
running
a
Raspberry Pi
or
something
like
there's
going
to
be
congestion.
All
of
a
sudden,
you
advertised
you
know
a
gigabyte
of
ram
available,
but
you
didn't
have
that
right.
L
So
I'm
not
sure
it's
actually
a
good
idea
to
ever
forward
flow
control,
and
I
don't
think
it's
an
end-to-end
thing.
I
think
it's
literally
just
I
just
have
this
much
ram
availability,
each
hop.
Q
Yes,
well
and
martin
had
also
made
a
good
point
around.
If
you're,
translating
between
h3
and
h2,
like
h2
to
h2
is
pretty
straightforward.
H3
to
h3
is
pretty
straightforward,
plus
some
extra
ordering
fun,
but
at
the
end
of
the
day,
we're
not
defining
a.
F
Q
Signaling
mechanism
for
this
in
the
spec,
so
if
you've
got
a
bunch
of
big
beefy
servers,
that's
awesome.
Other
people
may
not
have
a
bunch
of
big
beefy
servers,
that's
cool
too.
I
think
what
we're
trying
to
do
is
provide
enough
guidance
that
we're
giving
a
heads
up
as
to
some
of
the
pitfalls
and
the
things
you
need
to
be
careful
with,
as
you
choose
to
do
this.
Q
But
what
what
your
intermediary
chooses
to
do
with
web
transport
is
not
something
like
we're,
not
defining
additional
signaling
about
it
and
we're
not
really
putting
any
requirements
on
it
either.
So,
if
you've
got
a
raspberry
pi-
and
you
want
to
be
super
careful-
and
you
want
to
manage
it
completely
on
your
own
and
not
have
any
signal
from
upstream
like
go
downstream-
that's
totally
cool.
L
N
Yeah, Ian Swett, Google.
I
would
agree
that
yeah
thinking
about
this,
as
end-to-end
is,
is
probably
just
not
gonna
work,
but
the
good
part
is
that,
like
intermediaries
that
terminate
like
h2
and
h3
already
deal
with
this
problem,
and
so
like
I'm,
not
really
sure
you
really
need
to
say
anything
at
all.
I
will
call
it
one
note
for
your
example.
The
intermediary
to
the
server
could
have
like
an
incredibly
small
rtt
like
in
the
order
of
a
millisecond
or
less.
It
is
not
uncommon.
N
The
client
would
have
an
rtt
that
is
like
two
orders
of
magnitude
larger.
As
a
result,
the
bdp
between
the
client
and
the
intermediary
is
fairly
often
going
to
be
probably
at
least
an
order
of
magnitude
larger
than
the
server
to
the
intermediary.
So
unless
you're
going
to
give
a
bunch
of
information
to
the
server
about
the
client
and
that
bdp,
even
trying
to
do
end
to
end
is
going
to
hose
you
because,
like
you're,
going
to
be
sending
far
too
little
flow
control
from
server
to.
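Ian's asymmetry point is easy to see with rough, purely illustrative numbers: the client-side bandwidth-delay product can dwarf the server-side one, so a window sized for the short hop would starve the long one.

```python
def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep a path full."""
    return bandwidth_bits_per_s * rtt_s / 8

# Illustrative values: same bandwidth, RTTs two orders of magnitude apart.
client_hop = bdp_bytes(100e6, 0.100)  # 100 Mbps at 100 ms RTT -> 1,250,000 bytes
server_hop = bdp_bytes(100e6, 0.001)  # 100 Mbps at   1 ms RTT ->    12,500 bytes
# A flow-control window sized for the intermediary-to-server hop would
# cover only 1% of what the client-to-intermediary hop needs.
```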
N
Q
E
So
what
I'm
getting
from
this
conversation
is
that
building
an
intermediary
could
be
hard,
but
people
do
it
anyway
and
have
done
so
successfully
for
some
amount
of
time,
and
it
might
be
the
case
that
trying
to
find
the
guidance
that
you're
looking
to
put
in
here
is
subtle
and
difficult
enough
that
maybe
we
shouldn't
even
bother.
Maybe
we
should
simply
say
intermediaries
exist
and
that's
it
something
very,
very
simple
and
anodyne.
E
Basically,
I
don't
think
there's
much
so
that
we
benefit
from
from
trying
to
explore
all
the
various
ways
in
which
you
might
implement
an
intermediary
under
the
varying
conditions
that
ian's
talking
about
because
yeah.
That's
that's
why
people
building
intermediaries
still
continue
to
have
job
security.
I
think
so.
The
the.
Q
Just
for
for
clarity
that
the
current
proposal
is
we're
saying
this
is
essentially
hop-by-hop
if
you
commit
to
it
like
you're
the
one
left
holding
that
bag.
That's
up
to
you
and
any
other
text
we
choose
to
put
on
top
of
which
we
have
proposed
very
little
right
now.
If
we,
if
we
want
to
describe
something
that
helps
people
and
lays
out
some
of
the
here,
are
common
pitfalls
and
things
you
might
want
to
think
about.
Q
That's
totally
cool,
but
the
in
terms
of
our
like
actual
pull
request
for
this
stuff.
The
only
hard
line
statement
that
we're
making
is.
This
is
not
an
end
to
end
concept
like
if
you
advertise
something
that's
higher
than
what
your
upstream
could
do
like
you
got
to
deal
with
that.
That's
on
you.
O
Cullen Jennings,
I
I
mean
I
I
anytime,
I
was
just
sort
of
reacting
a
little
bit
to
martin's.
You
know
like
any
time
we try to wish intermediaries away.
We
ten
years
later
deeply
regret
having
done
that
right,
but
I
think
your
statement
that
you
have
what
I
read
in
the
draft
of
you're,
not
wishing
them
away
at
all,
you're
saying
very
hardcore.
You
know
you
have
to
fully
be
you
know
whatever
you
advertise
you
have
to
provide,
and
that
means
you're
a
full
sbc
in
the
sip
sense
or
a
full.
O
You
know,
I
think,
that's
a
great
way
to
deal with it.
In
fact,
I
think
it's
the
only
practical
way
to
deal
with
intermediary
problems,
but
I
think
you
should
claim
you
are
dealing
with
intermediaries,
and
this
is
the
answer.
Not
we're
sort
of
you
figure
it
out
yourself,
because
the
figure
it
out
yourself.
It
leads
to
bad
results.
Later,
thanks
beautiful.
Q
All
right,
so,
if
we
summarize
what
we've
talked
about,
we
are
proposing
that
h2
should
use
capsules.
We
are
saying
that
h3
should
use
capsules
and
share
with
h2,
where
appropriate,
which
is
actually
a
reasonably
small
list.
Our
main
reason
for
using
capsules
is
because
the
frames
look
exactly
the
same,
but
now
they're
in
a
shared
list,
and
we
can
reuse
them
between
protocols.
Q
We're
saying
that
capsules
will
always
use
native
features
if
possible,
and
I
think
we
may
want
to
split
out
a
specific
github
issue.
Even
just
so.
We
can
write
down
some
of
our
conversation
around
what
happens
if
datagrams
aren't
there
and
and
how
we
are
going
to
maybe
have
text
that
takes a
strong
stance
on
that.
If
that's
what
we
want
to
do,.
A
Eric
can
you
take
the
action
item
of
filing
that
issue
yep.
Thank
you.
Q
So,
instead
of
it
being
you
know,
zero
or
one,
you
can
now
have
zero
one
or
more
than
one
and
the
last
one
is
we're
proposing
that
h3
gets
a
stream
count
limit
within
a
session,
but
I
will
actually
split
out
a
similar
issue
for
that,
where
we
can
make
sure
that
we've
fully
written
down
everything
we
need
to
for
that
stuff,
and
if
we
get
to
the
end
of
that
issue-
and
we
say
you
know
what
flow
control
is
not
the
thing
we
were
trying
to
solve
when
we
talked
about
capsules,
that
is
totally
okay.
Q
Nobody
will
be
sad
with
less
of
that
all
right,
so
just
process
wise.
We
have
two
pull
requests
for
this
they're
gonna
move
around
a
bit
in
github
and
stuff,
so
I
will
send
them
out
with
links
to
the
list,
so
keep
an
eye
out
for
that
and
if
you
can
come
in
and
read
a
lot
of
that,
and
especially
if
your
reading
of
them
does
not
give
you
the
same
impression
as
the
words
that
we
all
just
said,
that'd
be
really
cool
to
call
out,
but
yeah.
Q
A
All
right,
thank
you
very
much.
Eric
any
last
questions
for
before
we
move
on
okay,
so
process
wise
eric
will
send
out
this
email,
and
then
the
chairs
will
turn
that
into
a
formal
consensus.
Call
on
those
pr's,
since
this
is
a
non-trivial
change
to
how
h2
works
and
and
then,
while
assuming
that
goes
through,
we'll
have
a
set
design
going
forward.
All.
H
A
L
C
Victor, editor for the h3 spec. The h3 spec is hopefully approaching the stage where it's almost done,
so
today
we're
going
to
go
over
the
some
of
the
remaining
issues.
Next
slide,
so
update
since
last
meeting.
First
of
all,
for
the
overview
draft
we've
merged
the
pr
that
defines
the
common
operations
that
any
web
transport
should
provide.
C
This
is
meant
to
be
as
a
layer
of
abstraction
on
top
of
web
transport
over
h3
or
transport
over
h2
and
whatever
else,
and
this
is
mostly
useful
for
people
who
edit
w3c
spec,
but
everyone
is
encouraged
to
read
the
updated
version
next
slide.
C
For
web
transport
over h3, the changes have been mostly minor. The notable one is, since last meeting, as we decided, we clarified what happens once a GOAWAY frame is sent on an h3 connection
and
added
some
missing
details
about
how
exactly
you
tear
down
the
transport
session.
So
on
the
next
slide,
we
have
some
of
we
have
about
10
remaining
issues
if
the
roughly
five
of
those
are
either
editorial
or
just
near
the
pr.
C
So
the
issues we are still discussing: for h3,
we
currently
do
not
define
what
we
do
with
http
redirects.
The
rfc
9205
says
that
we
have
to
provide
explicit
guidance
on
what
to
do
with
this.
Our
current
behavior
in
the
web
browser
is
that
we
explicitly
do
not
handle
them,
as
in
there
is
no automatic
redirect
support,
but
we
need
some
normative
text in the
draft,
so
do
people
have
opinions
on
what
should
be
there.
C
Oh
as
an
individual,
an
implementer,
I
err
on
should
not
we've
definitely
from
what
I
understand
have
run into
a
bunch
of
implementation
issues
with
redirects
in
websockets-
and
there
are
some
rough
edges
around.
C
The
fact
that
those
aren't
really
http
requests,
oh,
and
what
does
it
mean?
What
is
the
difference
between
a redirect?
I
would
moderately
prefer
should
not,
as
in
you
could
follow,
but
we
will
not
normally
follow
and.
E
So
this
this
advice
that
we've
got
is
not
actually
very
helpful,
advice,
I'm
afraid,
and
so
when,
when
you
think
about
using
something
like
fetch,
you
will
normally.
You
would
normally
expect
to
have
the
redirects
followed
until
the
point
that
you
get
something
that
requires
action
on
the
part
of
the
the
thing
following
the
redirect
here.
I
think.
E
Because
browsers
work
following
redirects
generally,
I
would.
I
would
hope
that
we
can
follow
redirects
here
as
well,
simply
be
mainly
because
that's
just
how
everything
else
works,
but
also
because
there
is
value
in
having
redirects
in
terms
of
being
able
to
put
resources
on
different
servers
for
deployment
reasons
or
being
able
to
move
things
around
when
people
are
given
a
url
for
something
and
they
find
that
that
that
needs
to
move
somewhere
else.
So
I
would
be
on
the
must
end.
E
Anything
in
the
should
may
space
here
is
awful,
because
it
means
that
you
have
no
determinism,
you,
you
don't
know
who's
who's
following
and
who's
who's,
not
if
we
can
find
a
set
of
reasons
why
you
might
not
follow
a
redirect.
E
That
would
be
interesting,
but
I
would
probably
er
toward
the
the
must
end
on
this.
One.
P
R
R
L
Hi
luke
here
so
just
like
martin
said
it
should
be
a
must
or
must
not
as
a
user.
I,
if
I'm
going
to
use
a
redirect
feature
on
my
server,
I
need
to
know
if
the
browser
is
going
to
do
it.
Otherwise
I
could
just
do
it
through
some
other
mechanism.
So
if
it
does
support
redirects,
that's
one
more
tool
to
my
toolbox.
If
it
doesn't,
I
can
just
do
redirects
via,
like
some
other
endpoint,
so
I
think
either
way
just
one
of
the
musts.
O
I
was
getting
colin
jones,
I
was
getting
up
to
say
what
luke
said:
it's
got
to
be
must or
must
not
absolutely
mandatory
has
to
be
one
of
those
two,
but
I
totally
assumed
it
was
a
must.
It
never
occurred
to
me
in
any
way
whatsoever
that
it
wouldn't
be,
and
I
think
that
that's
probably
what
most
implementers
using
this
are
going
to
assume.
E
Yeah,
so
to
to
mike's
point,
I
think
part
of
the
problem
we're
having
here
is
that
the
the
model
that
we're
using
for
connect
here
is
somewhat
different
than
the
model
that
you
might
imagine
for
a
classical
http
proxy
connect
where
there's
a
target
that
isn't
really
a
target,
because
there's
no
resource
involved
in
any
of
any
of
the
connect
stuff
classically.
E
Here
we
have
a
resource,
we're
making
an
http
request
to
a
particular
resource,
and
the
effect
of
that
request
is
to
establish
a
web
transport
session
to
that
resource
and
so
having
a
redirect
here
makes
a
a
great
deal
of
sense,
because
it
does
fit
much
more
within
the
http
model
of
resources
and
redirects
and
all
those
sorts
of
other
things.
So
I
think
that's
why
I
lean
toward
the
must
here.
E
More
than
anything
else,
it
doesn't
make
a
lot
of
sense
to
have
a
redirect
for
a
connect
you're
right,
because
there's
that
that's
bizarre
but
connect
is
weird
in
its
native
form,
and
this
is
what
we're
building
here
is
much
less
weird.
It's
still
a
little
bit
weird,
but
it's
much
much
less
weird.
So
I
I
think,
must.
M
C
S
A
So
victor
said
that
he
believes
that
some
people
who
would
object,
are
not
in
the
room
and
he
wants
to
continue
discussion.
So
that
makes
sense.
We
don't
have
consensus
here,
we'll
keep
discussing
on
the
list
and
I'm
going
to
give
victor
an
action
item
to
get
those
folks
to
chime
in
because
you,
I
suppose
you
know
who
to
contact
there
yeah
all
right.
Thank
you.
C
Stream
frame
ordering,
so the way we do unidirectional streams is we define a specific stream type,
so
there
is
a
stream
type,
and
then
there
is
the
web
transport
session
id
and
then
there
is
payload
for
bi-directional
streams.
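The unidirectional-stream layout just described (stream type, then session ID, then payload) can be sketched with QUIC-style varints. The stream-type code point below is a placeholder for illustration, not the registered value.

```python
WEBTRANSPORT_UNI_STREAM_TYPE = 0x21  # placeholder code point, not the real one

def read_varint(buf, pos):
    """Decode a QUIC-style variable-length integer at buf[pos]."""
    first = buf[pos]
    length = 1 << (first >> 6)          # 1, 2, 4, or 8 bytes
    value = first & 0x3F
    for b in buf[pos + 1:pos + length]:
        value = (value << 8) | b
    return value, pos + length

def parse_uni_stream(buf):
    """Split a WebTransport unidirectional stream into (session_id, payload)."""
    stream_type, pos = read_varint(buf, 0)
    if stream_type != WEBTRANSPORT_UNI_STREAM_TYPE:
        raise ValueError("not a WebTransport unidirectional stream")
    session_id, pos = read_varint(buf, pos)
    return session_id, buf[pos:]
```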
C
E
E
E
A
E
For
later,
I'm
going
to
say
everything,
I
think
I
think
you
want
to
have
a
dis
disposition
frame.
This
basically
establishes
what
the
stream
is
and
will
ultimately
determine
what
extensions
are
are
available
or
not.
Now
we
may
regret
that,
at
which
point
we
can
revise
this
specification,
but
I'm
fairly
confident
that
when
you
have
a
disposition,
thing
that's
definitive
and
you
want
to
have
that.
T
Alan
Frindell,
so
my
first
opinion
is
they
should
we
should
they
should
be
the
same
like
whatever
we
do
for
bi-directional.
We
should
do
for
unidirectional.
I
think
the
point
that
martin
was
making
about
well.
If
we
did
this,
then
people
might
assume
that
it's
different
or
have
buggy
logic.
I'm
not
sure
I
totally
buy
that
like.
T
If the specification says that a WebTransport stream is a series of frames followed by a frame which begins the unframed part, then people will write parsers that handle that; and if they don't, they're not following the specification. And, I don't know, we can't make people follow specifications, I guess. But in terms of use cases, I don't have a super compelling one either, so I'm not going to lie down in the road here. But, you know, I think the issue mentions either GREASE or potentially priority.
T
I think the ability to have extensions in the future is easier if we say there's a series of frames followed by the beginning of unframed data; otherwise you'd have to have a different kind of WebTransport session frame in the future to support that. So anyway, but I'm not super passionate.
L
Luke here, quick question: do we have similar wording for the HEADERS frame? Because I think Martin brought up a point there, that a client might assume the HEADERS frame is first. But is that prohibited already, or is that just left open? Right, because we should probably just follow what HTTP/3 does, and if it leaves it open, then we can leave it open.
R
Yeah, so it's a little more complicated than that, because HTTP/3 does allow for the possibility of other frames being introduced in the future. What HTTP/3 does is this: when the spec says that a certain frame must be first, for example the SETTINGS frame on the control stream, then that exact frame must be first, with nothing else before it. But when we're talking about the ordering of HEADERS and DATA on the request stream, it's all about sequence: the headers must come before you see any data.
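The two rules Mike contrasts can be sketched as follows. The frame-type codepoints (DATA 0x00, HEADERS 0x01, SETTINGS 0x04) are from RFC 9114; the check functions themselves are just illustrative.

```python
DATA, HEADERS, SETTINGS = 0x00, 0x01, 0x04  # RFC 9114 frame types


def control_stream_ok(frame_types):
    """'Must be first' rule: SETTINGS is literally the first frame on
    the control stream, with nothing at all before it."""
    return len(frame_types) > 0 and frame_types[0] == SETTINGS


def request_stream_ok(frame_types):
    """'Sequence' rule: HEADERS must come before any DATA, but unknown
    (e.g. reserved/grease) frame types in between are simply ignored."""
    seen_headers = False
    for ft in frame_types:
        if ft == HEADERS:
            seen_headers = True
        elif ft == DATA and not seen_headers:
            return False
    return True
```

The difference matters for extensibility: the sequence rule tolerates new frame types appearing on the stream, while the must-be-first rule does not.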
E
So I think the challenge that we're facing here is that HTTP/3 assumes very strongly that the bidirectional streams that are established are for the purposes of requests.
E
We're messing with that assumption, because an endpoint that's implementing under the same assumption will look at things and see: oh, this is an arbitrary frame that I don't understand, this must be a request stream. Because, of course, you could add new frame types as extensions without any prior negotiation. So if we allow other things, we're essentially punting it across to the HTTP/3 assumption.
E
C
M
T
I want to say that there's a way to have what Martin is suggesting, which is: you have a frame up front, the first thing, that says "this is a WebTransport session", but it's a frame which does not start unframedness; and then you have potentially more frames, and then you have another frame which says, okay, now we're starting the unframed part of the stream. So you could do it that way, but I think you could also...
T
C
Yeah, I think the key point is we don't actually have a compelling use case for frames in WebTransport data streams. And I believe that if we require the WebTransport session frame to be in front, that would actually simplify implementations, including the one in our code. So there are some practical advantages to that.
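That claimed simplification can be made concrete: if the session-binding frame is required to be the very first thing on the stream, the receiver's dispatch logic collapses to a single comparison. A minimal sketch, where the codepoint 0x41 is an assumption to be checked against the current draft:

```python
WEBTRANSPORT_STREAM = 0x41  # assumed codepoint; check the current draft


def dispatch_bidi_stream(first_frame_type, session_id):
    """With a 'must be first' rule, one look at the first frame type
    settles the stream's fate for its whole lifetime."""
    if first_frame_type == WEBTRANSPORT_STREAM:
        # Everything after the frame header is unframed session data.
        return ("webtransport", session_id)
    # Otherwise this is an ordinary HTTP/3 request stream.
    return ("http3-request", None)
```

Without the rule, the receiver instead has to buffer and skip an arbitrary prefix of frames before it can classify the stream.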
C
C
So the problem here is what happens once that occurs. Let's say we do that as a client: on the server, there is a stream that is half-open, and it's now in a state where it's not clear what's supposed to happen to it, because the client has reset it. And the issue is what happens to the other side of the stream, and I think my answer is...
C
R
Mike Bishop. So HTTP/3 has an error that basically boils down to "I didn't see enough of your request to act on it; go away". So in this case it's very similar.
Q
Eric Kinnear, Apple. So this also could be solved by the thing that I think Alan was talking about, if we wanted to have an explicit message that goes on the control stream that says, like, "hey, you need to bill me for this one". This is essentially the same problem, right: you're saying, "hey, I've got this stream, it's hanging out, and I don't even know if it was WebTransport."
Q
And if it is, I don't know which WebTransport session it was. What the heck, right? So there is an opportunity to have a way to kind of catch up with that, to have it resolve those lingering inconsistencies and leaks of things. It's not trivial, I think, to write text that's good for that, but it would solve this and the other problem, so maybe we're approaching the point where it's worth doing. ("Yeah.")
E
A
Right. MT, do you have a reason to jump the queue here? ("Yeah.")
A
S
E
T
T
Yeah, so I'm a little bit concerned, if that's the only thing, whether the server will have a chance to send the "oh, you sent me this stream and I don't know what it was". Also, what if that was a unidirectional stream, how would you even... ("Well, this is only for bidirectional.") Well, this is for both.
A
A
T
Okay, yeah. And so, I'll admit, yeah, like I said, as we mentioned earlier, this is how QPACK does stuff: it just puts the stream number on the other stream, like, "hey, this thing got reset", so the accounting can be taken care of. It's a little bit not wonderful that there's this actual QUIC stream ID floating around in QPACK space, but, you know, that's just how we did it.
A
So, and correct me if I'm wrong, Marten Seemann, since you filed the issue: this doesn't sound specific to WebTransport to me. It's a bidirectional stream, so in HTTP/3 already, if the client opens a bidirectional stream and resets it, the server could in theory keep it open forever.
A
But that would be silly, I'm pretty sure. I mean, I should check what our implementation does, but the correct thing to do is go, "whatever, kill this thing". Otherwise that sounds like a resource-exhaustion attack.
E
Now, if we decide not to do any stream limiting and whatever else, we don't need to deal with that particular problem. But it does create, on the server side, a stream that the server can send on, and it needs to know what to do with that thing. It might want to send on it, because, I don't know, maybe this is the protocol you develop: you create a stream, reset your end, and then expect the other end to do something with it.
E
I don't know. So I think Eric's suggestion was perfectly good here. We need... we could...
E
We can resolve the problem here whether or not we want to do the stream-level accounting stuff; we can still solve it that way, and it's probably still worth doing exactly what Alan was suggesting, as we did with QPACK: just say, "oh, by the way, I reset this thing". Resets are fairly uncommon. We'll still need to send the resets, because that's how QUIC expects us to behave, but having a message saying "oh, by the way, this stream was created and reset, now you can connect it up with your session" is probably something that's worthwhile having, just so that everything runs neatly and everything can be accounted for properly.
I
Yeah, we have about six more minutes and five slides, so we might want to limit the queue from here on out, I think.
T
I'll be quick, yeah. I just think what Martin said, or someone just said, reminded me: what would happen if all the server received is a reset, and then the server sent, say, an HTTP error page onto a WebTransport stream on the other side? That might be very unexpected or do very weird things. So that's probably not what we want.
C
Okay, so I think this is the last of the actual issues: we do not actually describe how we expect the WebTransport session to be closed. There have been a couple of proposals floating around. One of them is: you send a close capsule, you send the FIN, and then at some point the peer responds with a FIN. The other is like: you send the capsule and the FIN and a STOP_SENDING.
C
Oh, I see, that's a typo, yes. Either of those would require you to have a timer to eventually tear the entire thing down if the peer does not respond.
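Either proposal needs that timer. A minimal sketch of the draining state machine being described, where the capsule name and the timeout value are illustrative rather than from the draft:

```python
class CloseHandshake:
    """Track one session's close: send a close capsule plus FIN, then
    wait a bounded time for the peer's FIN before resetting."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.state = "open"
        self.sent_close_at = None

    def send_close(self, now):
        """Initiate the close: emit the capsule and FIN, arm the timer."""
        self.state = "draining"
        self.sent_close_at = now
        return ["CLOSE capsule", "FIN"]

    def on_peer_fin(self):
        """Peer answered with its own FIN: the session closed cleanly."""
        if self.state == "draining":
            self.state = "closed"

    def on_timer(self, now):
        """The timer mentioned above: if the peer never responds, tear
        the whole thing down instead of waiting forever."""
        if self.state == "draining" and now - self.sent_close_at >= self.timeout:
            self.state = "reset"
            return "RESET_STREAM"
        return None
```

The timer only fires while draining, so a session that the peer closes cleanly never gets reset.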
C
C
Oh, I suspect that when you have multiple sessions, you can wait, and that just leads to resource exhaustion if the FIN in response never comes.
T
C
Yes, because we have no idea what the application wants. Maybe the application wants to push notifications once every five minutes.
T
A
A
M
E
It's not that simple. Okay, yeah, so I think there's...
E
I think there is value in having the ability to do it either way here, in fact. So I think there's value in sending a capsule saying why the connection closed, and I think either side should be able to send that in this circumstance. So I think this is neither one nor two: either side can simply send a capsule and a FIN explaining what's going on, independent of each other.
E
Now, the question is how you respond to seeing that, and I think either one or two is an option here. Because in the case where I want to walk away, and I just say I'm not going to pay any attention to what you send from this point onwards, STOP_SENDING is perfectly acceptable in that scenario. But you may be interested in...
G
E
...what the other side has to say as well. Or at least you can't stop them from sending something with a capsule saying that they wanted to go away as well. So I think all the possible options are available to anyone, and I don't see why we have to pick one.
A
A
C
One very specific thing: there are three PRs. Everyone, please read them. I guess that's all I want to say.
D
A
You're all good, thanks, Mary. So, everyone seems okay with the output of the capsule design team. That's great; that was blocking quite a few other issues that we're going to be able to make progress on. We got a resolution on quite a few other issues on HTTP/3, so that was very productive. We do have a few other issues that we haven't gone through, and PRs that need to be reviewed, so please take some time to do that.
A
The editors will spend some time... like, we'll be discussing the capsule design team PRs on the list. Once those are merged, we'll have more discussions on the issues that were blocked on that resolution, and we'll see; maybe we'll have an interim before...
C
I
A
Yeah, let's... my...
A
...is: let's get the output of the design team merged, assuming there's consensus for that, and see if we can progress the other issues on GitHub. And if we feel like a face-to-face, you know, probably virtual, conversation would help, we'll organize an interim. So thanks, everyone, for coming, and see you all soon on the list.