From YouTube: IETF113-WEBTRANS-20220324-1330
Description
WEBTRANS meeting session at IETF113
2022/03/24 1330
https://datatracker.ietf.org/meeting/113/proceedings/
D: This is our first in-person meeting since we formed the working group in March of 2020, because we had an in-person BoF and then we were virtual. So now you know how tall some of you are, and we're looking forward to seeing more folks at, hopefully, the coming IETFs. We still have a lot of folks remote.

D: Oh yeah — and so my name is David Schinazi, I'm one of our chairs, and our other chair, Bernard, is here remote. Hello — hi, Bernard. All right, the audio seems to be working well. So, as a quick reminder, this session is being recorded.
D: The usual tips for remote meetings haven't really changed, but things have changed for the in-person folks since we were last in person. In particular, if you want to join the mic line, you need to do it in the online version; otherwise you'll just be standing at the mic stretching your legs. You can use either the onsite or the remote version — they both work for that.
D
And
yeah,
if
you
are
in
the
room
like
please,
join
it
as
well,
because
that's
the
only
way
we
populate
the
blue
sheets,
which
is
important
and
then
the
buttons
for
meat
echo
are
roughly
the
same.
You
press
the
hand
to
join
the
cue,
and
then
you
can
unmute
once
it's
your
turn.
Please
stay
muted
until
that's
the
case.
D: The Note Well — it's Thursday, so most of you have probably seen it, but some folks might be joining for their first session today. Everything we do at the IETF is covered by the Note Well, and if you haven't read it, you really should do that. It said you had to read it to register, so hopefully you at least glanced at it. In particular, anything you say has implications with regards to copyright and patents, and the IETF has a code of conduct that the chairs will be enforcing.
D: It pretty much boils down to "be nice", which is what everyone has been doing in this working group so far, so we're really happy for that — let's keep it that way. All right, some more links. We would like volunteers, as usual, for a Jabber scribe and a note taker. Do we have volunteers, please? Thank you, Lucas, for Jabber scribing. Who would like to be our note taker, please?
D: For note-taking — awesome, thanks. And yeah, you can collaborate, both of you, on the usual document that's accessible from the IETF agenda page. Awesome, thanks. All right, our agenda today is me talking for a while — that's done — then we're gonna have a W3C update from Will. We are then going to talk about WebTransport over HTTP/3, then WebTransport over HTTP/2, and wrap up. Would anyone like to bash the agenda?
E: Okay, thank you. Good morning from California. I'm Will Law from Akamai; I'm representing the W3C group looking at developing the API for WebTransport for browsers, and I also represent Jan-Ivar Bruaroey, my co-chair. So, just a quick update for you — three slides. Progress since November 9th, the last time we reported: the status of the spec is now a Working Draft. The latest version is March 11, 2022, and there's a link in the slides here that you can go to.
E: As chairs, we've set a somewhat optimistic timetable for the year; it's outlined here, with Candidate Recommendation coming up awfully quickly. The main blocker: by July 31st we head to Proposed Recommendation, which requires at least two independent implementations. Chrome has been in production since M97; however, Mozilla have not yet signaled a date, so July 31st is somewhat challenged at this point, and I would encourage browser vendors other than Chrome
E: to please step up here. We have a charter valid through September 30th; we will extend that charter if we're not able to get the second implementation by that date. Milestones: we've adjusted our milestones to match the release process, so we have a new milestone up now called Candidate Recommendation; it contains the issues that need to be resolved before we can proceed to that stage. Minimum Viable Ship, the prior one, has a few issues remaining.
D: There should be controls for you at the bottom of the slides, where you can just hit "next slide".
E: Some of the resolutions: on establishing a session, clients should not be providing credentials — I've got issue links there you can go through to read the details. Blocking ports on Fetch's bad ports list. Adding smoothed RTT and RTT variation to WebTransport stats, with the definitions coming from elsewhere. Packets lost in WebTransport stats. Extra requirements on server certificate hashes. On WebTransport getStats, we've added expired outgoing, dropped incoming, and lost datagram counts, and decided that stream stats will be an individual getStats method on each stream instance.
E: And after some long debate — mostly resolved but not yet merged — we have some decisions around reliability and fallback, and also some good discussions about the actual state of the constructor argument.
E: You know, requireUnreliable versus requestReliable, for example. Content Security Policy has also been a long-running issue, and there was recently a decision to break compatibility with the early version of Chrome, in the hope that it's better to do it now, as there are not that many users, and that it will be more stable going forward. Datagram priority over streams: there's been a change — the priority algorithm had some guidance, and it's been replaced with normative language that the user agent should make sure that datagrams get out quickly.
E: The last one was very sort of absolute — datagrams always go out first — and there were some potential issues there. Current issues under debate: bike-shedding on names for stream stats, which should be resolvable; connection pooling, where new issues have been raised around whether it's default-on or default-off and how to expose that in the constructor; and then, lastly, priorities between streams — is this necessary? And also, this is a question for the IETF:
E: Is there intent to define a method of stream prioritization that might then filter down to the browser level that we could take advantage of? That concludes the summary from the W3C. Many of the participants of the W3C are on this call, so if there are any questions we can take them now — or else, back to David.
G: Go ahead. So, Will's asking about priority here, which I think is probably one of the more interesting questions that we could discuss here; there's potentially something we could do. We could take the HTTP Priorities thing and just sort of say: well, there's eight levels and off we go. Or we could take a leaf out of the WebRTC book — I think they have four.
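For orientation, a minimal sketch of what the "eight lanes" idea could look like, assuming RFC 9218-style urgency values; the type and function names below are illustrative, not from any draft.

```ts
// Sketch of an "eight lanes" scheduler in the style of Extensible
// Priorities (RFC 9218): urgency 0 (highest) through 7 (lowest).
interface PrioritizedStream {
  id: number;
  urgency: number; // 0..7, lower is more urgent
}

// Pick the next stream to service: the one with the lowest urgency value.
function pickNext(streams: PrioritizedStream[]): PrioritizedStream | undefined {
  return streams.reduce<PrioritizedStream | undefined>(
    (best, s) => (best === undefined || s.urgency < best.urgency ? s : best),
    undefined
  );
}
```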
G: Does anyone want to bike-shed on this one? I think it's really coming down to that.
E: No is the answer, and they don't have to be precisely identical in feature set. I think there's an intent that they should deliver the majority of the capability, but, you know, Chrome certainly shipped without these features in, and that's acceptable — they're getting added post-ship. So I think the requirement is that we want basic functionality: be able to connect, be able to establish streams, send datagrams.
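For readers following along, a minimal sketch of that basic functionality using the W3C API shape as of the Working Draft; the URL is a placeholder and error handling is elided.

```ts
// Connect, open a bidirectional stream, and send an unreliable datagram
// using the browser-side WebTransport API. URL is illustrative.
async function demo(): Promise<void> {
  const wt = new WebTransport("https://example.com:4433/webtransport");
  await wt.ready; // resolves once the session is established

  const stream = await wt.createBidirectionalStream();
  const writer = stream.writable.getWriter();
  await writer.write(new TextEncoder().encode("hello"));
  await writer.close();

  const dgrams = wt.datagrams.writable.getWriter();
  await dgrams.write(new Uint8Array([0x01, 0x02]));
}
```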
I: Alan Frindell. With respect to priority, I like the HTTP scheme a lot, in terms of the eight lanes. In terms of the different priority schemes we've looked at over time, at least in HTTP, I think this one has got the right balance of flexibility, power, and simplicity, and I'm not sure creating another one would be really useful — so, finding a way to adapt that as a model. There are two pieces. One would be: if WebTransport adopted it, would we add signaling to the wire protocol so that the browser side can set the priority on streams that the server is sending? And the other is just on the API side: can the browser set priorities on the streams that it is sending — which strictly does not require wire-format changes; it's just between the API and the sending implementation. Thanks.
D: Thanks. Just double-checking — I'm not seeing any typing happening in the doc. Bernard, are you taking minutes?
K: I was mostly under the impression that priorities, at least outside of those communicated on the wire by HTTP, are typically considered an API concern. So, personally, unless we intend to provide a mechanism for end-to-end signaling, I don't think there is much to be done in the working group, since, as far as I can see, it's mostly an API problem — it is about how web applications signal priorities to the browser's network stack, not to the peer.
G: Yeah, I'm going to agree with Victor here. I think that if an application wants to do the signaling, it can use WebTransport for that — that's probably the most obvious thing to do here — and then it becomes a purely API consideration. And, to Alan's point, the Extensible Prioritization scheme scares me, I think, probably, for something like WebTransport.
K: I'm Victor Vasiliev, the editor for the overview and WebTransport over HTTP/3 drafts, and those are the two drafts that are probably currently furthest along in the process, in the sense that WebTransport over HTTP/3 has an implementation in Chrome, and to some extent it now works — we've interoped with servers, and it's been used by people to interop with servers that were written by not-us. But there are still a lot of unresolved issues, especially in the overview draft. The overview draft was written probably the earliest in the WebTransport process, so it is to some extent the least reflective of what currently exists, and that's the reason we have so far not paid much attention to it.
K
However,
now
that
web
transport
over
http
2
is
moving
along,
we
need
to
provide
some
form
of
abstraction
between
what
what
transport
over
h3
does
and
web
transport
over
h2
does,
and
that
is
something
I
believe
we
should
do
in
the
overview
draft,
since
it
is
well
positioned
to
do
that,
I
wrote
up
a
pull
request
to
do
that,
or
at
least
some
of
that
that
pull
request
defines
all
of
the
operations
that
are
cited
in
w3c
spec,
except
the
w3cs
pack
currently
size,
either
or
c9000,
or
with
transport
over
http
draft
for
those
which
obviously
does
not
work.
K
If
we
want
this
to
work
for
perfect
free
image
too,
I
encourage
everyone
to
take
a
closer
look
at
that
pull
request.
I
don't
think
there's
much
to
discuss
since
it's
mostly
around
the
text.
Oh.
K
So
now,
web
transport
over
http
still
has
a
lot
of
some
issues
that
are
not
resolved,
and
one
of
them
is
the
question
of
whether
we
need
to
support
a
go
away
like
mechanism
for
training
sessions
and
the
reasoning
behind
the.
Why
we
need
to
send
the
protocol
as
opposed.
We
need
it
in
the
application
and
the
application
can
handle
it
by
itself.
K
Is
that
the
go
away
can
be
not
only
sent
by
the
end
points.
It
can
also
be
sent
by
a
reverse
proxy,
which
is
going
away
or
some
other
element
in
the
chain.
K: I see. Then the question would be: Alan, would you like to write up a pull request for this?
D: What's that — Alan says sure? Okay. All right, let the minutes reflect that Alan said sure.
I: Alan Frindell. So I think, when you think about it in the context of HTTP/2, where all of the WebTransport streams are not visible at the HTTP session layer, an HTTP/2 GOAWAY would have no impact on whether a WebTransport endpoint could continue creating new WebTransport streams within an established HTTP/2 stream, and I think we should preserve those semantics for HTTP/3. So if you receive a GOAWAY, it means no new WebTransport sessions, but you can still create WebTransport streams if you want to.
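A sketch of the semantics Alan proposes, in illustrative TypeScript (all names here are hypothetical, not from any draft): after a GOAWAY, refuse new WebTransport sessions, but let established sessions keep opening streams.

```ts
// After GOAWAY: no new sessions, but streams within existing sessions
// remain allowed.
class ConnectionState {
  private goawayReceived = false;
  private sessions = new Set<number>(); // IDs of established CONNECT streams

  onSessionEstablished(id: number): void {
    this.sessions.add(id);
  }

  onGoaway(): void {
    this.goawayReceived = true; // existing sessions are unaffected
  }

  canCreateSession(): boolean {
    return !this.goawayReceived;
  }

  canCreateStream(sessionId: number): boolean {
    return this.sessions.has(sessionId); // session must already be established
  }
}
```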
K
Martin
says
in
japan
he
agrees
with
alan.
Does
anyone
else
has
anything
to
say.
K
D: So, to clarify, Victor: what are your thoughts? Do you agree with Alan and MT? Oh — yes? Cool. And I'm seeing another…
K: At some point, Alan suggested that we might want to keep the individual streams open even when the session is closed, and when we discussed this a year ago, the conclusion was to revisit it once we have more implementation experience. My current impression, after having more implementation experience, is that having the semantics where the control stream automatically terminates everything on the session is much easier to follow and much easier to manage in terms of resource lifecycle. Also, if we allowed that, I am not sure how it would look for HTTP/2, where everything is on a single stream. So I think we should keep the current semantics, where closing the CONNECT stream automatically terminates everything.
L
I
think
I
would
generally
agree
with
having
killing
the
connect
stream
kill
everything
else.
It
seems
like
if
you
don't
do
that.
What
you're
effectively
saying
is
that's
kind
of
an
implicit
a
couple
slides
ago
the
go
away
that
we
wanted
for
a
web
transport
session,
and
so
it
seems
safer
to
make
that
explicit,
go
away.
If
you
wanted
those
semantics
and
then
still
have
your
control
stream
around,
if
you
needed
control
messages
rather
than
having
the
control
stream
disappear
and
trying
to
assign
some
sort
of
meaning
to
that.
M: Yeah, I mean, it depends on what protocol you're doing; in some cases it might be that you actually want to finish sending. It's a question of whether you're relying on the higher-level protocol built on top of WebTransport, or on WebTransport itself, to do the close — which kind of semantics you want. So it's a bit… yeah.
G: So, if we're thinking about the HTTP/2 version of this one: if you close the CONNECT stream, you have also just shut off your ability to send any more stream data of any type — and, I think, you're also prevented from receiving it, in a sense. Probably not. So I tend to think that, once we've closed the stream, everything else is effectively orphaned at that point.
D
Hey
victor
speaking
as
an
individual
contributor.
Well,
rather,
as
a
mask
enthusiast
here
in
the
http
datagrams
draft
that
we
have
over
in
mask
when
you
close
the
connect
stream,
you
can
no
longer
send
datagrams,
so
my
personal
take
would
be.
It
just
makes
everything
easier.
If,
when
you
close
the
connect
stream,
everything's
done.
K
I'm
under
impression
that
we're
all
at
this
point
agreeing
that
closing
the
connect
stream
effectively
closes
everything
on
the
session,
which
I
think
is
the
current
text
of
the
draft.
I
need
to
make
sure
that
this
is
very
clear
on
this,
but
that
sounds
like
we
are
good
to
just
keep
it,
as
is.
M: Magnus Westerlund. One question I have is about receiving the data: if you're basically sending the rest of the data you have, and then you hit close, is there a risk that the receiver will get the close before the data is delivered, and then rejects it? I would be a little bit careful about that particular use case.
D: There is in WebTransport — sorry, I guess I'll just jump in, Victor, if that's okay with you — a CLOSE_SESSION capsule that is the graceful close. So that's…
K: To clarify, there are effectively three ways you can close the CONNECT stream. You can close it by sending a CLOSE_SESSION capsule, which gives you the status code; you can close it by just sending a FIN, which is equivalent to sending an empty capsule; and you can reset. This issue roughly covers all three of those cases, but yeah, in general.
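To make the first variant concrete, a sketch of encoding a CLOSE_SESSION-style capsule: generic capsule framing (varint type, varint length, value) carrying a 32-bit error code and a reason string. The capsule type constant below is a placeholder, not a registered value.

```ts
const CLOSE_SESSION_TYPE = 0x2843; // hypothetical, not a registered capsule type

// QUIC-style variable-length integer (handles values below 2^30 for brevity).
function encodeVarint(v: number): Uint8Array {
  if (v < 64) return new Uint8Array([v]);
  if (v < 16384) return new Uint8Array([0x40 | (v >> 8), v & 0xff]);
  return new Uint8Array([0x80 | (v >>> 24), (v >>> 16) & 0xff, (v >>> 8) & 0xff, v & 0xff]);
}

function encodeCloseSession(errorCode: number, reason: string): Uint8Array {
  const reasonBytes = new TextEncoder().encode(reason);
  const body = new Uint8Array(4 + reasonBytes.length);
  new DataView(body.buffer).setUint32(0, errorCode); // 32-bit application error code
  body.set(reasonBytes, 4);

  const parts = [encodeVarint(CLOSE_SESSION_TYPE), encodeVarint(body.length), body];
  const out = new Uint8Array(parts.reduce((n, p) => n + p.length, 0));
  let off = 0;
  for (const p of parts) { out.set(p, off); off += p.length; }
  return out;
}
```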
K
For
pooled,
we
have
to
be
cognizant
of
what
regular
http
3
does
and
one
of
the
tricky
one
of
the
first
questions
I
have
before
even
discussing
this
issue
is
whether
this
belongs
in
the
iatf
draft
or
or
doubly
first
t
draft,
or
this
belongs
in
fetch,
martin.
G: So, in terms of the effect on a WebTransport session, it should be nothing. It may affect what the browser does with future requests, but that's not something that WebTransport necessarily has to concern itself with — the sort of broader questions about how WebTransport interacts with connections.
D: Thank you, Victor. Eric, come on up.
L
All
right,
so
this
is
still
an
hdb3
topic
as
we
go,
but
we're
splitting
some
of
this
across
h3
and
then
we'll
also
talk
about
h2.
L
So,
let's
talk
about
pooling,
we
have
decided
a
while
back
that
we
would
like
to
support
pooling,
and
that
means
that
we
want
to
be
able
to
have
more
than
one
web
transport
session
within
a
given
http
3
connection,
and
I've
got
a
little
side
note
here
on
the
side
that
this
is
really
not
that
much
of
an
issue
for
h2,
because
each
session
is
fully
encapsulated
within
one
connect.
Stream
and
h2
is
already
perfectly
capable
of
having
more
than
one
connect
stream
and
so
a
lot
of
the
layering.
L
There
works
perfectly
well
and
is
not
a
huge
problem,
but
within
h3
we've
kind
of
got
these
native
h3
streams
that
we're
also
using
for
web
transport,
and
so
we
need
to
figure
out
how
do
we
make
more
than
one
web
transport
session
work
there's
also
another
consideration
which
applies
to
both
versions
of
http,
which
is
that
we
want
web
transport
to
be
able
to
coexist
with
regular
http
requests
going
over
that
h3
connection,
not
just
web
transport
sessions.
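For reference, opting into pooling on the client side in the W3C API as it currently stands looks roughly like the sketch below (URL illustrative): allowPooling permits the user agent to share one connection between this session, other pooled sessions, and ordinary fetches.

```ts
// Pooled: may share an HTTP/3 connection with other traffic to this origin.
const pooled = new WebTransport("https://example.com/wt", { allowPooling: true });

// Default (no pooling): a dedicated connection for this session.
const dedicated = new WebTransport("https://example.com/wt");
```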
L
So,
coming
back
to
the
pretty
picture
that
we
looked
at
last
time,
this
is
kind
of
visually.
What
we
think
we're
doing-
and
we
had
this
up
in
the
context
of
flow
control
for
h2,
but
essentially
within
http
2
every
web
transport
session,
is
within
a
connect
stream
and
you
can
have
multiple
of
those
pooling
done.
L
H3
is
a
little
bit
more
interesting
because
you
have
web
transport
sessions
where
here
we've
kind
of
broadened
this
box,
and
this
is
all
a
little
bit
abstract.
So
we
don't
need
to
get
too
into
the
details
of
this
diagram.
But
what
we're
really
trying
to
convey
is
that
we've
got
these
different
native
http
3
streams
that
are
serving
different
purposes
on
behalf
of
that
web
transport
session.
L
And
so,
if
you
want
to
have
multiple
web
transport
sessions,
those
are,
by
definition,
going
to
end
up
using
multiple
actual
h3
streams
for
the
different
web
transport
streams
that
go
within
those
web
transport
sessions.
L
So
the
proposal
that
I'm
going
to
make
with
these
slides
for
discussion
is
that
we
do
the
necessary
things
in
order
to
keep
any
individual
web
transport
session
from
essentially
unfairly
dominating
the
h3
connection
and
call
it
a
day
and
explicitly.
That
means
we're
not
going
to
do
anything
to
try
to
preclude
future
enhancements
to
this.
L
So,
as
we
discover
other
places
where
we
need
to
do
things,
it
would
be
great
to
do
those
as
extensions,
but
we're
not
going
to
put
any
time
and
energy
into
them
right
now,
and
that
would
close
all
four
of
these
different
github
issues,
which
is
a
non-trivial
chunk
of
what
is
remaining
across
the
set
of
web
transport
repositories.
L
What's
left,
though,
is
some
sort
of
control
on
the
maximum
count
of
sessions
that
you
can
make
right?
So
if
I
wanted
to
open
a
million
web
transport
sessions
within
my
h3
connection,
that
might
not
leave
any
room
for
non-web
transport
requests
to
go
on
there
and
also
the
count
of
streams
for
each
session.
For
the
same
reason,
because
the
streams
are
native
h3
streams,
those
are
kind
of
entangled.
L
We
do
have
a
note
in
both
h3
and
also
actually
in
h2
right
now.
That
says
that
we
don't
actually
go
out
of
our
way
to
provide
any
limit
on
the
number
of
web
transport
sessions
that
you
can
open
and
that
instead
there's
two
different
ways
that
the
server
or
the
the
receiver
of
that
can
just
say,
nope.
This
isn't
happening,
and
I
just
reject
it.
L
L
L
L: It's also very similar to how it looks in QUIC, and so, in the spirit of maintaining similarity with that, making something for WebTransport that parallels it very closely seems appreciably nice.
L: The way that we generally do that, for example within QUIC, is we have a MAX_STREAMS frame that adjusts the limit on the number of streams, and then we have MAX_DATA and MAX_STREAM_DATA. And, of course, all of these have their associated BLOCKED frames as well, to help with debugging and other issues like that.
L
We
have
this
nice
property
here
in
which,
because
web
transport
over
h3
is
already
using
native
h3
streams,
those
have
a
flow
control
limit
on
them
already
from
quick,
which
is
that
mastering
data
frame.
So
we
don't
actually
need
a
new
one
there
that
is
already
present,
and
if
we
do
the
maxweb
transport
sessions
setting,
then
we
don't
need
any
other
limit
for
total
session
count.
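A hypothetical sketch of how an endpoint could enforce such a max-sessions setting received from its peer; the class and names are illustrative — the draft text is what would govern.

```ts
// Enforce a session-count limit advertised by the peer (e.g., in SETTINGS).
class SessionLimiter {
  private open = 0;
  constructor(private readonly maxSessions: number) {} // from peer's SETTINGS

  tryOpenSession(): boolean {
    if (this.open >= this.maxSessions) return false; // or queue until one closes
    this.open += 1;
    return true;
  }

  onSessionClosed(): void {
    this.open -= 1;
  }
}
```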
L
If
we
go
back
to
our
pretty
picture
from
before
I've
added,
some
vaguely
shaped
annotations
to
kind
of
try
to
help
visualize
what
this
means.
This
may
or
may
not
be
meaningful
if
it
is
not
no
worries,
so
the
first
one
is
down
at
the
bottom.
There
we've
got
our
setting
for
mac
sessions
and
I've
put
that
around
the
fun
little
dot
dot
dot.
L
This
is
to
basically
allow
you
to
portion
out
the
stream
limit
that
you
have
for
the
total
h3
connection
and
then,
within
that
web
transport
session
across
all
of
the
streams
for
that
session,
you
can
do
the
same
thing
for
the
number
of
bytes,
so
you've
got
count
of
streams
plus
you've
got
bytes.
L
We've
answered
as
much
of
the
pooling
question,
as
we're
planning
on
answering
you've
got
associate
the
the
necessary
resource
limits
in
order
to
make
sure
that
different
web
transports
can
coexist
on
an
h3
connection
along
with
other
actual
http
requests.
Everybody
plays
nice
and
we
are
happy.
L: I don't have a strong negative aversion to keeping track, within a WebTransport session, of the amount of data that you're allowed to send, and having a limit about that be communicated from the other end. But it could also be that I'm not correctly internalizing how painful that would actually be — see, David is not standing up. Yeah.
H: Yes — so, of course, you can read the data out of the stream and then have all your buffers at the WebTransport layer. So I guess my question is: how do we account for missing packets or missing frames on a stream? In QUIC flow control, the maximum offset that you have received counts toward flow control.
H
Do
we
then
say
like
we
don't
care
about
this
or
we
let
the
quick
layer
handle
this,
and
we
have
like
this
totally
separate
flow
control
on
on
the
web
transport
layer?
I
I
haven't
fully
thought
this
through,
because
we
are
now
reusing
the
the
quick
flow
control
from
this
stream
on
the
web
transport
layer.
If
there's
any
mismatch
there,
I
would
need
some
more
time
for
that.
G: I suspect that probably the right thing to do here is to not deal with the session-level data limits, and simply say that you have QUIC-level flow control acting on the bytes.
G: Knowing that that enforcement is going to be imperfect at that point — and I certainly don't want to have it with its tentacles deeply embedded in the QUIC stack in order to get the values that the QUIC stack is using. So I think I'm all for saying: okay, maybe we just don't do that one.
L
If
we're
not
doing
that
right,
I
mean
the
the
reason
we
have
a
lot
of
those
tiers
is
to
allow
people
to
be
more
restrictive
and
therefore
allow
any
individual
stream
to
use
that
entire
limit
without
having
to
commit
to
using
these.
You
know
to
being
willing
to
devote
the
resources
for
the
sum
of
all
of
the
streams,
but
it's
entirely
possible
that
you
can
tackle
that
elsewhere
anyway,.
G
So
many
new
buttons,
I'm
sorry
about
that.
The
mac
streams-
one,
I
think,
would
be
an
absolute
number
right,
so
you
would
say
I'm
dedicating
20
streams
to
each
web
transport
session.
We
probably
have
to
have
some
sort
of
minimum
commitment
for
the
purposes
of
like
this.
G: You must provide this many streams — but that doesn't mean that you're going to get that many streams. And so, if you have enough, you can just count how many streams you've got at the moment. There's a little bit of trickiness in terms of the accounting, but I think we can make that work. And, of course, if the QUIC layer isn't able to issue new streams fast enough, then you won't be able to get up to this limit. But that's okay.
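A sketch of the accounting Martin describes, under illustrative names: a per-session stream allowance checked alongside whatever stream credit the QUIC layer can actually issue at the moment.

```ts
// Hypothetical per-session stream accounting: enforce a fixed per-session
// allowance, independent of whether QUIC can issue a new stream right now.
class StreamBudget {
  private inUse = 0;
  constructor(private readonly maxStreamsPerSession: number) {}

  tryOpenStream(quicCanOpenStream: boolean): boolean {
    if (!quicCanOpenStream) return false;                       // transport credit exhausted
    if (this.inUse >= this.maxStreamsPerSession) return false;  // session limit hit
    this.inUse += 1;
    return true;
  }

  onStreamClosed(): void {
    this.inUse -= 1;
  }
}
```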
L: And, for what it's worth, I think that's probably a good thing — in the same way that MAX_DATA gives you a way to say: I'm not willing to take the sum of all of them in total; I want something smaller than that. You could always say, at the QUIC layer, "I'm only willing to take this many streams", but each WebTransport session is able to use all of them — which does not then commit you to the sum of all of your WebTransport sessions in stream count.
I: Alan Frindell. So I think I hear the concern that max data is hard — maybe, I don't want to say impossible, but much harder than streams. So I think maybe what makes sense is we separate this out and kind of table max data for now. But I'm left a little uncomfortable thinking that one session can cannibalize QUIC's entire flow control and leave other sessions blocked, so it would be good to have a solution — but, you know, how complicated does it need to be, and is it worth implementing? Maybe not. So I think I'm okay to move forward here: let's just look at streams for now, and maybe keep an issue open to revisit data, or have somebody sketch out what it would really take.
D
Jumping
in
as
chair
here
that
what
I
would
propose,
as
chair
assuming
folks
agree,
is
well
starting.
First
with,
like
limiting
the
total
session,
count
via
settings,
my
understanding
from
the
room
that
I
I'm
hearing
consensus
that
that's
good.
If
anyone
disagrees,
please
come
to
the
mic
line
now.
D
Okay,
like
limiting
max
streams,
it
sounds
like
this
might
be
hard,
but
we
want
to
try
so
I'm
getting
consensus.
Let's
write
a
pr
to
do
this
and
have
someone
implement
it
and
see
from
there.
If
you
disagree,
please
jump
up
to
the
mic,
I'm
getting
a
sense
that
max
data
most
folks
think
that
it's
too
hard,
but
alan
thinks
it
might
be
possible.
D
I
personally
as
well
as
chair.
I
I
don't
love
the
idea
of
keeping
an
issue
open
with
no
clear
resolution
points,
so
what
I
would
suggest
would
be
to
close
the
issue
and
if
someone
has
a
design
that
they've
implemented,
that
they
they
want
to
bring
to
the
working
group
to
say
I
figured
out
how
to
do
this.
I: Just to repeat what I said in the chat: I'm not any more sure than anybody else that it's possible, but I am interested in spending some more time — or having the group spend more time — exploring whether it's possible without bending over backwards or doing something that's not worth implementing. I'd personally like to keep the issues open until they're really resolved, since otherwise it might get lost, but I'll let other people decide how we want to handle it administratively.
D
My
my
take
is,
if
there's
something
that
no
one
has
an
intuition
on
how
to
solve
keeping
it
open
like
doesn't
I
don't
see
a
what
leads
us
to
closing
it,
and
so
I'd
rather
have
like
us
fail
like
if
no
one
proposes
anything,
the
answer
is
gonna,
be
we're
not
doing
it,
and
so
I
I
would
put
the
burden
on
whoever
and
you
know
we're
at
a
point
on
this
document
where
anyone
can
file
an
issue.
So
if
anyone
comes
up
with
a
proposal,
then
they
open
an
issue.
I: No, it's okay — I mean, go ahead and close it, if that's what we want to do. I will probably forget about it; we will all forget about it. Probably sometime after we ship it, somebody will report a very complicated bug where we've hit this issue and complain, and then we'll kick ourselves for not having spent more time on it.
D: Okay, all right! Well, I'm getting plus-ones in the chat for what I was proposing, so I'm going to go with this, then. All right, I'll add some text to these issues. Eric, if you want to keep going.
L: So, since IETF 112 we've landed a bunch of the changes that we talked about then; however, for a bunch of the changes, the resolution was to wait for the HTTP Datagram design team in MASQUE, which has now concluded — so that is fantastic, we are unblocked on that. The remaining outcome from that, which we need to decide what to do with, is capsules.
L
So
that's
going
to
be
where
I
suspect,
we'll
spend
the
bulk
of
our
time
today
before
we
do
that.
There
is
one
other
issue
that
I
think
martin
filed.
That
is
something
I
wanted
to
bring
up,
so
we
all
talk
about
it
a
little
bit
and
see
if
there's
any
intuition
from
anyone
that
that
would
prevent
what
seems
like
the
right
answer
to
me.
L: Neither side can send any WebTransport frames at all — you're stuck for that entire RTT — and I think Martin very rightly points out that's basically just arbitrarily delaying frames that you're allowed to send by the flow-control limits, et cetera: any frame that would otherwise be legitimate.
L: Why should we prevent you from sending them? And so the proposal here is: allow sending them; don't force people to have extra RTTs — or, you know, trips, as opposed to round trips — and let everything get going a little bit sooner. If you want to wait, nothing stops you from waiting; that can be a nice, simple, easy way to implement it.
L: But if somebody wants to put in the effort to correctly send frames to allow things to get going sooner — especially since WebTransport is focused on allowing the server to also initiate WebTransport streams: if one of the first things that you're planning on doing is having the server open a number of streams, then having it be able to send those frames without waiting for the CONNECT response to get to the client, and then for frames to get back from the client to the server so that it can then open some streams, seems really very pleasing. So if anybody is strongly opposed to that, or sees a reason why we should keep that restriction around, now would be the time to speak — because it seems great to let things get going and not have extra RTTs when we don't need them.
L: So, the datagram design team is now complete — thank you, David, and a bunch of other fine folk who participated there and helped get that out in time so that we could talk about this here. Previously, in h2, we defined TLVs for every WebTransport frame, and now that capsules are a thing, we could potentially be registering them…
L
Instead
of
in
our
registry
of
places
where
we
keep
our
list
of
web
transport
frames
to
use
with
h2,
we
could
register
them
in
a
different
registry
where
we
keep
the
list
of
capsules,
and
so
that
brings
up
the
somewhat
obvious
question
of
great.
So
what
do
we
get
if
we
use
capsules-
and
this
is
our
current
list
of
frames-
we
arrived
at
this
a
couple
of
meetings
ago-
it's
basically
the
minimal
subset
of
things
that
we
need
in
order
to
send
on
the
connect
stream
so
that
web
transport
works.
L
If
we
use
capsules,
there
exists
this
cool
capsule
already
called
datagram,
and
we
could
potentially
just
use
that
instead
of
having
it
be
wt
datagram,
the
definition
is
the
same.
The
wire
format
is
the
same.
The
semantics
are
essentially
the
same
with
a
little
bit
of
caveat
that
we'll
talk
about
a
little
bit
later.
L
We've
just
talked
about
removing
web
transport
max
data
from
that
list,
so
the
reuse
here
on
the
later
part
of
this
slide
was
previously
going
to
be
web
transport
mac
streams,
along
with
the
blocked
variant
as
well
as
web
transport
max
data,
along
with
the
blocked
variant.
That
would
now
just
be
web
transport
max
streams.
But
the
point
is
the
same:
if
there's
anything
from
h2
that
we
need
to
reuse
in
h3,
we
can
just
use
it.
It's
great.
It's
happy
we're
in
a
shared
registry.
Already,
anyway,
no
problems
there.
L
The
thing
that
I
think
we
want
to
discuss
is
whether
the
end
to
endianness
of
these
capsules
is
going
to
be
an
issue.
Is
that
a
good
thing?
Is
that
a
bad
thing?
And
so
my
understanding
is
that
the
tlvs
that
we
were
previously
defining
for
your
web
transport
over
h2
frames
were
not
end
to
end.
They
would
be
consumed
by
whoever's
terminating,
that
particular
h2
connection.
L
And
if
you
imagine
a
scenario
where
not
everybody
wants
to
implement
or
be
supporting
web
transport
over
h2,
but
we
do
think
that
we
need
it.
Maybe,
on
the
actual
connection
over
you
know,
kind
of
the
last
mile
of
the
internet
to
whatever
client
is
connecting
because
it
may
not
have
access
to
quick
on
some
percentage
of
networks.
L
You
could
imagine
that
an
intermediary
would
allow
a
client
to
fall
back
to
web
transport
over
h2,
but
still
want
to
speak
web
transport
over
h3
to
whatever,
upstream
or
origin
that
it's
going
to
be
talking
to,
and
it
would
be
doing
that
for
all
of
its
clients.
So
it
would
be
talking
h3
upstream
all
the
time
and
it
would
usually
be
talking
h3
downstream
to
the
client
most
of
the
time,
but
sometimes
some
of
the
clients
are
going
to
need
to
fall
back
to
h2,
and
we
want
that
to
work.
I: Alan Frindell. Just to repeat what I put in the chat, which is that the DATAGRAM capsule is end-to-end as a concept, but it's not transmitted at each hop as a capsule, right? When you go through a hop, and it came in as a capsule, but the other end supports a native datagram concept, then the intermediary is allowed to sort of convert. And so, for some of the capsules — I don't know if you want to go back to the slide…
I
That
has
them
all
that
might
make
sense
where
you
know,
for
example,
probably
like
reset
stream
or
stop
sending,
can
make
sense
in
that
context,
I'm
not
so
sure
about
the
ones
that
are
trying
to
control
resource
limits,
because
those
resource
limits
are
very
much
hot
by
hop.
L
Yeah,
I
think
that
matches
kind
of
my
initial
intuition,
which
is
at
the
very
least
the
flow
control
ones,
seem
usefully
hot
by
hot,
although
end-to-end
flow
control
across
web
transport
session
is
an
interesting
concept,
but
I
don't
know
that
it
was
the
original
intent
of
having
those
resource
limits
and
and
allowing
an
intermediary
to
share
that
across
multiple
people,
potentially
who
are
sharing
the
same
upstream
connection.
L
G: So I think Mark Nottingham's review of the datagram draft asked the question: is capsules a premature generalization or abstraction? And I think it's this here that sort of highlights the challenges of using capsules for anything. Because you can look at the datagram thing in isolation and go: yeah, that makes a fair bit of sense — we can pull those out and forward those however we choose. But then I'm looking at the stuff you're doing with streams here and thinking: if there are connection-level constraints on what we can do with streams — the amount of data we can put on them, the number that we can have…
L
I
think
there's
something
in
the
sense
that,
as
alan
pointed
out
like
there
is
a
way
and
datagram
is
actually
an
example
of
that
where,
in
the
http
datagrams
document
it
says
you
know,
each
capsule
type
can
define
the
translation
that
it
should
do.
Potentially
when
running
over
different
transports,
and
the
only
capsule
that
is
defined
also
defines
one
of
those
to
say.
Datagram
is,
you
know,
potentially
broken
out
into
being
a
native
datagram
so
like
that
concept
is
appealing-
and
it's
maybe
possible
to
do
that
for
each
of
these
things.
D
David
schnazzy
speaking
as
an
individual,
so
I've
been
thinking
about
this
capsule
thing
for
a
while
over
in
mask
land
and
kind
of
saw.
This
come
up
and
was
like
trying
to,
because
I
I
agree
with
martin
is
it
I
mean
that's
what's
made
mask
interesting
is
depending
on
how
you
stare
at
it.
D
Everything
looks
completely
different
and
the
I
think
what
it
comes
down
to
is
are:
is
web
transport
of
http
3,
the
same
protocol
as
web
transport
over
http,
2
and
1.,
or
are
they
different
protocols
and
by
protocol
I
made
http
upgrade
token,
perhaps
and
and
where
this
gets
really
interesting?
Is
let's
say
you
have
a
scenario
where
you
have
two
h2
hops
and
an
intermediary
in
the
middle
you
would
like
capsules
are
great
there.
Everything
you
want
everything
to
be
end
to
end.
D: You have a server that terminates the client's WebTransport-over-HTTP/3 connection, and then it turns around and becomes a WebTransport client over HTTP/2 to the back end, for lack of a better word. At that point, the capsules are end-to-end from the client to the first node, and from the first node to the second node — in that you have three or four "ends" to this protocol, as opposed to two. That kind of makes it make sense to me; this feels consistent.
D
It
actually
like
I
I
we
had
another
discussion
of
like
they
do,
should
they
have
different
http
upgrade
tokens.
It
kind
of
pushes
me
in
the
yes
bucket
for
that
a
little
bit,
but
that
makes
it
all
of
this
kind
of
self-consistent
and
kind
of
clean
and
reasonable
to
me
empty.
Does
this
make
any
sense
to
you.
G: Amazingly, yes — that makes sense. I think the sort of minimization that comes — if you've got that sort of intermediary, with an h2 leg on one side and h3 on the other, or vice versa, and you're doing the WebTransport thing, then that intermediary needs to understand WebTransport, pure and simple, up and down, back and forth. Yep. There are still problems that we don't have solved…
G
In
that
scenario,
though,
and
that's
where
it
starts
to
get
really
interesting
if
the
first
hop
has
no
constraints
on
on
creating
streams,
but
the
second
hop
does
and
you
you
have
a
stream
come
in
at
the
intermediary
and
the
intermediary
cannot
create
the
outgoing
stream.
What
is
it
going
to
do?
Yeah
exactly
the
sort
of
thinking
that
I'm
going
through
right?
How
the
hell
do
you
deal
with
that?
G
And
the
answer
is
I
don't
know
really
that
that's
the
best
that
I've
got
right
now.
D
And
just
to
make
sure
I'm
understanding
you
right
well,
first
off
I
I
agree
like
that
kind
of
back.
Well,
it's
a
form
of
back
pressure,
just
17
different,
like
degrees
of
back
pressure,
50
shades
of
back
pressure.
If
you
will
the
thinking
about
it,
though,
that,
like
this
issue
at
hand,
whether
we're
using
capsules
or
something
or
any
possible
framing
for
this,
we
still
have
to
solve
that
right
like,
unfortunately,
if
you
have
an
intermediary,
no
matter
what
we
have
that
problem.
G
If,
if
it
stays
in
the
stream
and
can
stay
in
the
stream,
then
it's
not
a
problem
if
it,
if
it
has
to
be
lifted
out
and
the
constraints
are
different,
then
then
that's
not
going
to
be
the
case.
So
if
you,
if
your
intermediary
was
just
tunneling
the
effectively
bite
string
back
and
forth,
then
then
that
would
be
fine.
I
think
it
wouldn't
be
constrained
in
any
way
and
the
the
constraints
on
the
number
of
streams
that
you
have
would
be
fully
end.
D: Right. Alan, you're next in queue.
I: Alan Frindell. When you talked about that stream problem going through intermediaries, it reminded me that we have had that problem with — I'm going to say the P word, everyone cover your ears if you don't want to hear it — push. Server push. We had that problem where one hop thought it could create new pushes, and it got to a proxy that no longer had available streams. So yeah, it's kind of a hard problem, and in the h2 version of WebTransport…
F: Hello, Lucas Pardue, speaking for myself. Yeah, I made a comment in the Jabber. My understanding of what Martin said about the stream impedance between creation on one side of an intermediary and the other is that it seems to be something that exists already today — to me, anyway. And maybe, if there is a problem, we could solve it with contexts, and capsules would let us do that if we really wanted to. So I don't see a humongous issue here, myself, that is different from the problem we already have today.
L: Once upon a time we had a conversation — and I think this echoes a lot of what Martin just said in the chat — where we talked about how we define these things that we're sending over the CONNECT stream. Capsules have this cute way to say: you know, it's a datagram; when you're on a transport that supports datagrams, split it out to be a datagram. We could make the WebTransport stream capsule do the same thing and say: when you're on h3, split it out; when you're on h2, don't.
D: Thanks — David, in the queue again as an individual. I agree with Eric here; I just wanted to reply to Martin's point about what this buys us and about adding redundant lengths. So, I'm looking at the h2 draft: we have these frames, and I think it's reasonable that we might want to add a future frame for an extension later.
D
These
are
self-describing,
and
so
we
have
a
sequence
of
tlvs
on
the
data
stream,
which
starts
to
look
like
something
I
know
so
like
we
don't
have
any
redundant
links.
These
are
literally
wire
format
compatible
with
capsules.
The
only
question
is
are
the
do
we
share
types
are?
Is
this
one
eye
on
a
registry
or
two
in
the
registries
and
for
the
reason
that
eric
mentioned
earlier,
having
some
of
them
be
the
same
everywhere
and
some
of
them
be
popped
out
in
quick
for
there?
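Since David appeals to the shared wire shape, a sketch of parsing that self-describing sequence, assuming QUIC-style variable-length integers; the same loop would read either the h2 TLVs or capsules (type, length, value). Names are illustrative.

```ts
// QUIC-style varint: 2-bit length prefix, then 6/14/30/62-bit value.
// (8-byte values can exceed Number.MAX_SAFE_INTEGER; fine for a sketch.)
function decodeVarint(buf: Uint8Array, off: number): [number, number] {
  const first = buf[off];
  const len = 1 << (first >> 6); // 1, 2, 4, or 8 bytes
  let v = first & 0x3f;
  for (let i = 1; i < len; i++) v = v * 256 + buf[off + i];
  return [v, off + len];
}

// Walk a buffer of concatenated TLVs/capsules and yield each one.
function* parseCapsules(buf: Uint8Array): Generator<{ type: number; value: Uint8Array }> {
  let off = 0;
  while (off < buf.length) {
    let type: number;
    let length: number;
    [type, off] = decodeVarint(buf, off);
    [length, off] = decodeVarint(buf, off);
    yield { type, value: buf.subarray(off, off + length) };
    off += length;
  }
}
```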
G: You don't get any real performance benefit — you might attempt, at one point, to get true datagrams and true QUIC streams, and then lose that further down the chain. I was going to say there's probably a simpler way of building this, and it may not be as pleasant, but "don't do that" is actually a pretty good strategy: an intermediary doesn't advertise support for the HTTP/3 version of WebTransport if it cannot guarantee QUIC and HTTP/3 end-to-end, all the way — I'm losing my brain; it's really good at this time of the morning. If it can't guarantee end-to-end, then it doesn't offer it, and the fallback here is the h2 one — which, by the way, probably works adequately on HTTP/1.1 as well. Then that's your fallback.
L: Maybe this is terrible, and they're separate for a reason and should stay separate — but there does seem to be something reasonably attractive about having them share the same thing: WebTransport over h3 is broken out, and we just choose not to define a way to break them out into native h2 streams, because we don't want to deal with that; and then we're all happy, and it's done. The only other point that I do want to make, David: when you say that using a capsule is a choice to put this TLV in a different IANA registry…
L: …I think that's absolutely true, but when we're looking at the other things we get from that — I think the litmus test for "should this be a capsule" is: should it be end-to-end or not? And if not, what is the litmus test? Because "which IANA registry do we want it in" — I mean, that's nice, and I don't care which IANA registry we put things in, but it seems as though, if we're like, "great, this is nice"…
D: So clearly we don't have an obvious answer or definite consensus on this one, but I like Eric's plan of trying it out: having MT look at a PR from Eric, once MT has had a decent night's sleep, and seeing whether he likes it or not, would be a good next step — and, of course, having the entire working group take a look or contribute to that.
L
Yeah,
I'm
certainly
getting
to
do
that
and
you
know
happy
to
do
that
with
other
folks
if
we
want
to
sit
down
together
and
and
talk
about
it
on
a
call
somewhere
and
and
brainstorm
a
little
bit
while
we
do
it
but
yeah
the
other
thing
just
in
case
victor
wasn't
looking
at
the
chat.
I
know
we
talked
about
this
a
little
bit
before,
but
I
wanted
to
make
sure
if
any
reaction
there,
since
a
lot
of
what
we're
just
discussing
kind
of,
has
impacts
for
how
h3
looks.
D: Thanks — and I think having a call sounds like a good idea, with the most passionate folks, to move this forward. I'm debating whether I want to use the term "design team" or not; I'm realizing that the terminology doesn't matter too much.
D: All right, Alan says: it does walk like a design team and quack like a design team. After all… okay, we shall be forming a design team. I will send an email to the list asking for volunteers. Eric, since you've been kind of leading this and offering to do the PR —
D
Would
you
be
willing
to
lead
this
design
team
sure
indeed,
thank
you
so
we'll
send
an
email
to
the
list
and
if
you
are
interested
in
joining,
please
reach
out
to
the
chairs
and
then
we'll
have
the
design
team
figure
this
out
for
us
and
in
an
ideal
world.
If
that
that
would
happen
in
the
near
future,
we
could
have
an
interim
to
discuss
it
similar
to
what
we've
been
doing
over
the
last
three
months
at
a
very
different
working
group.
J: No — I think we've hopefully overcome the note-taking problem; you should see the notes coming up very soon. And it sounds like we've got our next steps.
O: Yeah — Jonathan Lennox. Luke Curley, who said he missed the initial priorities discussion, raised an interesting point in the chat — since we seem to have run out of everything else to talk about: he has a MoQ-style scenario where he wants to be able to prioritize new things he sent over old things he sent, which is hard to do with a small fixed set of priorities.
O
So
I
guess
I
know
the
priorities.
Api
is
not
in
scope
of
this
working
group
per
se,
but
it's
something
to
people
who
are
designing
mechanisms
like
that
to
keep
in
mind
like
whether
that
means
I
mean
it
sounds
like
his
implementation.
Just
has
you
know
64
for
priority
and
just
names
his
priorities
after
his
stream
ids.
O
That's
one
possibility,
or
somebody
could
have
some
way
of
dynamically
adjusting
priorities
of
older
stuff
or
something
like
that,
but
just
something
to
keep
in
mind.
Okay,.
I: I'll talk about the priorities and then make my other point too. So I think, in terms of Luke's scenario: if all you really want is to switch your mode from FIFO to LIFO, maybe the easiest way to do that is to have a single message that goes over your WebTransport application protocol that says: please tell the scheduler on the other side to do everything in reverse order — rather than juggling your eight lanes or whatever. So that may be a possibility that could work.
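A sketch of that single application-level message, with an entirely hypothetical, application-defined shape, sent over a WebTransport stream using the W3C API.

```ts
type SchedulerMode = "fifo" | "lifo";

// Ask the peer's scheduler to drain newest-first (or oldest-first). The
// message format is application-defined and purely illustrative.
async function setPeerSchedulerMode(wt: WebTransport, mode: SchedulerMode): Promise<void> {
  const stream = await wt.createUnidirectionalStream();
  const writer = stream.getWriter();
  await writer.write(new TextEncoder().encode(JSON.stringify({ kind: "scheduler-mode", mode })));
  await writer.close();
}
```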
I: The other point I wanted to make was something that came up right at the end of the Media over QUIC session, which had me thinking about its intersection with WebTransport: in Media over QUIC, if you want to go as fast as possible, at some point you need better interaction with — closer ties with — congestion control, loss recovery, retransmission policies, introspection from the lowest level of the transport, in terms of information it might have that you want to feed back into the application: a really tight coupling. I personally have not been very involved in the W3C API definition for WebTransport — I saw there are WebTransport stats, which maybe have some of that feedback.
D: Okay, thanks — that sounds useful. And this topic has definitely drawn the biggest queue size we've seen in this working group. So, just to be clear, this is not the MoQ BoF — but we can, please, yeah, we can talk about this; let's scope it to the implications for WebTransport. And on your other point: yeah, I encourage you to join the W3C sessions for that.
N: Hi — my name was mentioned. Yeah, just to summarize the priority stuff: it's something that right now is fine, because I have a server, and on the server you can prioritize QUIC streams, but it is something that you eventually need browser support for. The idea is that we want to use the connection, and we want the connection to degrade cleanly when it's out of bandwidth, effectively.
N: The gist of WARP is: newer segments, and audio, are higher priority. It's kind of like a LIFO, although it gets more complicated than that. To the people saying you could just switch the mode: stuff like control messages still needs to be on a higher-priority stream, and whatnot. But I agree — this is something the transport doesn't really need to support; we do need a way for endpoints to tell other endpoints what the priority of a stream is. Like: I created the stream; could you please write back to it with this priority — that's effectively what we're looking for, and that can be accomplished with, like, an HTTP header — like Lucas's extensible priority header. A little bit of signaling — that's probably…
G: So, it was a little unclear from the description there, but it seems like this is mostly something that the application can do for itself, because you're having the client tell the server to prioritize in a particular way — which, I guess, means that your signaling can be worked out on your own terms.

G: There's nothing here that's not possible, because anything is possible once you give someone the ability to send bytes to another party. I was wondering whether there was a requirement from the browser side, from the application: so you're getting multiple streams on the browser and trying to send on them — is there any requirement there, in terms of the sending behavior of the browser?
D: Thanks — and yeah, jumping in as… nope, sorry, go ahead — okay, just jumping in as chair. I think that's a great place to kind of draw the line. You know, our goal for WebTransport has been to ship something that works, at least for now, and in terms of scoping it: if something can be solved at the application layer…
D
Kind
of
our
philosophy
has
been
to
just
keep
it
there
for
now,
and
perhaps
at
some
point,
if
we
realize
every
other
application
is
doing
the
same
thing
like,
for
example,
we
talked
about
messaging
framing,
then
we
might
consider
adding
it
to
like
web
transport
v2
whatever,
but
yeah.
That
seems
like
to
be
a
good,
a
good
demarcation
for
for
what
we
consider
in
scope.
Just.
D: Some thoughts? Lucas, go ahead.

F: Hello, Lucas Pardue speaking. Yeah, obviously this topic is super exciting for me. I'm just not that worried, because the Extensible Priorities draft is just one kind of signal — it's intended to be used one way — and, in practicality, it should definitely be combined with all the other information any sender is using to schedule how stream data or datagram data is sent. Some might ignore it, some might use it, some might weigh it more strongly than other inputs.
F
It
seems
like
web
transport
has
all
the
pieces
there
to,
let
us
add
extra
signals
or
to
provide
some
guidance
on
how
those
signals
could
be
used
slightly
differently.
I
think
the
browser
world
in
particular
has
a
lot
of
constraints
that
others
don't
in
the
quick
world.
We
we
say
you
know
we
should
provide
an
ability
for
applications
to
control
the
prioritization
of
stream
data
as
an
example,
and
then
we
leave
that
completely
undefined.
F
Some
people
hate
that,
but
it
works
okay
for
now
and
we'll
we'll
use
these
things
and
we'll
figure
some
more
stuff
out
and
maybe
in
a
v2
or
some
other
additional
document
we
can
address
this.
My
main
concern
is:
are
we?
Are
we
blocking
anyone
from
being
able
to
extend
or
innovate
or
address
the
use
case
that
they
want?
And
I
don't
think
that's
true,
so
I
think
we're
in
a
decent
state
and
I'm
more
than
happy
to
continue
following
the
discussion
and
seeing
what
we
can
do
to
make
things
better.
P: That seems pretty important for a lot of use cases, particularly the ones Luke was talking about, but I don't think that, for a lot of them, just saying "LIFO" necessarily works. Even in the video case, I think you run into problems: it makes things a little bit better on semi-congested networks, but as soon as things get to the next level of worse, that LIFO mechanism totally fails without an actual prioritization scheme.
J: So, I'm talking with my IETF WebTransport hat off, but I will just say that, in terms of W3C work, we now have a functioning version of the WebTransport API, as well as WebCodecs, in Chromium — and so some of these MoQ scenarios, you can actually build them in a fairly small amount of code.
J
But
one
thing
I'd
like
to
make
clear
is
that,
as
I
see
it,
I
I
believe-
and
maybe
allen
can
correct
me
if
he's
tried
this-
that
this,
the
video
upload
scenario
should
be
implementable
using
the
web
transport
api
as
it
exists
today.
J
The
downstream
scenario
is
a
little
bit
more
complicated,
but
the
key
thing
to
understand
here
is
in
that
scenario
as
as
has
been
described,
the
server
can
do
things
like
priority,
assuming
it's
available
without
the
web
transport
client
having
to
support
that
so,
at
least
within
the
api
scope.
So
far,
it
doesn't
seem
to
me
like
it
seems
like
the
current
api
can
at
least
accommodate
those
two
scenarios,
the
the
downstream
and
the
video
upload.
J
Of
course,
we're
very
willing
to
understand
if
there
are
issues
doing
that,
and
I
would
urge
people
to
actually
try
it
out.
J
So
you
know
we
we
very
s
consciously
didn't
try
to
go
down
to
the
lowest
levels
in
the
api,
like
you
originally
thought,
people
need
access
to
the
ack
info
or
the
guts
of
quick
bandwidth
estimation
and
after
some
debate
we
said
not
really
that's
turning
out
to
be
an
issue
for
rtp
directly
over
quick,
but
I
hopefully
not
for
the
for
most
of
the
mock
use
cases
and
the
stats
that
are
there
to
make
clear
to
people
they're.
J
Not
so
you
can
implement
congestion
control
they're
just
there
really
to
to
give
you
a
sense
of
how
well
your
app
is
performing,
but
I
urge
people
to
try
the
apis
out
and
see
if
there
are
any
issues.
M: Magnus Westerlund. On the priority here: is the question whether you actually need dynamic updates of the priority value for a transmission? Is that part of the problem here, really? I wonder if it's not. Or are you assuming that you have such small data blocks that you're sending one single datagram, and that's the only thing you need, or…
N: Yeah — so I wanted to also answer Bernard. We're doing it: right now Twitch is actually using WebTransport — I think it's on, like, one percent of Chrome users right now. We're not using WebCodecs — we're using MSE still — but I have a player that uses it, and yeah, we are doing prioritization on the server side. In the current design, to answer the previous question, the priority is fixed when the stream is created.
N
We
know
the
priority,
it
doesn't
need
to
change,
although
in
theory
you
could
make
a
slightly
better
design
if
it's
allowed
to
change,
it
really
just
depends.
There's
no
right
answer
for
prioritization,
so
it's
nice
to
have
that
functionality,
but
it's
not
required
either.
I
think
for
bernard.
The
main
concern
is-
and
this
is
a
w3c
thing
is
just
if
you
want
to
switch
the
direction.
If
you
want
to
to
create
media
at
the
browser
and
push
it
like
via
warf
or
whatever
to
to
the
server.
N
You
can't
control
the
priority,
then,
and
that's
something
that
we're
thinking
about
like
how
could
you
do
ingest
and
send
newer
media
for
older
media?
So
you
can
just
starve
and
eventually
drop
the
only
media.
I: Alan Frindell. Yeah — just responding to what Bernard had said when he was up last time: I'm thinking about the same scenario that Luke is talking about, but it's also probably not going to be in the early scoping of MoQ, so maybe not so urgent. But I just got the sense that there were people in the room who have very-low-latency use cases for sending video, and who probably weren't going to be happy with, like, the default QUIC congestion control that the browser was providing.
J
To
follow
up
on
what
alan
said,
alan
I'd
be
very
interested
if,
in
particular,
if
you're
encountering
any
issues
with
bbrv
one
in
your
video
upload
scenario,
I
don't
know
if
you,
I
don't
know.
If
your
quick
implementation
you
were
using,
did
the
rv1
or
I
don't
think
that
same
problem
would
occur
with
new
reno,
but.
I: Sorry, Luke, I just jumped the queue to respond to Bernard: I just want to be clear that I personally — and I'm not sure that Meta, either — have any code that does this right now. It was just based on sort of the comments and discussion that happened on Wednesday. So if I have running code and I have problems, I'll let you know. Okay.
N
I
I've
implemented
bbr
v1
there's.
I
also
tried
v2,
but
there's
some
bugs.
I
haven't
fixed
those.
Yet
most
of
our
concerns
with
it
are
theoretical.
At
this
point,
we're
not
really
pushing
latency
low
enough
that
the
probe
rtt
phase
is
an
issue
and
you're
right
like
reno
and
cubic.
They
don't
really
have
the
same
latency
concerns,
but
they
also
kind
of
congest
a
lot
easier.
N
I
think
eventually
it
would
be
nice
to
have
more
control
over
congestion
control
in
web
transport,
but
I
think
that's
probably
asking
a
lot.
I'll
be
honest.
I
think
that's
probably
asking
too
much
for
browsers
to
say,
like
let
me
control
the
fine
grained
when
every
package
should
be
sent,
but
I
think
that's
up
for
discussion
and
certainly
a
w3c
thing:
how
much
control
should
an
application
have
over
when
individual
packets
are
sent.
P
Which
is
good,
I
mean
I.
I
think
that
maybe
the
level
that
you
might
want
to
consider
at
some
point
is
that
there's
some
sort
of
hints
that
can
be
passed
on
down
about
the
congestion
control.
That's
used
at
the
quick
layer,
but
we're
probably
not
anywhere
near
that
point
yet,
but
it
probably
at
some
point
we
want
to
do.
P
We
probably
will
be
at
that
point
and
we
might
want
to
think
about
whether
you're
going
to
directly
control
what
what
quick
uses
or
whether
you're
just
going
to
pass
down
some
hints
about
the
attributes
of
what
you
wish
it.
What
about
your
desired
performance-
and
I
I
don't
have-
I-
don't-
really
care
as
long
as
there's
some
way
to
influence
it.
J
Just
to
respond
to
cullen,
that's
kind
of
what's
been
proposed
in
the
api.
It
would
be
some
hint
in
the
constructor
that
would
tell
you
what
kind
of
cc
you
want.
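A sketch of that constructor hint as it has been proposed on the W3C side (the option values and final shape are subject to the spec; URL illustrative):

```ts
// Express a congestion-control preference at construction time; the user
// agent may honor or ignore the hint.
const wt = new WebTransport("https://example.com/wt", {
  congestionControl: "low-latency", // or "throughput" / "default"
});
```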
D: That was a nice screen-frame freeze, Victor.
K: My idea — the simplest one — was to have an enum which would let you switch between something like BBR and something like, I don't know, GoogCC or any other real-time-appropriate algorithm. But it is not entirely infeasible to hand off congestion control to JavaScript: I've seen at least one research project — I don't know if they ever published a paper, but they had a working prototype — that moved congestion control out of the Linux kernel into user space, and that was roughly cross-process, out-of-process congestion control.
J: I think, you know, Victor, there are some people working on SCReAM, so I think they've had some results that suggest that something like GoogCC or SCReAM might be worthwhile. But there's also the issue — particularly if you're doing weird things like pooling — of what would happen there. It may not be that simple.
D: Unless folks have other things — and we're getting close to time — I think we're going to wrap this up. Thanks, everyone, for coming; thanks to the presenters for taking the time to make slides and to present. I think we had a very productive session today: we got closer to resolving quite a few issues, and we're starting a brand-new design team to address a thorny one. I think we're making good progress toward getting these documents close to something very stable, which is great.

D: Thanks, everyone, for that; thanks, everyone, for always making sure this working group is a fun and polite, pleasant place to work. So thanks, and we'll see you on the mailing list — and potentially at an interim, assuming the design team reaches something in the near future.