From YouTube: IETF115-WEBTRANS-20221110-1300
Description
WEBTRANS meeting session at IETF115
2022/11/10 1300
https://datatracker.ietf.org/meeting/115/proceedings/
A
There we go. Most of you have probably seen these. Especially if you're here virtually, it means you've figured out how to join virtually. But yeah, especially if you're virtual, please use the directional microphone or headphones; otherwise we get bad echo.
A
Well, in particular: we, the IETF, have an IPR policy, so any contribution you make, be it on chat, at the microphone, or on the mailing list, triggers your responsibility with regard to patents, which you need to disclose if you have any. So if you're not aware of this, please read it before making any contribution.
A
If you see anything that might amount to harassment or other problems, please contact the chairs, the ombudsteam, or anyone in leadership; we're all happy to help resolve these situations.
A
As a reminder, this IETF has roughly the same mask policy as last time, so please keep your mask on most of the time. You can remove it briefly to eat or drink, but that is not an excuse to keep it off the entire time.
A
I really need to cue the Jeopardy music for this one. I'm looking at some newcomers: that's a great way to help out, if you'd like. Not twisting anyone's arm, but it is always much appreciated by the community. The notes really don't need to be perfect; if you look at those from any other meeting, it's just a rough idea, especially now that we have the YouTube video. So, is anyone willing to take notes? Please? Everyone's staring at their laptop instead of making eye contact? Wow, it's okay, we just won't!
A
Oh
thank
you
very
much
appreciate
it
if
you
can
take
them
in
the
notes.itf.org,
there's
a
link
from
the
agenda
or
from
the
slides
and
anyone
else.
If
you
want
to
jump
in
and
help
that's
much
appreciated.
Thank
you
and
please
put
your
name
at
the
top,
so
I
can
probably
thank
you
in
the
email
right.
This
is
our
agenda,
as
per
is
often
the
case
here
at
web
transport.
A
We're
going
to
do
an
update
on
what's
going
on
at
the
w3c,
then
discuss
the
capsule
design
team
and
its
impact
on
H2
and
H3
and
other
H2
and
H3
open
issues.
Then
we
have
a
specific
topic
on
reliably
resetting
streams
from
Martin
and
then
we'll
see
if
we
need
any
hums
and
wrap
up.
Would
anyone
like
to
bash
this
agenda
foreign.
A
With this: Will?
C
Yes, let me just confirm the mic's working all right. Yes, it is, okay, thank you. Happy middle of the night from California. My name is Will Law from Akamai; I co-chair the W3C group along with Jan-Ivar from Mozilla. We have three quick slides of updates and then a slide with four questions for this group, which I think will hopefully be the bulk of our time. So, just an update: since we last presented on July 26th, we've updated our working draft to the latest revision; that happened on October 11th.

The charter is in the midst of being extended through to December 31st, 2023. We're actually just past our charter limit currently, but we're in process, so we're fine from a W3C perspective. Our timetable for the year: we're really trying to move to Candidate Recommendation, which requires stability in the API. And then Proposed Recommendation, the next step, requires two independent implementations. Google have implemented in Chrome; Mozilla are close on Firefox, so we should reach there, hopefully within Q1. Our goal is to get to publication in the first half of next year.

Milestone status: we have a Candidate Recommendation; that's the one we track closest. It has 10 issues (actually, I should have updated the slide here): eight open and five ready for PR. Next slide, please.
C
So, just some updates for you. I won't read through all those texts in detail; I think the one most meaningful for this group is that we added a new congestion-control constructor argument. It's an enum with three values: default, throughput, and low-latency. You give the user agent an indication of what type of congestion control you would like it to invoke on your behalf for the WebTransport connection you're instantiating.
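As a sketch of what that constructor argument looks like from JavaScript: the option name `congestionControl` and its three enum values follow the W3C draft being discussed, while the URL and the small validation helper below are hypothetical illustrations, not part of the spec.

```javascript
// Hypothetical helper: validate the hint before handing it to the
// constructor. The three enum values are the ones named in the session.
function congestionControlHint(preference) {
  const allowed = ["default", "throughput", "low-latency"];
  return allowed.includes(preference) ? preference : "default";
}

// In a browser that implements the draft API, the hint is supplied at
// construction time and cannot be changed later:
//   const wt = new WebTransport("https://example.com:4433/echo", {
//     congestionControl: congestionControlHint("low-latency"),
//   });
```

Note that the hint is advisory only: the user agent may invoke a different congestion controller than the one requested.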
C
A couple of editorial changes as well. We also want to thank Bernard Aboba for contributing sample code, both for a simple echo server and for an echo server with pub/sub on video; it's really been instrumental in moving forward some of the early work around WebTransport on the web.
D
Jonathan Lennox. Is that decision about whether low-latency can be done made before the connection is established? I'm concerned about the case where you have a congestion control algorithm that wants something; I'm thinking in particular of, say, the timestamp extension. I can implement Google congestion control if I have timestamps from the other side, but I can't if I don't, which means you wouldn't know until after the connection is established whether you got the low-latency congestion control or not. Is that something you...
A
Yeah, I can answer that one. This is part of the constructor in JavaScript, so, at least for now in Chrome, we don't pool multiple WebTransport sessions onto different connections, so you would always have this information when creating the connection. And even if we do implement pooling, you could totally say: oh, this connection isn't valid for this constructor, we'll create a new one. So I think we're covered, because this is on the constructor; it's not something you can change later.
C
You can't query it immediately, right? You need to wait for the connection to be ready before you can then query whether it was able to comply with your congestion control request, so the code wouldn't literally be what you see on the screen right there. Bernard is next in the queue.
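The "construct first, then wait for readiness before querying" flow described here might be sketched like this. This is a sketch only: the transport factory is injected so the flow can be shown outside a browser, and the readback of the applied congestion control is left as a comment because, as the speakers note, that API surface is not settled.

```javascript
// Sketch: the congestionControl hint goes in at construction time, and
// nothing meaningful can be queried until the `ready` promise resolves.
async function connectLowLatency(url, makeTransport) {
  const wt = makeTransport(url, { congestionControl: "low-latency" });
  await wt.ready; // querying before this resolves is not meaningful
  // Only at this point could the application check whether the user agent
  // honored the low-latency hint (the exact readback API is undecided).
  return wt;
}

// Browser usage (assuming WebTransport support) would look like:
//   const wt = await connectLowLatency("https://example.com:4433/echo",
//                                      (u, o) => new WebTransport(u, o));
```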
B
Yeah, I think Jonathan's question may also have been about whether WebTransport requires some of these timestamp options, and the answer is no. I don't know if Jonathan wants to come back and clarify that, but anyway, this is just about the congestion control algorithm that would be implemented on the WebTransport client; it doesn't influence what's on the server or anything like that.
C
So we have two main issues of debate. The first remains around prioritization. We have three issues up there, and over the last few months we've proposed various ever-more-complicated API constructs, around flows of short-lived streams, for example, and weighting schemes. The latest issue comes from myself, which is: as a baseline, maybe we should consider simply doing the minimum necessary to support MoQ (media over QUIC), which has a relatively simple requirement for ever-escalating priorities.

So we have not resolved this issue; it's an issue of debate, but that's on the books. The second remaining large issue is the stats surface. We've had prior discussion around packet arrival and departure times, latest RTT, and ECN information. Peter Thatcher has put forward a PR that resolves that to a single property, basically expressing available throughput, and there's also debate over what the time base for that value is.
F
Yeah, no worries. Alan Frindell from Meta, and priority enthusiast, as is my co-chair Mark. It's very early days, and if you were in our MoQ session this morning, you know there's still a lot of debate about what's going on with priority.
A
A related question, or challenge point: would it be possible to ship a first version of WebTransport without priorities and then add them later, or do you think it's important for the first version?
C
We have some difference of opinion. I personally think it's important for the first version, because it seems necessary to implement one of our strongest use cases, which is the web conferencing use case: in other words, a client sending real-time audio and video to a server. It's working well today in good network conditions, but it doesn't work well under poor network conditions, so we feel that we need some type of prioritization scheme to be able to make that happen.
B
Yeah, I just wanted to clarify here: priority isn't free, in that when you do that and have this strict sending, you lose concurrency. And at least for the conferencing case, I don't agree that it's required; in fact, when I've done the samples, it can actually be harmful.
C
Bernard, would you be comfortable leaving it out of the first version completely, as opposed to something generic and extensible that might accommodate future requirements?
B
I would say the goal should be what Alan said: figuring out a way of making sure it's not precluded, let me put it that way. And also, there can be some experiments. I think you just have to be very careful, because priority and head-of-line blocking go hand in hand: you can't both have priority and claim to get rid of head-of-line blocking, right, if I want strict priority of my frames.
C
So we can define something on the send side in user agents, but is there an underlying protocol-based send priority that it could be tied into consistently?
F
Alan Frindell. Yeah, I think Martin made this point a second ago in the chat: the WebTransport protocol, as it's being defined in the IETF, does not have a way to signal priority over the wire, and I don't think we plan to create one. The API consideration is very much about the JavaScript code being able to communicate to its local implementation how it would like things to be prioritized before they leave that endpoint.
C
I think we can take that answer as satisfying number two. Number three: can WebTransport require an L4S-friendly low-latency congestion control, such as Prague? This, I think, is probably what I was alluding to in the prior question, and Bernard may want to add more color here, but we feel this is necessary for a client to send video in a congested environment up to a server in real time.
A
David Schinazi, speaking as an individual contributor. So, L4S is not just a congestion controller. It's a congestion controller plus a marking system using ECN plus an AQM algorithm.

So for it to work properly, you need your bottleneck link to implement some specific kind of smart queuing. The short of it is that you have a shorter queue that L4S traffic gets into, but it promises not to grow the queue. There are a lot of folks working on those, and we saw a lot of progress at the hackathon, but they're not deployed yet. And most importantly, WebTransport can't put requirements on anything that's not the endpoints, and additionally, yeah, on the server. So I don't think requiring it from WebTransport is feasible. Saying that being able to have this is great, but if you're on a network that doesn't do it, you're kind of stuck.
G
Eric Kinnear, Apple. I think, even if you take that in a slightly more narrow definition, on the endpoint and only on the endpoint: can we require, essentially, non-queue-building flows, regardless of what queuing mechanism is happening elsewhere in the network?

It's still a little bit unclear why we would actually want to be able to require that. I think this kind of comes back to number two, and I don't want to reopen that if we've got a nice "no" answer, but the sentiment that I'm feeling, which I think might be shared, is: we're not currently planning to signal priorities, and we're not currently planning to try to require that kind of congestion control, but that doesn't mean it could never happen.

It means that, right now, there is no clear use case for why you would need to signal those priorities, as opposed to having it be an endpoint-only thing. Similarly for question three: I'm not seeing a huge reason to require that everybody anywhere doing WebTransport can only do Prague or something similar, as opposed to simply making it available for an implementation that needs it.
B
Yeah; my question, I guess, was probably to you, Eric: have you seen implementations of QUIC that have Prague and support L4S?
G
Have I seen? Yes, I'm aware of at least one. But even if it's available, are we saying that you can't call it WebTransport if it doesn't offer Prague? Because when we were talking about "hey, for congestion control I want to request low latency," that is very much a request, and even if you have Prague on your local endpoint, that does not necessarily mean that we would consider what you're getting to be low latency. So I'm a little bit hesitant to say "this is a thing," have everybody go off happy, and then be surprised down the line when that's not what they got.
B
Anyway, yeah: having this available to play with, to see if it addresses some of the low-latency use cases, would make sense, but that's probably something outside of this meeting.
C
But I think the feedback here is that the answer for three is no: it might be fun to experiment, but we cannot require it, and it cannot be relied upon to be present. Martin's next.
I
Hello. I think we've answered the priority questions adequately. I think there's a lot of space in here for signaling, but that will be very much application-specific, and really all that we can do at this layer is provide an API. That's, thankfully, not a problem the IETF has to concern itself with; there's a bunch of people who have some experience in that area who can take it to the W3C, and I think that's probably the cleanest way to manage it.

I think on point three: what we have, with the application expressing a preference for the way in which congestion control is managed, but leaving implementations (and by implementations I mean browsers and servers) to just compete on the quality of their congestion control algorithms, is probably the most sensible approach, rather than having a mandate for a very specific set of congestion control algorithms.
A
Thanks, Martin. Before we go to Christian: I'm going to cut the queue soon, so if you want to comment on this topic, please join the queue. Christian, you're next.
J
Yeah, I'm pretty much with Martin on this, certainly for congestion control. I have a fundamental issue there, which is that the congestion control is a property of the connection as a whole, and a WebTransport session is only using part of the connection; you might even have two WebTransport sessions on the same QUIC connection, and what if those...

Can we say that, oh, I see that they do support L4S, and that at least one of those is a client implementation? I mean, we have to analyze, when we do things like that, what the privacy impact is, and based on that I would say no to everything there. And as for the timestamp option, I'll be more than willing to get feedback and update it.
K
Hey, Luke from Twitch. Fortunately, Christian opened Pandora's box for me: pooling. On the previous slide you exposed some network stats, like the estimated bitrate, RTT, and a congestion controller; that doesn't really make sense when you're pooling. So it seems like at least the JavaScript API is assuming no pooling, or is it?
D
Jonathan Lennox. I said this in the chat, but I thought I'd repeat it in response to what Martin said: I think the point of the JavaScript-based congestion control is not to replace the congestion controller, but to provide additional congestion control, specifically to avoid queue-building behaviors, either in the network or in your local output buffers, to keep your latency low.

Even if you have a default congestion controller, I think that's the point of it. And I think there have been some early experiments, with things like RTP over QUIC, that have gotten reasonable results with that: basically avoid the queue, keep the queues low, keep your latency low, over a default congestion controller.
C
Yeah, it wasn't that people wanted to implement congestion control in JavaScript; it was to try to improve the delivery. And doing that on top of the existing congestion control that's happening under the base layer is very difficult. I think you're referencing the work that was done over BBR, if I remember correctly.
A
So there was someone in the queue between Jonathan and Peter, but I'm assuming you're not intending to be there. All right: Peter.
L
Gotcha. I was going to say roughly the same thing as Jonathan: if we added an API, which we don't currently have, but if we added one for number four, it would not be trusting the JavaScript with congestion control. It would just be allowing it to go lower than whatever built-in congestion control there is, if it wants to; it wouldn't necessarily be allowed to go higher. So that's in response to Martin's comment.

I think there might be potential for doing something like that, but it's something that needs to be explored, and I don't think the QUIC timestamp option is even mature enough yet to be something we can rely on. So while number four has potential, it's not mature enough, either the extension itself or seeing whether this whole idea can work, to be something we can mandate right now.
A
I was in the queue as an individual as well, but everyone already said it, so: plus one. As chair, my read of the situation is that I'm getting agreement in the room for "no" on all four points. I'm going to reopen the queue briefly in case someone wants to disagree, just so we can have that, but we're not going to run a formal consensus call, because I don't think you're asking for a full liaison here. Does anyone want to disagree?
G
I'd just like to say I think it's "no" on all points, but not a "no, and you should be sad for asking." It's more of a "no, and if there's a cool use case that really needs this, or if there is a place where you do need to signal priorities, that is an open thing." So I don't think this is a "no, we're uninterested in what you're trying to solve"; it's more of a "no, from what we're aware of, for what you're trying to solve."
A
Cheers. And Eric, let's chat about WebTransport over HTTP/2.
E
Let's talk about capsules. We've been doing this for a while: we got some capsules, there are some more capsules, we should do capsule stuff.
G
We've now removed basically all of the flow-control material we'd been talking about from the capsule design team pull request at this point, so we've ripped a bunch of stuff back out. We have left a very little bit of it in: we've left all of the stuff for H2, because none of our reordering issues occur in H2, since H2 is conveniently on top of this nice reliable, ordered protocol, and there's a little bit of base text for session-based flow control that is still in H3.

There are some settings changes that we'll talk about later; otherwise, this just converts everything over, taking our existing TLVs, calling them capsules, and listing them in this nice new list of capsules that we have. So this is now fairly minimal; please actually go read and review it. You've seen this diagram at three IETFs by now, and it has not changed, so let's land it and move on, so we can stop looking at this diagram.
A
Just very briefly, jumping in as chair: we did a consensus call on the list for the output of the design team that Eric just presented, and didn't get any response, because everyone who had been involved was mostly part of the team. So we're assuming consensus, and we're going to merge that. The call actually ended three days ago and I forgot to say so; unless you want to jump up and scream now, that's getting merged pretty much today, or whenever we get time.
G
Thank you, and thank you to the folks who did review; I know Martin gave multiple great rounds of reviews, so thank you for that. Sweet; thank you, chairs. All right, yes: negotiating WebTransport. We talked a little bit about this at the previous IETF.

Some implementation experience said that's actually really, really painful to implement, because it means you have to teach your QUIC datagram implementation about WebTransport, about the extended CONNECT protocol, and everything in between. That ends up being mildly painful, especially because some of the settings don't show up until after you need to have looked at the transport parameters anyway; so when you suddenly start getting in datagrams, you're like, "hey, I'm annoyed now." So these are likely to become totally separate things; there's a slide and an issue for that later.

One piece of update from the last time we talked about this: we had said that we wanted to switch completely from SETTINGS_ENABLE_WEBTRANSPORT over to SETTINGS_WEBTRANSPORT_MAX_SESSIONS, where you just set it to a non-zero value.

So this is a place where it would be nice to have input from the folks in this room. Should both sides just continue to send the enable setting, because settings are cheap and it's not that hard, and then the server sends back max sessions? Or should we try to put in some fancy text that says the client just always sets it to one, or something like that? Or try to do some weird definition of having zero not mean zero anymore?
I
Just, you know, zero and one, I think, is most sensible. I don't think there's any need or value in the client saying a hundred; I just can't imagine what client would know how many sessions it would create at the time it makes the connection. So yes: true or false, which is a bit weird, but better than having... ooh, two seconds.
I
So settings cost the same either way, so this is just double the cost, and we're going to be sending them on every connection. Every connection we make to any web server will have this setting in it; if you make us send two, we're going to be spending all of those extra bytes, and we do kind of care about those bytes. So one is best. Thank you.
F
Alan Frindell. Since WebTransport sessions are client-initiated, aside from the use case Victor just mentioned (which is a strange way to version the protocol anyway, but okay): what if the client didn't have to announce support via a setting? The server says "I can handle one or more WebTransport sessions," and then the client sends it one, and it's like: oh, the client supports WebTransport. Otherwise, what will you do, other than the client saying "I don't support it" and then sending you a WebTransport session?
N
Hey, Lucas Pardue, Cloudflare. I like Alan's suggestion just there. The comment I was going to make was about defaults: I didn't hear anything about defaults, and maybe it's in the spec and I didn't read it, but what would the default be? Off, I suppose, and...
D
Yeah, Jonathan. I mean, I'm not sure why you need to have that; what about having just the client send the enable setting, meaning "I can create my WebTransport sessions, but don't create them to me"? Basically, give it the same semantics as max sessions zero, except it means "max sessions zero, but I do WebTransport." So this can be asymmetric: you can have enable meaning not actually "enable," but "I'm willing to create them, I'm able to create them."

I mean, I guess the question is: if your implementation is such that the WebTransport stuff and the HTTP/3 stuff are different parts of your stack, and you need to transfer the socket over to a different part of the handler, a different part of your code, or even a different process or something, that would be the reason, I would think. But right.
I
That was what I was up here to ask; David was ahead of me, but now... yeah, never mind.

Does anyone need to know? And I think I can probably answer that question. Does any server need to know whether a client does WebTransport? I think there may be cases where servers need to know, but in those scenarios you can put the server on a different hostname, and the client will be able to connect to it and do the magic stuff that way, rather than looking at other things. I like Alan's suggestion.
A
David Schinazi. I was initially going to say that the client should send it, because this modifies HTTP semantics and it tells the server that, yes, you can send server-initiated bidirectional streams, and that's how we generally do that in HTTP. But then I remembered that I really don't care; either way works, and Victor's versioning thing from the client can be done as a header as well.
O
I will be very brief and just say it does not seem like there's any actual information here that the server needs to receive from the client. The client needs to receive the number of sessions, and anything other than zero implies everything the client needs to know. So let's just have one setting, sent in one direction, and be done.
K
All right, Luke. Fringe use case, but worth saying: my server is only WebTransport; I don't want to serve HTTP/3. So if QUIC connects to me and it doesn't support WebTransport, I want to close the connection so it's not just wasting resources. But it's kind of fringe.
N
Hey, Lucas Pardue, just a quick one. What was I going to say? Damn. Oh: quite often we think of settings as the only way to negotiate a semantic change in HTTP/2 or HTTP/3. The spec doesn't require that; it can be whatever else we would come up with. So that would potentially fit this as well.
A
Thanks. As chair, the sense from the room that I'm getting is that there are some opinions, but no one feels overly strongly, and there seems to be a sense of the room for one setting only, sent from the server. Can anyone not live with that?
M
As I said, the versioning thing cannot be done via the server-only setting, because you have to understand which version you speak. So, for instance, between draft versions 2 and 4 we decided that, on bidirectional streams, we require the WebTransport session frame to be at the very front, and that would be a breaking wire change; the client and server have to have an agreement on which version is actively being used.
G
So I think we have two concrete options here. Option one is that the server sends SETTINGS_WEBTRANSPORT_MAX_SESSIONS; it can encode a version in the type of the setting that it sends there, if you like, and when the client sends extended CONNECT, you can always mint a draft-specific protocol token, just like we did with ALPN for H3 and things like that. And that arrives with the extended CONNECT: there's no asynchrony happening there, there's no reordering possible.

So, option one: the server sends SETTINGS_WEBTRANSPORT_MAX_SESSIONS with some non-zero value, the type of that setting encodes your version, and when the client sends extended CONNECT, you can always mint a new protocol entry. Right now we just call it "webtransport," but you could always call it "webtransport-02," or however you want to signal support for different draft versions of things, just like we did with ALPN for H3.

The second option is that SETTINGS_WEBTRANSPORT_MAX_SESSIONS is sent by both sides. You've encoded the version in the type of that setting, and we define one value for the client to be a sentinel that means "I'm not accepting any WebTransport sessions at all, but I do support WebTransport," and you can then infer the version from the type of the setting that was sent.
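Option one's client-side logic can be sketched as follows. This is a sketch under assumptions: the setting identifier below is a placeholder, not a registered codepoint, and the point is only that the client infers both support and (via versioned setting IDs) the draft version from what the server sent.

```javascript
// Placeholder identifier for a draft-versioned
// SETTINGS_WEBTRANSPORT_MAX_SESSIONS setting; not the registered codepoint.
const SETTINGS_WT_MAX_SESSIONS_DRAFT = 0x2b60;

// Given the decoded SETTINGS frame (a Map of id -> value), decide whether
// the server supports WebTransport and how many sessions it will accept.
function serverWebTransport(settings) {
  const max = settings.get(SETTINGS_WT_MAX_SESSIONS_DRAFT) ?? 0;
  return { supported: max > 0, maxSessions: max };
}
```

The client would then indicate which version it is exercising in the extended CONNECT itself, for example by minting a draft-specific protocol token as described above.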
I
You know, I just realized that with the first design the server can advertise support for multiple versions, and clients can exercise whichever one they choose simply by indicating the version in the header field. The challenge with the other design is that if the client can support multiple versions, you don't have any version negotiation available: if the client says "oh, I support draft 15 and draft 16," then when it exercises that option later, there's no way of knowing which one it is concretely using.
A
Yeah; so, given all the conversation: Victor, can you live with this, or should we take this to the list?
M
I think we should take it to the list. I can imagine us living with this long-term, as in when we ship the final version, with the latter client-side setting.
A
Yep. And please, Victor, Martin: keep the conversation going in the chat. If you can resolve it there, great; otherwise we can do it on the list. Let's not face-plant on the settings; this is the silliest part of the protocol. Oh, for next time... no, close the slide; they've added a button where I can reclaim control now. Yeah, no worries. Anyway, cool. Then, oh right, no: Victor's not going to continue the conversation; he's going to be up here presenting.

Let me share the pre-loaded slides... share. Yes, you can send multiple settings in H3. And Victor, can you request a slide transfer, or whatever that's called?
M
Okay, so I'm doing the presentation on the WebTransport over HTTP/2 and HTTP/3 open issues. There are plenty of those, but most of them are actually addressed either by the previous presentation (i.e., the design team output) or by the presentation that Martin will make later. So, first of all, the update on what actually got merged, which is one issue: we have clarification text on when exactly you can open new streams and datagrams, and the client can basically open them as soon as it sends the extended CONNECT.

The setting currently tells you that all depending features are enabled, and the proposal is that it should no longer do that; instead, we explicitly require you to opt into supporting (a) capsules, (b) QUIC datagrams, (c) HTTP datagrams, and (d) extended CONNECT, and you need all of those explicitly negotiated in order for WebTransport to work. I believe this is a correct proposal and we should adopt it.
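The proposed explicit opt-in amounts to a simple conjunction over the four prerequisites. A minimal sketch, where the field names are hypothetical placeholders for whatever an implementation tracks after the handshake:

```javascript
// All four prerequisites must have been negotiated with the peer before
// WebTransport is considered usable on the connection.
function webTransportNegotiated(peer) {
  return Boolean(
    peer.capsules &&       // (a) capsule protocol support
    peer.quicDatagrams &&  // (b) QUIC datagrams (transport parameter)
    peer.httpDatagrams &&  // (c) HTTP datagrams
    peer.extendedConnect   // (d) extended CONNECT
  );
}
```

Missing any one of the four means the WebTransport handshake fails.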
F
Alan Frindell. So I actually got a WebTransport implementation working this week with Chrome, and it was a huge pain to figure out why Chrome did not like my server for a long time. I think the thing I don't like about this proposal is that there are four different ways you can screw up and fail your handshake, and it might be easier if there was just, like, "yes, I want to do WebTransport, damn it."
M
Alan, my answer to this is: please file a bug for better developer tooling for Chrome, because this is definitely something that has frustrated me in the past. I don't think we can get rid of all of these issues, so we definitely need more informative tooling to tell you why your server got rejected; it's just a matter of writing the code.
F
Just to reply: I think maybe it makes sense that QUIC datagrams are a totally different layer, and I didn't like having it for HTTP datagrams. I think extended CONNECT was kind of the one that felt like "why do I have to opt into this?" But anyway, sorry.
D
Jonathan Lennox. I think listing them all explicitly is probably a good idea, because otherwise, in five years, we're going to be in a situation of "okay, if you want these 15 features, this one implies these four, but not these other three," and it'll be a complete mess, rather than just listing them all from the start. That said, I feel like there should be clear direction on what to do if somebody gives you a nonsensical response, like, you know, "I support WebTransport but not extended CONNECT," or "HTTP datagrams but not QUIC datagrams." There should be a clear indication that if somebody sends something nonsensical, you just tear it down and consider it a complete failure; don't try to continue.
O
N
Hello, Lucas Pardue. Yeah, sorry, I just got invoked to say the devtooling is really awful at all of these layers. I'm very pessimistic about it getting better; I want to put energy into it and try to work with the community to do it, but I wouldn't hold my breath on that happening.
N
It's going to make WebTransport interop difficult for us. The best signal we can get is probably seeing a 400 or 500 response to a CONNECT-UDP coming back, but anything else is going to be super tricky. We should do a lot better.
A
So now, speaking as chair: Alan, you sound in the rough. Can you live with this?
P
A
Cool, I'll repeat this into the mic for anyone remote: Alan says he's fine with it. So I think we have rough consensus here. Thanks for that, Alan; appreciate it.
M
All right, let's move on to the next issue, which is a bit more complicated: what do we do with HTTP redirects? During the last meeting we all agreed that we should definitely have either "must support redirects" or "must not support redirects," because anything else is just a highly unpleasant developer experience, and the general consensus was leaning towards supporting them and requiring their support. So I went and asked Adam Rice, who was the reason we originally did not support those.
M
I asked what the pitfalls with that are, and he made a reply, which you can read on the issue tracker, about one of the potential attacks on redirects. The more I think about it, the more I come to the conclusion that redirects have a lot of really unpleasant semantic edge cases, and a lot of them revolve around the fact that, in order to send a redirect to a WebTransport resource, you need to use a connection that supports WebTransport.
M
But a redirect is something that a server can reply with whether it supports WebTransport or not. So what happens if you get a redirect for a WebTransport resource and it redirects you somewhere that does not support WebTransport? Should we handle redirects and attempt to fetch the redirect target in cases when we are on a connection that does not support WebTransport in the first place? And then the second issue:
M
We currently, technically, allow the client to start sending streams before getting the reply from the server, and this has the obvious problem of: okay, we sent some data on the streams to the server, and now we got redirected. What happens to those streams? That is the second problem we have to deal with. So my personal current inclination is that we should not support redirects, because otherwise we will have to deal with all of those, and I would like to know what people in the room think about this. Martin?
I
That's just a failure condition; just another one of the things on the checklist that Alan was talking about before that's difficult to get right, but you only really have to get it right once. Idempotency is interesting, which sort of led me to ask the question: you're connecting, and what are you sending on this thing, stream data? What's the stream limit on the other end? Do we have a default? Can we guarantee that you'll have a bidirectional stream to create and send on? You don't know, especially not in the H2 case; maybe in the H3 case, because you're just taking from a global pool that you already know the size of. But I'm tending towards the conclusion here that this is difficult, and I'm not sure what to do with it.
I
If it's just stuff that's sent in the payload of the request, then it's relatively straightforward, and I think that's all you're really allowed to do before you get confirmation that something is done, so maybe just replay it. Is that right? Eric's nodding. I think if we make the semantics of a 3xx response be that none of the payload of this request was processed, then if you're doing a CONNECT and you get a redirect, you can just take those bytes and send them on the next one. It'll be fine. If you have anything other than that, I think this is really, really difficult. I'm not actually sure how we're going to do this for H2, the more I think about it. You can do some bookkeeping, but you won't be able to do any stream data, because our default stream limits are zero.
H
Hey, Alex Chernyakhovsky, Google. I'm reminded of some of the zero-cost extensibility questions that we had in the MASQUE working group, and I feel like this is sort of similar. I seem to recall we had a similar discussion about what happens if you want to opportunistically send a capsule that may not be supported on a server, and that sort of feels like the same thing here with the API issues question.
H
So if we want to support redirects, I feel like the answer here, if you're definitely going with sending data over datagrams, is that you just have to assume that anything you sent prior to getting a confirmation that the Extended CONNECT succeeded might have to be retransmitted, just as we did with the underlying MASQUE stuff. So in this case, I feel that what Martin was saying around idempotency is very clearly solved: you get the redirect, you assume all that data was not processed.
H
M
I
Yes. Victor, that's a fair question. This is the case where you attempt on one connection, you've got a nice fat flow-control window, and you send, say, a megabyte of stuff (not that you should be doing that, but let's say you manage it), you get a redirect, and then the next connection only has like 10K. What do you do with the extra stuff that you can't send? I think there's kind of an unavoidable gotcha in terms of redirect processing, in that the client has to remember everything it sent and then do whatever is necessary to ensure that it gets sent again, which means...
M
We're talking about this right here, within the context of this one.
E
G
I would agree. At the same time, I do think that what happens after I'm redirected is not massively different from what I would have done if I went there originally. If the thing that I'm running requires me to have three streams open, and I would have gone to a server that says "no, you can only have two," this is the same problem, right? So the fact that I was able to pack my CONNECT with additional data that tries to open three: that's not really it.
G
M
At least for dedicated WebTransport, or for HTTP/3, you can open as many streams as the QUIC transport settings allow you. The reason I say opening streams specifically is that buffering data is something you can do transparently, but for opening streams our API explicitly gives you a promise that is not resolved until the stream is actually opened.
G
M
No, because as far as I remember you can, but you learn the server's initial max open streams before you get the server's reply.
G
M
I don't like the dedicated-connection thing, but that's maybe something we have to grapple with. Maybe we need a new setting for that, because in the pooled scenario maybe you don't want clients just opening up new WebTransport streams for sessions that haven't been approved yet, while in the dedicated-connection case, well, maybe it's okay to do that. So that's something I think we're going to have to think about a little bit more carefully, because it changes the disposition toward this particular question quite a bit.
G
What
I
was
going
to
say
yeah,
so
this
is
a.
This
is
essentially
the
box
of
things
that
you
open
up
when
you
start
trying
to
do
that
kind
of
flow
control,
and
things
like
that,
so
I
think
I
mean
I'm
totally
game
to
go.
Try
to
figure
that
out
offline,
but
I,
don't
think
we're
going
to
answer
it
in
the
next
half
an
hour.
Yeah.
A
Just speaking as chair here: I think the output of the design team that we declared consensus on was, for the flow-control part, "oh right, we punted on it." That's true; we didn't reach consensus on it. So that's still a nice landmine that we haven't decided how we want to step on.
M
I'm not sure about flow control specifically, but I think it's better to take it offline, because it's very clear that there are unsolved design issues here, and we should either solve them, or decide that they're not worth solving or should not be solved. It doesn't sound like we're arriving at a new conclusion at this meeting, so I suggest we move to the next one.
A
M
Okay, so the third issue, which is actually three issues that talk about roughly the same topic, but different aspects of it (or maybe the same aspect of it). For unidirectional streams in WebTransport we just have a unidirectional stream type: we use that, we put in the session ID, and it just works. For bidirectional streams there are no stream types in HTTP/3, so we made a special frame that just says everything else on this stream is WebTransport, and we can make that frame...
M
...legal, because we use a setting to negotiate WebTransport support, meaning that we can alter the protocol. And then the question was: well, can you put anything before that? During the last IETF meeting we roughly agreed that the answer to that question is no, because we want to have consistency between what we do with bidirectional streams and unidirectional streams. So I think we have agreement on that. And Lucas, please correct me, because I am trying to vaguely restate what you said in the issue:
M
The issue, slash pull request, suggests that instead of doing that we should just define bidirectional stream types as an extension to HTTP/3, and there is a pull request to do this. I will let Lucas advocate for it, because my current opinion is that it has roughly the same effects on the wire, so I don't think it is particularly worth it. But Lucas, please.
A
My understanding, and correct me if I'm wrong, is this doesn't change the wire format, in the sense that the bidirectional stream will always start with a varint that is an identifier, and that identifier will either be in a stream-type IANA registry or in the frame-type IANA registry. What we're debating is which registry, but it doesn't change what we're actually sending. Is that correct? I'm asking you, Lucas. Okay.
N
If, instead, you say that when we're using WebTransport, or an extension of this type, there is a way to convert the semantics of HTTP/3 such that bidi streams no longer become request streams, then this is a way to do that. The PR is in WebTransport; I think it goes into a section of the document that's like...
N
...maybe we don't want to do this in WebTransport, and maybe this is something we take to the HTTP working group and say: we designed HTTP/3 for this use case, we're trying to do other things with it, and this is an approach. There were some good comments that you made on that PR, and some stuff has shifted, but I haven't had the time to go back and comment on them. It's not something I want to give up on yet, because I haven't had the time to revisit it.
N
If
everyone
hates
it,
that's
okay,
but
I
haven't
I,
don't
know
I,
just
don't
like
the
current
design
is
my
issue
and
if
I'm
in
the
rough,
then
so
be
it,
but
I,
don't
think
we've
had
enough
discussion
around
this.
Yet.
N
I,
don't
believe
it's
a
wide
difference
is
a
very
nuanced
kind
of
bike,
shed
it's
a
tiny.
It's
a
Kitty
bike
chat
yeah
like
I've,
not
I'm,
just
not
in
the
right
state
to
for
phrase
that
Nuance
correctly
sorry.
A
I
I
asked
chair,
I'll
jump
in
and
try
to
phrase
the
nuance
and
please
jump
in
and
correct
me
Lucas
if
you
think
I'm
doing
it
wrong
so
in
HTTP,
3
well,
quick
has
server,
initiated
bi-directional
streams
in
HTTP.
It
says
You
must
not
send
them
and
if
you
receive
them
explode,
so
we
know
no
one's
sending
them.
But
now
we
have
magical
super
duper
setting.
So
we
know
that
we're
in
a
different
mode
and
that
setting
tells
you
what
you're
allowed
to
send
on
this
stream
and
how
you're
supposed
to
parse
it.
A
So
we
get
to
decide
and
we
have
two
options
here.
One
is
you
send,
so
you
send
frames
on
HTTP
streams.
A
Sorry
on
survey
issue
bi-directional
streams
in
HTTP:
three,
that
I
have
the
white
transport
setting
that's
a
mouthful
and
the
other
is
you
send
a
stream
type
and
I
mean
as
I'm
saying
this
out
loud
I'm,
realizing
that
this
might
be
outside
the
purview
of
this
working
group,
but
I
think
that's
the
color
of
the
bike
shed
is:
do
we
want
it
to
be
frame
types
and
have
the
the
award
that
Luca
described
where
we
have
a
frame
type
that
doesn't
have
a
length
or
do
we
want
stream
types,
I
hope,
I,
didn't,
say
them
completely
backwards,
every
time,
all
right!
A
N
Yeah, so thanks for the summary; it's reminding me of some stuff, which is good. The other problem I had with this is: say you're building a stream parser like this, and say it's a client request stream (this is the one we're concerned about, not the server one). On a client request stream you have frames, and if you don't understand extension frames, you ignore them. What we're saying is we want to change some of that machinery, such that when you receive this frame... the property that we want is that WebTransport starts and converts the stream immediately into the mode that it needs. But by using a frame,
N
What
you
end
up
with
is
effectively
an
infinite
amount
of
stuff
that
could
appear
before
that
web
transport
stream
frame,
and
that
just
seems
completely
pointless
and
so
you're
making,
like
another
exception
for
behavior,
for
web
transport
streams,
but
I,
don't
think
we
need
I
think
we
should
just
say
if
you're
using
web
transport
and
you
get
the
stream
open
and
it
has
to
start
with
this
byte.
Otherwise,
there's
something
Focus
going
on.
It's
not
it's
no
longer
a
framed
stream,
it's
something
else
which
is
the
property
that
we
want.
If
I
understand.
I
Yeah, so I'm not all that enthusiastic about the way in which this is working out, because we have bidirectional and unidirectional streams, and the client-initiated bidirectional streams have to have frames. In order for us to do this, we have to choose a thing that sits at the front of that stream that looks like a frame, or at least uses a number from that space.
I
We have to register a frame type in order to avoid colliding there, right? So I don't want to have us fix the asymmetry between unidirectional streams and bidirectional streams only to create this problem with the others. I think we're probably in a situation where unidirectional versus bidirectional is the cleave that we're looking for, and then I think we have frames.
I
That's all fine. We can also define a rule that says this has to be the first frame on that stream, if we want to go that way; we've done that for other things, and I think that's true of SETTINGS in H3 already.
I
What I don't want to have happen is that we lose the ability to distinguish our streams from their streams when they share the connection, so I need a handle there. I would prefer to use the frame parser to do frame parsing on the bidirectional streams, which potentially means wasting a byte for a zero length. I think zero length is easier than whatever other options we have for that field, because most frame parsers will be type, length, something, and knowing that we have a type, a length of zero, and nothing following it makes those stream parsers very much easier. Well, we've got a session ID as well, of course. So I'm actually kind of okay with the current design, with maybe a few tweaks.
F
The magic words that summoned people... okay, I shouldn't have done that. Alan Frindell. Okay, so I've implemented this as a frame, and I've put it in my frame parser, and it's gross, because it has to be the first frame, it doesn't have a length, and it completely changes all frame parsing after that. So if you're implementing this, I don't think you actually want to implement it inside your frame parser. I think what you want to do is treat it like a stream type.
F
So
before
you
pick
before
you
even
instantiate,
your
parser
you're
peaking
at
the
first
bite
and
you're
saying.
Is
this
one
of
those
special
things?
That's
not
really
frames
at
all
and
then
once
you
look
at
like
oh
look,
it's
one
of
the
web
transport
ones.
I
will
not
instantiate
my
frame,
parser
I
will
just
go.
Do
the
web
transport
thing
just
like
qpac
has
its
own
world
of
what
it
does.
That's
not
frames.
It's
something
completely
different.
F
That said, Martin is 100% right that you've got to register them in the frame space of H3, because otherwise somebody could create an H3 extension frame that could be on a client-initiated bidirectional stream, and that's not going to work. But I don't know how many people are going to actually implement this who haven't already started, so maybe it's guidance for people that don't exist yet. Still, having it described in the document as a stream type, and registered in the frame registry, might be the thing that gets the most people to write the correct code here, rather than going "oh, it's a frame, I'll put it in my frame parser" and then, a week later, "that was such a bad idea, what did I do?" So, okay, that's all.
A
Q
If I remember correctly, when we developed HTTP/3 we believed that bidirectional streams would only be used by H3 itself, and therefore we didn't need any extension points. But now we think that we have to use them for our own purposes, and then the question is: would there potentially be another protocol coexisting on the same connection that would also want to use bidirectional streams for other purposes? If we think about that possibility, I think we need a clear separation, a clear way of separating the type of the streams being used.
M
I kind of agree that you probably don't want to put this into your regular stream parser, if that's the only thing. Okay, our current implementation puts it into the stream parser, but that's because we implement the version of the draft where it's allowed to appear not at the beginning of the stream. I don't think that frames without a length are conceptually bad, because we had at least one other proposal for a frame without a length that made sense and that I found actually useful in the past.
M
A
Thanks, Victor. I'm David Schinazi, speaking as an individual contributor here. My first point is that I don't feel too strongly here; I'd much rather we pick whatever it is, even if it's by a coin toss, and make progress. But I think it's early enough that it's worth us discussing more on the list. Anyway, my personal opinion is that since we need to register it in the frame-type IANA registry no matter what, it makes sense to just keep it being a frame.
A
That's actually an interesting point, because a QUIC stream can only carry 2^62 - 1 bytes, and you've already burned eight bytes for the length (and however many for the type), so you know that you're not going to go off the end of it anyway; that effectively means taking over the whole stream.
A
So you could say they must send a length that covers all of this. And I'm seeing Martin make a "that's gross, go away" face. Anyway, it was just a thought; I'm just throwing the idea out there, not pushing it strongly. All right.
N
From some people here, it doesn't seem like I'm that much in the rough in wanting more of a stream-type model, and it doesn't seem like it needs to be WebTransport-specific either. I'm wondering if there's interest in trying to take this into just a small standalone I-D, to go back to the HTTP working group and say: look, there's at least one use case here that wants to repurpose client bidirectional streams, and here's a way you could do it; what you need to do is go and define a code point for your specific use case, and then WebTransport can define that code point in the IANA registry.
N
We don't need to call it anything, and therefore a frame parser would see that and say "there's nothing I can do with this." If I'm going to try to parse that as a frame, I need to crash, because if I try to read the length, which could be a humongous number, or garbage bytes after it, I could have serious problems; I'm concerned about security problems related to that kind of thing.
A
N
A
Okay, that makes sense. Speaking as chair now, to wrap this bit up: what I'm seeing is no clear consensus one way or the other, but actually a good conversation trying to figure this out. I'm not seeing people with disagreeing priorities or goals here, so this should be pretty straightforward to resolve.
A
Let's take it to the list and discuss it there. If we can't resolve it on the list, I might suggest a mini design team, or even just an interim, whatever; just an excuse for people to get in a room and spend a bit of time. I would normally say "oh, you should all grab lunch," but it's already Thursday afternoon. I see Lucas saying that anyone who wants can go grab a beer with him (or maybe he needs a glass of water), so going to talk to Lucas after the session is a good idea as well.
A
My gut feeling is that whatever we design in WebTransport will probably cover every extension that wants to use bidirectional streams in HTTP forever, because in practice, once you've set that setting, that's how it is. So I'm sure they're going to want to have a say, and I'll discuss it with them. My only concern is...
M
Oh, I think we're actually out of actual issues. There are some open pull requests that I would like people to look at, but I believe all of those already had consensus in some form or another. The special one is the first one: we've agreed that we want to do this, but we've never agreed on the specifics. Everyone is welcome to take a look, and unless there are any objections I will merge all of those. That's it for my slides; thanks, everyone.
A
L
A
Yeah, it's true: the system isn't really designed for what we're doing, which is that the entire slide deck is one thing. So just to confirm, folks: please review those PRs, and then we're going to merge them soon. Martin, do you want to come up for yours? There we go.
P
Okay, let's talk about stream resets again. This is different from what we talked about in QUIC. (Talk closer to the mic.) Okay: this is different from what we talked about in QUIC. Just a quick recap: when you reset a stream, you stop retransmitting your STREAM frames; you only transmit the RESET_STREAM frame reliably. On the receiver side, you usually report the stream reset error to the application directly, without waiting for any STREAM frames, since they are not delivered reliably anyway.
P
So this is how a general WebTransport-over-HTTP/3 setup would probably look in every stack. At the bottom you have your QUIC layer; the QUIC layer accepts streams and hands them to the HTTP/3 layer, and then HTTP/3 parses the first frame, or, as we've just discussed, the first varint, and decides what to do with that stream.
P
It
could
be
an
HTTP
request
stream
if
you
parse
an
HD,
an
H3
headers
frame.
Then
you
pass
this
past
the
stream
to
your
HTTP
Handler.
If
you
parse
the
frame
type
for
the
web
transport
stream,
then
you
put
you,
you
pass
your
your
stream
to
to
your
web
transport
stack,
which
then
might
pass
the
session
ID.
E
P
This one, yes. So the problem is: what does your HTTP/3 layer do when it receives a stream that is already reset, and the STREAM frame carrying the first byte was lost? We are now sitting there with this stream, and we don't know if it's an HTTP request stream or a WebTransport stream, and, if the latter, which session it belongs to. Next slide.
P
The
first
option
is
to
do
nothing
like
your.
What
do
you
do
in
in
H3?
You
receive
the
stream.
It's
reset.
You're,
like
the
client,
doesn't
want
the
stream
anymore.
You
immediately
set
a
reset
on
on
your
side
of
the
stream,
and
the
stream
is
gone,
so
we
could
say
we
do
the
same
when
you're,
when
you're
running
web
transport.
On
the
on
the
same
connection,
you
receive
that
stream
reset
it
it's
gone.
P
The
the
the
downside
of
this
is
that
application
protocols
that
are
built
on
top
of
web
transport
might
need
that
reset
as
a
signal
for
for
some
some
signal
at
the
at
the
application
layer.
So
if
we
decide
to
go
with
this
option,
we
limit
the
the
the
kind
of
protocols
that
can
be
layered
on
top
of
on
top
of
web
transport
next
slide.
P
The
second
option
is
to
to
use
a
a
reset.
Capsule,
we've
already
defined
a
bunch
of
capsules
and
every
web
transport
session.
Has
this
one
reliable
stream,
which
is
not
closed
until
the
the
web
print
board
session
is
closed.
So
we
could
use
this
that
that
that
stream
to
send
a
web
transform,
capsule
and
I
just
made
up
that
frame
format.
P
That's
not
defined
in
any
PR,
but
this
is
how
it
could
look
like
you
just
sent
this
capsule
on
on
the
on
the
on
the
request
stream
on
the
yeah
on
the
Control
stream,
and
you
know
that
it
will
be
desired
and
will
be
delivered
reliably.
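Since the presenter says the capsule format is made up and not in any PR, the following is equally hypothetical: one plausible encoding of such a reset capsule, carrying a stream ID and an application error code, in the usual capsule type-length-value shape. The capsule type value here is invented for illustration.

```python
# Hypothetical encoding of a made-up "stream reset" capsule: type, length,
# then (stream ID, application error code) as QUIC varints, sent reliably
# on the session's CONNECT/control stream. The type value is illustrative,
# not a registered capsule type.

def encode_varint(v: int) -> bytes:
    """Encode a QUIC variable-length integer (RFC 9000 style)."""
    for prefix, length in ((0b00, 1), (0b01, 2), (0b10, 4), (0b11, 8)):
        if v < 1 << (8 * length - 2):
            data = v.to_bytes(length, "big")
            return bytes([data[0] | (prefix << 6)]) + data[1:]
    raise ValueError("value too large for a varint")

RESET_CAPSULE_TYPE = 0x190B  # invented codepoint for the sketch

def reset_capsule(stream_id: int, error_code: int) -> bytes:
    payload = encode_varint(stream_id) + encode_varint(error_code)
    return encode_varint(RESET_CAPSULE_TYPE) + encode_varint(len(payload)) + payload
```

Because the capsule rides on the control stream, it inherits that stream's reliable, ordered delivery, which is exactly the property the option relies on.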
P
So
let's
go
back
to
our
our
general
stack
and
we
are
now
at
the
rgb3
layer
and
we
have
received
the
stream
that
is
reset
now.
What
do
we
do?
We
haven't
received
the
capsule
yet
so
this
could
be
an
HTTP
request
stream,
but
we
don't.
We
don't
know
that
yet
or
it
could
belong
to
a
web
transport
session.
So
we
now
need
to
wait
to
see
if
we
receive
this
capsule
on
one
of
our
web
transport
sessions.
P
If
we
don't
receive
that
capsule
after
a
certain
time,
then
we
can
conclude,
it
was
probably
an
HTTP
stream
and
then
HTTP
stream,
and
then
we
can
reset
it,
which
of
course
now
we
have
the
problem
like.
What's
what's
the
right
value
for
this
timer
and
there's
probably
no
good
answer,
because
the
your
Capital
could
have
been
lost
needs
to
be
retransmitted
could
could
have
been
blocked
by
flow
control
like
there's,
there's
just
no
good
value
for
for
this
timer
things
get
even
more
complicated.
P
If,
if
you
consider
that
there
might
be
a
second
web
transport
session
which
is
not
yet
established,
but
the
client
has
sent
the
connect
request
and
has
already
optimistically
opened
the
stream
for
that.
For
that,
for
that
session
is
now
canceling
that
session
by
by
resetting
the
request
stream.
So
now
you're
sitting
there
like
this
was
a
web
transport
stream,
but
you
won't,
you
won't
get
the
capsule,
because
that
that
session
has
already
gone
away.
So
it's
it's.
It's
not
really
clear
what
to
do.
P
K
P
So with reset capsules there are a lot of error conditions. I've walked you through one of them, but there are more. You could receive the reset capsule but not receive the stream reset, because there's no guarantee that the client is well behaved; the client could be attacking you and just sending you nonsense. So when you're implementing this, you need to be prepared to handle a malicious client. It could also be that the client sends multiple reset capsules on different sessions, all claiming the same stream, and you also need some logic for that, I guess.
P
A
P
So for the next option I made new slides, so if you could pull those up, that would be really great. Yeah, I did, and I hit reload, but let me run through it again. One second.
B
P
So the third option is to solve this at the QUIC layer. In the QUIC working group I presented one proposal, called reliable stream resets. That's not the only way to solve this problem, just one proposal, and there were other proposals floating around to solve it in a similar but maybe less complicated way. Next slide.
P
So
the
the
question
is
so
for
for
option
option
three
you're
solving
it
at
the
quick
layer.
This
has
the
the
the
obvious
benefit
of
allowing
the
the
applications
to
to
react
to
every
stream
reset.
P
The reason we might want to do this is that WebTransport is probably not the only protocol that sends some kind of identifier at the beginning of a stream, so solving it at the QUIC layer, one might argue, is the correct layering here. The downside is that we need to define that extension to QUIC, or the QUIC working group needs to work on this. Next slide. Or maybe there's no next slide.
E
P
A
P
So if I understand correctly, the proposal was to extend the RESET_STREAM frame, not with a size varint, but basically with a data block, and in this data block you would send the WebTransport stream frame.
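The "reliable size" variant discussed here (option 3, later contrasted with the data-block variant, 3B) can be sketched on the receiver side: a reset carries a reliable size, the sender keeps retransmitting stream data below that offset, and the receiver only surfaces the reset once it has those bytes, so it can still recover the stream-type prefix. All names are illustrative, not from any draft's API.

```python
# Hypothetical receiver-side sketch of a "reliable reset": RESET_STREAM
# carries a reliable_size, and the stream is not surfaced to the application
# until all bytes below that offset have arrived, so the stream's identifying
# prefix (e.g. the WebTransport signal) is never lost. Names are illustrative.

class ResetPendingStream:
    def __init__(self):
        self.buf = bytearray()
        self.reliable_size: int | None = None

    def on_stream_data(self, offset: int, data: bytes) -> None:
        # Simplified: assume in-order delivery for this sketch.
        if offset == len(self.buf):
            self.buf += data

    def on_reliable_reset(self, reliable_size: int) -> None:
        # The sender promises to retransmit bytes [0, reliable_size).
        self.reliable_size = reliable_size

    def ready_to_dispatch(self) -> bool:
        """True once every reliably promised byte has arrived."""
        return (self.reliable_size is not None
                and len(self.buf) >= self.reliable_size)
```

For WebTransport, the reliable size would need to cover at least the stream-type varint and session ID, which is the requirement Martin raises shortly afterwards.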
A
So
we
can
call
that
3B
for
the
purpose
of
this
discussion,
so
just
to
set
the
stage
we
have
15
minutes
left.
The
chair
is
going
to
keep
five
minutes
to
wrap
up.
So
we
have
10
minutes
to
discuss
this
Alan
go
ahead.
F
Alan Frindell. Originally, when we talked about what is now called 3B, with some amount of application data instead of a reliable size, data which would get delivered to the application when it received a reset, I had thought that that wouldn't solve the case where you still don't know: if you don't receive this, you have to wait for a while. And what do you put in here, if it's not obvious? I don't have a good idea.
O
So it would be nice to see a solution to this more generally; whether it's 3A or 3B, I really don't care. Odds are good you have two varints that you care about. It's probably shorter to just stuff them in here versus retransmitting, but if you already sent it and it got acknowledged, then sticking this in is longer. Ultimately it doesn't matter, it's a couple of bytes, but it fixes a real problem, albeit one that we think doesn't happen very often.
D
Jonathan Lennox. Would this QUIC extension be mandatory to implement for WebTransport implementations? Because if it isn't, then you sometimes get the unreliable-resets case, which I think would be even worse, because then you couldn't predict whether you could use it as an application signal. So I think if you're going to do this, it would need to be MTI for WebTransport implementations, but that has its own unfortunate implications, so I think that needs to be thought about.
A
And
just
jump
in
his
chair,
we
already
have
the
datagram
quick
extension
as
mandatory
tutorial
to
implement
for
web
transport.
So
that's
totally
an
option.
We
have.
I
Yeah, I think we just add this to the checklist that we had on that previous slide; there were four things, now it's five things and counting. I have a slight preference for the sort of partial-delivery option here, as opposed to the metadata one; I think the metadata one creates some interesting challenges.
A
I
Can you explain what you mean by those two options? Yeah. The option that's on the slide is partial delivery: I'm going to say that I'm only partially delivering the contents of this stream, up to this point. It's like saying "oops, I may not have provided you with a FIN bit," or "I've provided you with a FIN bit, but I'm not going to deliver up to that; I'm going to deliver something less than that," right?
I
So for this application, we can say that for WebTransport streams you have to provide a number in this field that is large enough to cover the session ID and the other piece that's in that frame: the type that's at the start. The other option also works, and I think I could probably live with that, which is to have some chunk of metadata in here that's arbitrary-length and would be defined by the application in some way.
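The partial-delivery option described above can be sketched in code. The sketch below is a hypothetical encoding, not anything specified: it uses real RFC 9000 variable-length integers, but the frame type value `0x24` and the field layout are placeholders I made up, and the WebTransport unidirectional stream type `0x54` is a draft value. The point it illustrates is that the "number in this field" only has to cover the varint-encoded stream type plus session ID at the start of the stream.

```python
def encode_varint(v: int) -> bytes:
    # RFC 9000 variable-length integer: the two most-significant bits of
    # the first byte encode the total length (1, 2, 4, or 8 bytes).
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (0x4000 | v).to_bytes(2, "big")
    if v < 0x40000000:
        return (0x80000000 | v).to_bytes(4, "big")
    if v < 0x4000000000000000:
        return (0xC000000000000000 | v).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

# Draft value for a WebTransport unidirectional stream's type byte.
WT_UNI_STREAM_TYPE = 0x54

def reliable_prefix_len(session_id: int) -> int:
    # The prefix the receiver must still get after a reset: the stream
    # type and the session ID, both varint-encoded.
    return len(encode_varint(WT_UNI_STREAM_TYPE)) + len(encode_varint(session_id))

def reset_stream_at(stream_id: int, error_code: int,
                    final_size: int, reliable_size: int) -> bytes:
    # HYPOTHETICAL frame layout; 0x24 is a placeholder type, not assigned.
    return b"".join(encode_varint(f) for f in
                    (0x24, stream_id, error_code, final_size, reliable_size))
```

For a small session ID this prefix is only a few bytes, which is why the frame overhead being "a couple bytes" comes up in the discussion.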
I
There are some interesting challenges with that one, in terms of what happens if it's different to what the stream sent in the first place, and this is the sort of thing that I think Martin really wants to avoid here: there's never any ambiguity with this design, as opposed to the one that says "oh, here's the information that you needed to process this stream."
I
Yeah, you still have that problem, but QUIC implementations deal with this problem today in a non-deterministic fashion, which is really, really fantastic, and we could do the same here, I guess. So I'm kind of okay with either one of them; I think I have a preference for this one. It's smaller, you've probably already sent this information, and we can certainly require that people send this information anyway, so from a practical standpoint it's probably better. I'm just trying to think size-wise; it's about the same. If you've already sent it, this is more efficient. Sorry, there's always the mismatch chance, but the mismatch chance is like a constant in this design, and we've just created a new way to do it in the other design. I think that's probably what it comes down to.
A
Yeah, no. So the purpose of the discussion today in this room, speaking as chair, is to decide. Let's say option one and option two didn't involve the QUIC working group; it would be: we've solved this here, we're done. Option 3A or option 3B, whatever we're calling them, require...
G
I suggest trundling, in that case. More concretely: I think we've seen that this keeps coming up for basically everything that we ever build on top of QUIC, and so I think it is very much worth trying to solve this in a generic way that will apply to WebTransport and others. I would actually have a slightly inverse take from Martin, in that both of these seem workable and I have a slight preference for the other one; but that we can sort out within QUIC.
G
It seems as though the ability to mismatch what you do in WebTransport and what you put in QUIC is going to exist in any case, no matter what you do, and so it may be simpler to just say "hey, here's my attribution for this" rather than trying to deliver a part of the actual stream data. But that's something we can take to QUIC.
Q
So I think the two options that we have on both ends of the table are whether we want to fix this problem in a generic sense, or whether we want to do a quick fix, the generic solution being a fix in the QUIC protocol. I think the concern is whether we can reach a consensus there in a short time frame, and whether that consensus is going to be something very simple to implement, because honestly, without those two conditions it will drag out the standardization and the implementation of WebTransport.
Q
Got it. And regarding that consensus question: my concern about this specific proposal is that it's not concrete enough to solve all the issues that we had in history, like recognizing a push at the very, very end of a large response and then having to send a reference to it. So there will be discussion like that if we move this to the QUIC working group. Also, I think my slight preference would be to fix this in this working group.
A
David Schinazi, speaking as an individual. Well, first speaking as chair: I've cut the queue, and please keep it short because we're running out of time. Now speaking as an individual: I initially really wanted to fix this in WebTransport, for exactly the reasons Google is saying: to get this done.
A
But after seeing Martin's presentations and discussing offline this week, I'm convinced that there's a real problem, and I'm begrudgingly in favor of solving this generically at the QUIC layer. And that's that. Victor?
M
I also agree that we should solve it at the QUIC layer, because last time we had a similar problem and did not solve it at the upper layer, we came to regret it. I'm referring to [unclear], but in general: if there is a correct layer to solve the problem, we should solve that problem at that layer, and I don't think it is actually that much more expedient to solve it at the WebTransport layer in this case.
K
I agree with solving it correctly. I will say, for WebTransport, I would just send the RESET_STREAM after you get an ACK of the reliable size; you don't need to send the reset immediately. You can hold it until you know that they've got the reliable data. But I'd like to avoid that extra round trip or two, and this would be a way of doing it.
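The hold-the-reset workaround described here can be modeled as a small state machine. This is a sketch under stated assumptions: the class name `DeferredReset` and the contiguous-ACK callback are hypothetical, not any real stack's API. The sender withholds the RESET_STREAM until everything up to the reliable size has been acknowledged, which is exactly where the extra round trip comes from.

```python
class DeferredReset:
    """Hold a RESET_STREAM until the reliable prefix is acknowledged.

    Sketch only: a real sender would also handle retransmission and loss;
    here we just track the gating condition the speaker describes.
    """

    def __init__(self, reliable_size: int):
        self.reliable_size = reliable_size  # bytes that must be delivered
        self.acked_up_to = 0                # contiguously acked offset
        self.reset_requested = False
        self.reset_sent = False

    def request_reset(self) -> bool:
        # Application asked to reset; the frame may still have to wait.
        self.reset_requested = True
        return self._maybe_send()

    def on_ack(self, contiguous_acked_offset: int) -> bool:
        # Peer acknowledged stream data up to this offset.
        self.acked_up_to = max(self.acked_up_to, contiguous_acked_offset)
        return self._maybe_send()

    def _maybe_send(self) -> bool:
        # True exactly once, when the RESET_STREAM may actually go out.
        if (self.reset_requested and not self.reset_sent
                and self.acked_up_to >= self.reliable_size):
            self.reset_sent = True
            return True
        return False
```

A QUIC-layer solution avoids this waiting entirely, since the reliable size would ride in the reset frame itself.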
F
Alan Frindell. I'm not sure that works, because a transport ACK doesn't mean the H3 stack has seen the data; I can still get the reset before it. So I think we should be solving it at the QUIC layer. I will say that, maybe a little bit of a spicy take for the end of the meeting, we are taking a little bit too narrow a view on what the problem is.
F
The problem is that applications like to group things, and QUIC maybe should provide a way to identify those groups on the wire, at least in RESET_STREAM but maybe also in STREAM, and that would also potentially make some WebTransport things easier, or the server-push thing easier, or other applications people want to build.
F
It's
probably
going
to
generate
some
controvers
or
maybe
not
I,
don't
know,
but
this
is
something
we
have
talked
about
internally,
we've
actually
implemented
in
our
stack
already
the
ability
to
group
streams
and
have
that
be
part
of
the
API
and
an
extension
on
The
Wire
setup.
It
could
be
something
interesting.
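As a rough illustration of the grouping idea floated above, here is a toy model of the bookkeeping such an extension might imply on the receiver side. Everything here is hypothetical: no QUIC stream-grouping extension is specified, and the class and method names are made up for the sketch.

```python
class StreamGroups:
    """Toy receiver-side registry associating streams with group IDs.

    The premise: if STREAM (or RESET_STREAM) frames carried a group ID,
    the receiver could attribute a reset or a stream to its group even
    when application-level context never arrived.
    """

    def __init__(self):
        self._group_of: dict[int, int] = {}

    def assign(self, stream_id: int, group_id: int) -> None:
        # Record the attribution carried on the wire for this stream.
        self._group_of[stream_id] = group_id

    def group_of(self, stream_id: int):
        # None means no attribution was ever received.
        return self._group_of.get(stream_id)

    def streams_in(self, group_id: int) -> list[int]:
        # E.g. all streams belonging to one WebTransport session.
        return sorted(s for s, g in self._group_of.items() if g == group_id)
```

For WebTransport the group ID would naturally be the session ID, which is why this generalization could subsume the partial-delivery fix.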
N
Lucas Pardue, wearing no hats; not being a WebTransport implementer yet, just a keen backseat enthusiast. I think solving it this way, whatever final solution we pick, seems better to me; hence why, you know, we wanted some of that discussion in QUIC. I think, yeah, it's good to see that other people are kind of agreeing with what I thought.
N
If I was wrong, or we were wrong, that's fine, but yeah, I think it seems like it's emerging that we should try and solve it properly. WebTransport could have, but, you know, we've got enough context in now to understand the problem a bit; let's try and write that down and codify the solution.
A
Lucas, stay there and put your hat on; I'm going to need you soon. Speaking as chair, I'm getting a sense in the room that most folks prefer solving this at the QUIC layer, or can live with that. Does anyone object to solving this at the QUIC layer? We're not prescribing a specific solution; it would be going to QUIC and saying: we need this problem solved at your layer, please.
A
Right, and to be fair, if it starts dragging on forever in QUIC, we have the option here to change our mind. Okay, so I'm going to say, as chair: we're going to move the discussion to QUIC, saying that we have this problem and that the folks in the room at WebTransport preferred that; and on that thread, people are still completely welcome to say that they want to do it in WebTransport instead.
N
It's completely in scope of the work in our charter to do this kind of thing. We have a fairly quiet work queue right now, so we have the capacity. In terms of interims, I'll need to go away, chat to people, and have a think; obviously we have stuff like the MoQ interim.
A
Agreed. I'll take action items to go talk to the HTTP chairs and the QUIC chairs, and we can all play the game of swapping hats to try to get this done quickly. And that's done. Thanks, everyone, for coming to WebTransport; we will see you on the list. If you're not subscribed to the QUIC or HTTPBIS lists, you should be, because we're going to have discussions very relevant to WebTransport there. Thanks, everyone, and see you virtually.