From YouTube: IETF112-WEBTRANS-20211109-1200
Description
WEBTRANS meeting session at IETF112
2021/11/09 1200
https://datatracker.ietf.org/meeting/112/proceedings/
B
So apparently folks are having a hard time logging in, either to the Datatracker or to the integration with Meetecho. So let's give them a couple more minutes to enter the meeting.
B
Since Mike asked on the chat: this is just making sure that audio works. Can someone confirm on the chat that you can hear me?
B
Thanks. Yeah, we're still waiting on some more participants to get through. I'm seeing Victor. All right, then I think maybe at this point we can get started, or not. What do you think?
C
Yeah, I think it looks like the logjam has been broken; we've just added like 20 people in the last minute or two. Okay, all right. Okay, thank you. This is the IETF 112 meeting of the WEBTRANS working group.
C
Please use headphones or an echo-canceling speakerphone, and state your full name before speaking, so we can get you in the minutes.
C
We are going to be running a queue; I guess David, you'll run the queue. Here is a little bit of advice on how to get into it and out of it: you use the raise-hand tool, and then you'll need to enable your audio, or you'll be talking while muted. You can also enable video; you don't have to. It does help a little bit with comprehension, but it isn't required.
C
The Note Well: this is an IETF working group, so IETF policies are in effect, and by participating you agree to follow those processes. Definitive information is in the BCPs listed on this slide, and we encourage you to read and understand them. And of course, personal information you provide is handled in accordance with the privacy statement.
C
Also, we'd like to be clear that IETF meetings, virtual meetings, mailing lists, etc. are intended for professional collaboration, and we have IETF guidelines for conduct, anti-harassment, and procedures for the same. If you have any concerns about anything you're observing, please talk to the ombudsteam, who are available for you, and you can do that confidentially. And of course we strive to maintain an environment which exemplifies dignity, decency, and respect, and we expect you to do the same.
C
Okay, so some other meetings this week which you may be interested in: there are Media over QUIC side meetings, and if you go to the link above you'll see when they are. There's one today a little bit later, and there's another one on Friday. So we encourage you to have a look at those; they will be talking about some of the issues of transporting media over QUIC or WebTransport.
C
Okay. So the agenda: we're mostly through the preliminaries, thankfully. We should do the agenda bash, but here's basically what we have in store for you. We have a WebTransport update from the W3C; Jan-Ivar will do that. We then have Victor talking about WebTransport over HTTP/3 and Eric talking about WebTransport over HTTP/2, and then we'll have our wrap-up and summary.
F
All right, can you hear me? You can? All right, great. So this is an update from the W3C WebTransport Working Group, with the progress we've made since IETF 111 on July 30th.
F
The status: we published another Working Draft, which means we're technically no longer a First Public Working Draft, just a Working Draft. We believe we finished all discussion of issues in our "minimum viable ship" milestone that we advertised last time. There are four non-editorial issues remaining that are ready for PR, but we believe they are not controversial, plus three editorial ones, and a tracker for web platform tests, which have been added, some decisions and PRs.
F
We sorted out error handling in the algorithms, and there's a new WebTransportError DOMException with the following members: a "source" to say whether the source of the error is a stream or the session (the session being the entire connection), and also an 8-bit stream error code that defaults to zero.
F
This means that, on a sender, you can now abort a WHATWG stream with a new WebTransportError, and you can specify a stream error code that will be sent to the receiver. There's also a maxDatagramSize read-only member on datagrams, which is a user-agent-implementation-defined integer you can query.
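As an aside on how that 8-bit stream error code travels: in the HTTP/3 mapping, the 0-255 application error codes are remapped into a reserved range of QUIC application error codes, skipping GREASE codepoints. A rough Python sketch of that mapping; treat the exact constant as draft-02-era and subject to change in later versions:

```python
# Sketch of the stream-error-code mapping from WebTransport over HTTP/3
# (draft-02 era; the constant below may change in later draft versions).
# An 8-bit application error code n maps into a reserved range of QUIC
# error codes, skipping GREASE codepoints (values of form 0x1f * N + 0x21).
FIRST = 0x52e4a40fa8db  # start of the reserved range in draft-02

def to_http3_error(n: int) -> int:
    """Map an 8-bit WebTransport stream error code to a QUIC error code."""
    assert 0 <= n <= 255
    return FIRST + n + n // 0x1e  # the extra n // 0x1e hops over GREASE values

def is_grease(code: int) -> bool:
    """GREASE codepoints are reserved and must never be emitted."""
    return (code - 0x21) % 0x1f == 0
```

The `n // 0x1e` term is what makes the mapping jump over each GREASE codepoint that falls inside the range, so all 256 codes land on distinct, legal values.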
F
We added in minimal prioritization, which means that all outgoing datagrams have priority over other outgoing data on streams. We added a 32-bit close code, which is not an error; you can await the WebTransport "closed" promise, with a reason string as well. Next slide.
F
We also highlighted a Chrome update. They are shipping WebTransport in M97, which is built on the -02 version of the draft and has support for the 8-bit reset stream code and the 32-bit close error code; no origin trial is required. This is fully shipped, available in Window and Workers, secure contexts only.
F
Some things that still need to be added are deeper prioritization of sending, and stats. Also, the hash-based certificates instead of Web PKI may be available in M98. As we said, the implementation is fairly mature, covered by web platform tests using an echo server based on aioquic.
F
Also, a presentation on multicast for the web was made by a member of the W3C Multicast Community Group, with a demo. The member made a request to the W3C to add multicast datagrams as a use case, and so the W3C is not requesting this itself, but we're forwarding that request for consideration by this working group, because it looks more network-related. Also, an issue was identified around bidirectional server-based video conferencing and low-latency video upload from client to server.
F
So this is an important use case for us, and we've seen some evidence that better integration between encoder and congestion control algorithms can be helpful, though not sufficient. Stats APIs are not available today in WebTransport to do that well. We believe a WebCodecs average bitrate target, for instance, can result in overshoots when keyframes are sent and undershoots for delta frames, while an application can always send less than what the congestion controller would allow.
F
There are potential conflicts with other congestion controllers being discussed, such as this paper and RTP over QUIC. So the W3C working group is requesting that this working group support this use case, because it's an important one, and help us develop measurements that can be used to do this more effectively.
C
Okay, thank you, Jan-Ivar. I would note that some of this is being discussed now on the Media over QUIC mailing list, and with respect to RTP over QUIC, there will be a discussion in AVTCORE later in the week.
G
Okay, so I'm going to give an update on WebTransport over HTTP/3, as well as go over some issues.
G
Let's start with: there is a new version of the draft, draft-02. It incorporates about six months' worth of changes, and it includes the clean close of the entire session, reset error codes, and a bunch of other stuff. This is the version that, in its non-pooled incarnation, is shipping in Chrome 97, which means at least we as the Chrome team will have to maintain it for quite a while. Next slide.
G
One of the details I had to add, because I realized we didn't have anything like that, is a header that indicates the version of the draft you're using. It is intentionally named after the draft version, and we will likely reshape it or remove it when we ship the final version, but this is how it looks right now.
G
So let's move on to the issues. As I said, I didn't put that many issues on the slides, because most of them are either closed or require more effort. Next slide.
G
The first issue, which we kind of discussed in the past but never came to a conclusion on, is: do we need a mechanism for draining sessions gracefully, that is, the equivalent of a GOAWAY frame?
E
Well, "capsule" may be a loaded term based on what's going on from yesterday, but as long as there's a structured way to send messages over the extended CONNECT stream, it seems very lightweight to be able to add a message like GOAWAY. We've seen lots of examples of people building protocols without things like GOAWAY, and it causes a bunch of problems for servers that want to drain gracefully, and anybody who would want that.
D
In this case, I'm not sure what the semantics you would attach to a GOAWAY would be. It's going to depend on the application that's using the WebTransport, isn't it?
E
Okay, waiting for audio. Yeah, I mean it can be a signal that just signals an intention, like: I would like this to go away soon. And it can include information like "the last set of streams I received anyway can be identified by these numbers." What the semantics are, you're right, is sort of dependent on whatever protocol is running over top, but I think the high-level signal is still valuable.
D
Yeah, I guess what I was getting at there was: if there is a generic signal, then it wouldn't really make a whole lot of sense to attach anything to it, because the concepts that an application uses, the sort of granularity of its stream usage and datagram usage, are unknown and unknowable.
E
I agree with what you're saying. I think the information you can include is information that is exposed to the WebTransport, which is like: I have seen these stream identifiers; they have at least been passed to the application. But everything else, I guess it's certainly possible the application has higher-level semantics that it would need to convey beyond what's in the transport, in which case they have to write their own frame anyway. Okay, yep.
G
I guess, on the later comment about messages: I think that capsules are to some extent not controlled by the application, because they're in some sense privileged, as in we assume they come from the browser and not from the web application.
G
In that sense, we can't really give you full control. But also, in general, my intuition is that we should not add new concepts which are already trivially achievable with the APIs we give, because it just adds multiple ways to do things that are somewhat redundant.
E
I don't know how trivial it is, or whether that holds for everyone, because there may be applications that don't use or need a control stream, and so requiring every application to do it... I guess we had this discussion, I don't know, four years ago in QUIC, when someone tried to add GOAWAY. Originally GOAWAY was a QUIC frame, and then we said no, make every application do it, and now we have. So I guess maybe the same principles apply here.
E
I don't know. I still think that signaling the intent, and capturing transport-level information to the other side about what's been received, is a building block that can be reused, and it's low cost.
I
Hello. So I think I was agreeing with Martin's earlier point that the value of this thing, GOAWAY or whatever you want to call it, depends on what information it includes and what the receiver would act upon. So I don't really understand what this adds above the HTTP/3 GOAWAY, and what it would include that's generic enough that anything at the WebTransport layer would be able to read it and act on it.
J
I guess the benefit of doing this at this layer would be if there was an intermediary that wanted to send it to both of the endpoints, but without knowing the inner protocol the endpoints were using; such as, if you had a CDN that implemented WebTransport. The alternative may be on the API side.
E
Alan: yeah, I think the context is now paging in, after months and it being four in the morning. The intermediary case, I think, is the one where it's useful, because you may have an intermediary that is sort of proxying WebTransport without understanding the underlying protocol, and it wants to go away. It needs to tell both endpoints: it's got a WebTransport session going in each direction, and it wants to tell both of them that it's going away, and it may be using pooling.
E
So I guess in that case the HTTP/3-level GOAWAY is probably sufficient. But there may also be cases where there are pooled WebTransport sessions and I want one session to go away, but not other sessions or other HTTP requests that are on the same connection, which is why it would be a session-specific message rather than a connection-specific message.
G
All right. It sounds like we need a specific proposal that explains the use cases this is for, and my question would be: Alan, would you volunteer to write a PR or a proposal for how to handle GOAWAYs in WebTransport over HTTP/3?
E
Yeah, I can write a PR, and people can argue about it; we can repeat our arguments here on the PR.
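For reference, the capsule format alluded to above frames each message on the CONNECT stream as a varint type, a varint length, and a value, which is what makes a drain signal cheap to define. A minimal sketch: the DRAIN_SESSION type number below is made up purely for illustration, and real capsules use full QUIC varints rather than the single-byte values assumed here:

```python
# A capsule on the CONNECT stream is [type][length][value], where type and
# length are QUIC varints; values below 64 encode as a single byte, which
# keeps this sketch to one-byte varints.
DRAIN_SESSION = 0x2B  # hypothetical capsule type, for illustration only

def encode_capsule(ctype, value=b""):
    """Serialize one capsule (single-byte varints only, for brevity)."""
    assert ctype < 64 and len(value) < 64
    return bytes([ctype, len(value)]) + value

def decode_capsule(buf):
    """Return (type, value, remaining bytes after this capsule)."""
    ctype, length = buf[0], buf[1]
    return ctype, buf[2:2 + length], buf[2 + length:]
```

Because every capsule is length-prefixed, a receiver that doesn't understand DRAIN_SESSION can skip it and keep parsing, which is exactly the extensibility property the discussion relies on.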
G
All right. Could either of the chairs record that on the issue while I move to the next slide? Next slide.
G
Thank you, David. Now here is an interesting issue that we have historically just punted until later, and the "later" is kind of approaching.
G
At some point we agreed that it would be useful to support pooling of multiple WebTransport sessions, with each other and with regular HTTP/3 traffic. In order to do that, we need to have some understanding of how to do resource management, namely how we limit things. QUIC provides flow control for all of the resources, but that flow control is connection-global, and the pooling is between sessions that are in some sense not related to each other.
G
In the ideal scenario, we would strongly prefer to isolate the sessions, in the sense that even if one session misbehaves and acts in a way that starves itself of resources, it does not starve others. And the question is: how do we do that?
D
Waiting for someone to say something... okay. So last time we talked about this, you raised the point that we essentially give sites the ability to use fetch to consume all of the available bidirectional stream resources for a connection, almost without constraint, and we sort of assume that there are enough resources there that they can do that if they so choose. The more I think about this, the more I think that that sort of laissez-faire attitude toward this thing might actually be the best solution, at least for the meantime.
G
Yes, well, I am not talking about intentionally doing that; I'm talking about the likelihood of that occurring unintentionally, right?
D
I think it's pretty difficult to add limits in such a way that they would be reliable, such that you could depend on them being there and depend on them being effective. So if there were something truly bad going on and things were breaking, I don't think you could rely entirely on whatever mechanism you introduced. But that's just guessing; I'd have to think about what sort of system you might want to introduce.
I
Hello, I was just reading up on the issue, which is like a year old, so my memory's a bit hazy, but it seems a bit similar to a discussion we were having a few weeks ago on the HTTP priorities draft. It was unrelated to priorities, but around the ability for endpoints to detect how many concurrent streams they might be able to open, or that they gave to the peer to open. Some of that hinges on the QUIC API that is exposed up to applications: you'd like to assume that the API gives you all the information you need to know, but that assumption maybe doesn't hold for different implementations.
I
So I think, if we look at it from the perspective of trying to avoid a client consuming all of the concurrent request streams, or bidirectional streams, it could open...
G
Thank you, Lucas. Yutaka?
K
Can you hear me? Oh yes. So the idea is that the memory consumed in the connection should be a function of the flow-control bounds, so that even if we have many, many streams, the bytes transmitted per second shouldn't increase just by creating more streams, with an ideal implementation.
G
In fact, this is more one of my main concerns: QUIC has some limit on how many streams you can open on the same HTTP/3 connection, and that limit is shared between regular fetch requests and WebTransport in the situation where there is pooling. So the problem here is: how do we make sure that we don't open so many WebTransport streams that we are incapable of doing fetch, and vice versa?
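One way to picture the problem: a browser could partition the connection's stream credit so that neither fetch nor any one WebTransport session can exhaust the shared limit. This is purely an illustrative sketch of such a partitioning policy, not anything any browser or spec defines:

```python
class StreamBudget:
    """Toy model: partition a QUIC connection's stream credit between
    fetch and pooled WebTransport sessions, so that one consumer cannot
    starve the other. Purely illustrative; no implementation does
    exactly this."""

    def __init__(self, max_streams: int, fetch_reserve: int):
        self.free = max_streams - fetch_reserve  # shared pool
        self.fetch_free = fetch_reserve          # reserved for fetch only

    def open_fetch_stream(self) -> bool:
        # Fetch draws from its private reserve first, then the shared pool.
        if self.fetch_free > 0:
            self.fetch_free -= 1
            return True
        return self._take_shared()

    def open_session_stream(self) -> bool:
        # WebTransport sessions can only draw from the shared pool.
        return self._take_shared()

    def _take_shared(self) -> bool:
        if self.free > 0:
            self.free -= 1
            return True
        return False
```

Even in this toy, WebTransport sessions can drain the shared pool, but fetch always retains its reserve, which is the isolation property being asked about.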
G
Thank you. Dragana?
L
Hi, do you hear me? Yes? Okay, sorry, I haven't used this that often. I'm afraid of different implementations having different limits, and this really being a timing issue: a page is opening and there are a lot of fetches, and then they cannot open the streams. This is timing-dependent, which for an application developer can be pretty tricky; it's hard to notice the problems by testing.
G
So I agree that we should give developers some idea of what they should at least expect the browser to support, because if my WebTransport only supports four concurrent streams, that's a very different situation from if it supports 100, and that would to some extent affect, even fundamentally, how I design my protocol.
L
Yeah, I'm just wondering about the pooling behaviors of different browser engines: some of them pool more, some of them pool less, so that if you open two or three tabs, in the end the transfer doesn't work because they all pool onto the same connection. But this is like a different issue. Yeah, okay.
D
It's just a for loop with a create-stream call, right, and you've used them all up. The streams are easy to create, but there are no guarantees about the number of concurrent streams that will be available to you at any time; QUIC doesn't provide any real guarantees. HTTP/2 has this sort of "you can have 100 concurrent streams," or whatever the number happens to be, and you know what that number will be; but with QUIC you don't.
G
I see. So it sounds like we have some ideas on how this should work, and I think we should discuss the rest on the mailing list, because I think we've spent already like 20 minutes or so talking about this.
G
Yes, this definitely needs further discussion, probably on the mailing list. Thanks everyone, this was very valuable. Next slide.
G
We had a similar discussion a while ago in the W3C, and I think back then we agreed that we should leave redirects to the application, because that makes the stack just much simpler. But I wanted to know what opinions folks in this working group have on this matter. If anyone has opinions, please join the queue.
G
If no one has opinions, I think I will just close with the W3C conclusion, which is: we don't support redirects. David?
B
Thanks. Speaking as an individual contributor and the person who filed this issue: I'm okay with that. I don't have a strong opinion on whether we should support it or not; I just think we should be very clear in the spec whether it is supported or not.
D
Sorry, I'm going to have to refresh my memory on BCP 56bis, but I think, if we're playing HTTP, we have to play by the rules, and 3xx is one of the HTTP things that you just have to deal with. And so there are 3xx's that don't make any sense for WebTransport.
D
But there are 3xx's that probably do. I can't remember which one goes where, but I think that if we're playing HTTP, then we have to not pretend otherwise.
G
So I think we need to figure out whether we actually need to do redirects according to the BCP, and if that is the case, I think we should support them; and if not, we should revisit this.
G
There are multiple forms of attacks that exist in which you can confuse a peer, in which you can confuse various parties into believing that you are speaking a protocol different from the one you're actually speaking.
G
On one level, there are obvious attacks. One of the reasons we don't give raw QUIC sockets to the web directly is that it allows you to bypass authentication, and in WebTransport we go out of our way, as a part of the requirements, to make sure that all of our traffic that is initiated by a web application is identified as being initiated by a web application, as opposed to a native client.
G
But there are also families of attacks that you can still perform, and one of them is the one WebSocket tried to avoid with masking: an attack where you try to make your traffic from client to server look like a certain kind of HTTP traffic, which would lead intermediaries to assume that you're speaking HTTP, even if you're obviously not. WebSocket addresses that by trying to make the traffic unpredictable via masking, and the question is: do we need to do anything similar in WebTransport, or not?
D
It takes about five seconds for this to come through. So, Victor, the only thing that I'd probably correct in your summary is that the impersonation of another protocol is not just HTTP. You can make it look like other things, and I think one of the NAT slipstreaming attacks was using, I think, STUN at one point, or SIP, or something else.
D
Yeah, Harald's much closer to this than I am. It was done with special control over the UDP message fragmentation, which I don't think QUIC natively has, but it's worth considering. I have not come to any conclusions myself on this one. I was hoping that Mike or Harald would be able to provide their insights, and I know Adam Rice has been doing some work on this as well.
G
Yeah. So the first question I would ask is: is this specific to WebTransport? Or, if this is a problem, why is this not a problem with something like fetch?
G
Okay, Harald.
M
Yes. So the problem with masking is: if you give the attacker control over basically the first bytes of a packet, then you can count on some idiot box on the network somewhere doing something inappropriate with it.
M
The packet got chopped into more than one UDP fragment, and the attacker had control over the first bytes of the fragment. Now, if you encrypt content, then the attacker does not have control, and if you have a header that is controlled by the protocol, the attacker does not have control. So if you encrypt what the attacker has control over, and format the rest according to your own rules, you're basically safe against forgery.
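For context, the WebSocket masking being referenced is just an XOR of the client payload with a fresh random 4-byte key per frame (RFC 6455, section 5.3), so attacker-chosen bytes never appear verbatim on the wire. A minimal sketch:

```python
import os

def mask(payload, key=None):
    """XOR-mask a payload with a 4-byte key, as WebSocket clients do
    (RFC 6455, section 5.3). A fresh random key per frame means a script
    that controls the payload cannot control the bytes on the wire."""
    key = key or os.urandom(4)
    return key, bytes(b ^ key[i % 4] for i, b in enumerate(payload))

def unmask(key, masked):
    # Masking is an XOR, so unmasking is the same operation.
    return bytes(b ^ key[i % 4] for i, b in enumerate(masked))
```

The security property comes entirely from the key being unpredictable to the script supplying the payload; the XOR itself is trivially reversible by the receiver, which carries the key in the frame header.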
H
I see, thank you. David Schinazi?
B
Yes, speaking as an individual contributor again. So encryption helps, but not in this scenario, because it is predictable. The issue here is that these attack setups include a participating evil WebTransport server, with evil WebTransport JavaScript on the client. So even though we are encrypting, because we're using stream ciphers in QUIC, you can have the evil WebTransport server send, over the application layer, the keys and the packet numbers it is receiving from the client.
B
So the client can pick some packet number in the close future and then blast that a bunch of times. Since it's a stream cipher, if you know the keys you can recover what the key material is, so you just XOR your plaintext with that, and then you have influence over the ciphertext. So you can defeat the protection brought about by encryption using this. So we still have a problem.
B
That we need to solve it is kind of all I'm saying, but the next step is: how do we solve it? My personal take is that this isn't specific to WebTransport; you have the same issue with regular fetch requests. So potentially the best solution is to do the same thing as what we did for the slipstreaming attack, which was to add a few ports to the fetch TCP block list.
G
Yeah, so I agree. One thing I will note is that, as far as I remember, WebTransport actually respects the TCP port block list. The other thing is: this will protect us against these NAT slipstreaming attacks, but that doesn't mean it will protect us against all classes of attacks that we may not yet be aware of. Yutaka?
K
In WebSocket, masking is only for the client-to-server direction; server-to-client communication is not masked, because the original scenario was the client attacking proxies living between the server and the client. But given the scenarios for WebTransport, it seems both client-to-server and server-to-client need masking. Is that right?
G
Well, to some extent I don't think masking can help on server-to-client, because the entire attack depends on the premise that both the web application and the server are malicious, and it doesn't help you to require masking on the server, because the server can already send whatever it wants. The fundamental problem of this attack is the client-to-server flight, as far as I understand.
G
Thank you. Harald?
M
I kind of panicked when I realized what you're saying...
M
...about an evil server being able to leak the keys and thereby influence the ciphertext, because that attack will actually work against anything that uses QUIC. And that means that, well, first of all, it's a much bigger problem than I thought, and second, it's not this working group's problem to solve. The first is bad; the second is good for this meeting, but not for the ecosystem.
M
If the attack is real, then it applies to any protocol using QUIC.
G
Yes. I think my intuition is that this is an issue we should probably punt to the QUIC working group, because this extends far beyond WebTransport, and to some extent this is the problem that...
G
I don't think WebTransport is the correct level. Does anyone object to that conclusion?
D
If we take the necessary steps to ensure that we have a discussion, perhaps in HTTP, perhaps in TLS, then I can live with that.
N
So we've had quite a few changes since we last talked. As you may recall, at IETF 111 we talked about moving to a layered design, which effectively pulls in a minimal set of framing from QUIC and runs that over a single HTTP/2 stream, in large part because we were saying that it was easier to build something on top of HTTP/2 than to own an HTTP/2 implementation and expect those implementations to change.
N
Martin has also graciously said he is willing to join us as an author and has been contributing a lot of nice text. So thank you, Martin. Next slide, please.
N
So I drew a couple of fairly crude diagrams to help us all page in what we mean when we talk about these things. To set the stage here, I've got this box in the middle that we're going to worry about a little bit later. Instead of adding additional frames to HTTP/2, we now have an HTTP/2 implementation on either end.
N
There's a client, a server, pick your favorite intermediaries, and we use extended CONNECT to say "hey, I'm speaking WebTransport," and then that CONNECT stream becomes the thing that carries the entirety of the WebTransport session. So when we've talked about breaking out to use native streams or datagrams when they're available, or something like that: this is the far end of that spectrum, where nothing is broken out and everything is self-contained within the single CONNECT stream. Next slide, please.
N
If we then move into what was inside that box in the middle, we have a WebTransport session, where we have bidirectional streams, unidirectional streams, and datagrams. Of course, these can be initiated from either side, which is why I've got some of them anchored on the left and some of them anchored on the right.
N
In order to do that, we end up pulling over a subset of the frames that came from QUIC; we saw a variant of this slide at 111. In actually writing up a bunch of the text and the definitions, there are a couple of issues that we need to work out, which we'll get to in a little bit, but in general it was fairly clean. So there's the main set on the left: you've got STREAM and DATAGRAM, and we also brought along PADDING.
N
Those allow you to do the bidirectional and unidirectional streams and datagrams. Then there's a set of RESET_STREAM and STOP_SENDING, the state management there, and then there's flow control, which is pretty much lifted exactly from QUIC, which is subtly different from HTTP/2. But that's not the end of the world.
N
So for that we have MAX_DATA for the whole WebTransport session, we have MAX_STREAM_DATA for a given stream, and then we have MAX_STREAMS for stream counts, and we brought along the BLOCKED variants of those frames as well, for the same purposes as we have them in QUIC. Next slide, please.
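Most fields in these frames, as in QUIC, are variable-length integers, so that encoding is the one piece an HTTP/2-only implementation would have to borrow regardless. A sketch of the RFC 9000 varint scheme, where the two most significant bits of the first byte select a 1-, 2-, 4-, or 8-byte encoding:

```python
def encode_varint(v: int) -> bytes:
    """QUIC variable-length integer (RFC 9000, section 16): the two most
    significant bits of the first byte encode the total length."""
    if v < 1 << 6:
        return v.to_bytes(1, "big")
    if v < 1 << 14:
        return (v | (0x40 << 8)).to_bytes(2, "big")
    if v < 1 << 30:
        return (v | (0x80 << 24)).to_bytes(4, "big")
    if v < 1 << 62:
        return (v | (0xC0 << 56)).to_bytes(8, "big")
    raise ValueError("varint out of range")

def decode_varint(buf: bytes, offset: int = 0):
    """Return (value, number of bytes consumed starting at offset)."""
    first = buf[offset]
    length = 1 << (first >> 6)          # 1, 2, 4, or 8 bytes
    value = first & 0x3F                # strip the 2-bit length prefix
    for b in buf[offset + 1 : offset + length]:
        value = (value << 8) | b
    return value, length
```

The worked examples in RFC 9000's appendix (37 as 0x25, 15293 as 0x7bbd, and so on) round-trip through this pair.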
N
The first one: when we talk about mirroring QUIC, there are kind of two pieces that come into this. The first is that there are a couple of fields in certain QUIC frames, and I've pulled up RESET_STREAM as an example, that are just not necessary in the way that we're doing this. For example, within RESET_STREAM there's Final Size, and it turns out there aren't very many other examples of this, but here is an example of a field where both sides already know the amount of data...
N
...that's gone on the stream. We don't actually need Final Size; it's not communicating anything that's interesting. And so the main question: we're saying "hey, there's some additional lifting necessary, in that you now need to parse a bunch of these frames." One of the ideas that was brought up, which is potentially really attractive, is that you could just reuse your QUIC parser for a bunch of these frames, rather than needing to do a bunch of lifting to handle all of this stuff.
N
The other thing that comes up there is that we've added a length field to every frame. That gives you a couple of things: one, it lets you take unknown frame types and skip them. And you're packing all of these frames into a reliable byte stream, so it's really kind of nice to have the delineations of where things are, especially since things can't be lost and you can't just ignore any of them.
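The skip-unknown property the length buys can be shown in a few lines. A sketch of walking a [type][length][payload] byte stream; the frame-type numbers are borrowed from QUIC purely for illustration, and a real encoding would use varints rather than the single bytes assumed here:

```python
# Frame-type numbers borrowed from QUIC, for illustration only.
KNOWN_TYPES = {0x08: "STREAM", 0x30: "DATAGRAM", 0x04: "RESET_STREAM"}

def read_frames(stream: bytes):
    """Walk a reliable byte stream of [type][length][payload] frames,
    skipping unknown types instead of aborting. Type and length would be
    varints in the real design; single bytes (< 64) are used for brevity."""
    frames, offset = [], 0
    while offset < len(stream):
        ftype, flen = stream[offset], stream[offset + 1]
        payload = stream[offset + 2 : offset + 2 + flen]
        offset += 2 + flen                  # length lets us hop the frame
        if ftype in KNOWN_TYPES:            # unknown type: silently skipped
            frames.append((KNOWN_TYPES[ftype], payload))
    return frames
```

Without the length, the receiver would have no way to find the start of the next frame after an unknown type, so extensibility and the length field are bound together, which is the trade-off debated below.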
N
We've currently put that right after the type, but it is entirely possible that we could put the length first, and I think Alan made the interesting observation that if we put the length first, you could consume the length, strip it off, and then shove the entire rest of what's there at your QUIC frame parser and be done. So I think the place where we want to stop right here is to get some sense from people: is this an attractive thing to do? As somebody who's implementing this, do you say "oh yes, this is substantially easier if I don't have to do any of this parsing; there are a bunch of variable-length integers here, and I'd like to just take my QUIC code, strip off the length, and shove the frame at it"? Or does this become useless and not actually particularly helpful?
D
Well, I was just going to point out that you could be even more clever than this and use the same length field for multiple frames, because that's what QUIC does, right? But I'm not sure that I really need to go there. This is the fallback protocol, and we can always be too clever.
D
Yeah, like if you're gonna change the frame layout, then you're not gonna get any reuse from QUIC, unless you start to special-case the QUIC parser, which I wouldn't suggest anyone does. The parsing code for these things, if you have a length, is pretty easy. It's not that hard to, you know, copy-paste the code and call it a WebTransport frame parser or something. So I'd just go for the simple thing and not try to be overly clever, thanks.
E
I guess, since this is my suggestion, I'll just say that I think I would either put the length first and keep the frame formats the same, and then just reuse the QUIC parser. I mean, yes, you can copy and paste it, but I think I would prefer not to. And the other option we have talked about is that QUIC itself doesn't have the lengths, so do we need them here? I mean, the main benefit.
E
Is that you could add unknown types, and that's why you have a length; otherwise you know exactly what's going to be in there. That's what QUIC does: you can't add unknown types in QUIC. So I think it's sort of bound together. We could drop length entirely.
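The property being traded off here, skipping unknown frame types on a reliable byte stream, can be sketched as follows. Single-byte type and length fields stand in for the real varints, and the frame-type numbers and names are invented for illustration, not taken from any draft:

```python
def walk_frames(stream: bytes):
    """Walk a byte stream of [type][length][payload] frames, collecting
    known frames and skipping unknown ones.

    Single-byte type and length fields stand in for real varints;
    the type numbers and names below are hypothetical.
    """
    KNOWN = {0x01: "WT_STREAM", 0x02: "WT_RESET"}   # invented names
    out, offset = [], 0
    while offset < len(stream):
        ftype, flen = stream[offset], stream[offset + 1]
        payload = stream[offset + 2:offset + 2 + flen]
        offset += 2 + flen
        if ftype in KNOWN:
            out.append((KNOWN[ftype], payload))
        # Unknown type: the length lets us hop over it without losing
        # sync. Without a length, one unknown frame on a reliable
        # stream would poison everything after it.
    return out
```

Dropping the length field removes exactly this escape hatch, which is why the two questions are, as noted above, bound together.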
O
Actually, I'd ask basically the same question of: do we really think we need a length here? Because, yeah, out of a slight preference I'd lean toward just not adding a length and then making it look like QUIC, because that does seem slightly preferable to me. And I'm not sure I understand why we care about skipping unknown frame types, but maybe there's some use case that I'm missing. So, the other thing I wanted to confirm.
O
This might pop the stack a little bit, but at some point can we get back around to: it looks like there's some functionality at the session layer here that is per connect stream in h2, and when we go back to h3, things like max data apply across all sessions, which we sort of alluded to earlier today.
O
I'm not saying that either of those is right or wrong, but it does seem like the functionality that the mapping over h2 is offering is actually slightly richer functionality when multiplexing than the h3 version. Is that correct?
N
G
Oh, I just wanted to say that in general, length fields sound like a great idea for anything that is not super concerned about size, and we're not concerned about size here. So it sounds good to keep lengths and remove the final size.
B
Speaking as a participant: yeah, my general take here is that, since this is the fallback transport, we don't need to squeeze every single bit of overhead out of it the same way we do for the QUIC transport layer, so having a length seems like a recipe for fewer headaches down the road. And once you have a length, this then becomes a segue into your next slide, which is: this is starting to look an awful lot like a capsule, and why don't we simply... bingo, thank you MT.
B
Why don't we just use capsules, then? Because this sounds exactly like what you want to do, and as a wise MT once said, why have another frame type and, you know, another extensibility joint when you can just use a capsule type? And then we register whatever eight capsule types for all these WT frames, we call them WT capsules, and we move on. That would be my take.
N
So, just to make sure I understand: you're saying make it a capsule, register it, and don't worry about trying to make the types overlap with QUIC, because we're doing a separate parser anyway. So we're not super worried about making it look like QUIC; just make them be capsules and keep going.
E
So I'm not sure what the capsule protocol buys there. I'm not worried about copy-pasting or templating or whatever for the parsing of the type and length; it's more handling the specific types, which I already have code for in QUIC and would rather just reuse. That's a lot more code to copy and paste versus, like, getting out the type and length. Or, you know, if we decide we really want a length, then length-then-type would be my preference.
B
Just to echo what Ian and, I think, MT have been saying on the chat: given that there are like six frame types that all have on the order of two to three fields on average, reusing the QUIC frame parser sounds like trying to shoehorn something that doesn't fit. So I would very much push to not have that as a requirement, and then just say we have a different encoding of the frames.
B
Their semantics are the same, but we can have a completely different encoding. When it comes to processing these frames, all the hard part is the stream logic, the flow control, all that machinery, which you can reuse. The parsing is, like, you know, 0.1 percent of your code size at the end of the day. So I would very much push to drop trying to reuse the QUIC parser, and then things become a lot simpler, and it also avoids some kinds of confusion down the road.
N
I think that makes a lot of sense. So I'm hearing an overall lack of concern with having it not look just like QUIC, and people think it's a reasonable thing if it's a little bit different, and that we sidestep any of the negotiation issues by having a length in there and can just keep going. All right, thank you all. Next slide, please.
N
For our next issue, as David alluded to earlier: if we have this type and we're not sticking the length in front to try to do anything cutesy, the thing we were just looking at happens to kind of already be a capsule, and so, effectively.
N
My understanding of what we're doing here is saying: yeah, previously, when we defined these frames, we used the exact frame types from QUIC, but now we're going to just register them as capsules and keep going. So it sounds like this probably falls out from what we were doing before, but I thought we'd stop and talk about it just in case anybody feels really strongly one way or the other. So basically, assuming whatever is going on in MASQUE gets resolved and we come to an adequate consensus on where HTTP datagrams are going.
N
What this will end up looking like is: you make this connect stream, and then you start sending capsules, which are effectively just the WebTransport frames that we've been defining, and we're just gonna say one capsule is one WebTransport frame, effectively. Types are kind of in this more shared place, but otherwise we're good. And then the other question there is: do we make the WebTransport datagram be its own thing, or do we just use the actual datagram from HTTP datagram land? And I see Alan is in the queue.
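A capsule, as discussed in MASQUE, is just a type/length/value tuple built from QUIC-style varints, which is why "one capsule is one WebTransport frame" reduces to a serialization detail. A minimal sketch (the capsule type used in the test is invented, not a registered value):

```python
def encode_varint(v: int) -> bytes:
    """Encode a QUIC variable-length integer (RFC 9000, Section 16)."""
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    return (v | 0xC000000000000000).to_bytes(8, "big")

def encode_capsule(capsule_type: int, value: bytes) -> bytes:
    """A capsule is Type (varint) + Length (varint) + Value."""
    return encode_varint(capsule_type) + encode_varint(len(value)) + value
```

Registering the WT frame types as capsule types would mean the framing layer above is exactly this, shared with any other capsule-based protocol on the connect stream.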
E
I mean, I'm really not sure what value capsules bring here, even if the format is defined exactly the same. It now means we have to register the types for this protocol in a global place, when it's not like some other protocol is going to want to parse these frames in their capsule handling.
E
Nor would we really want to handle their capsules in ours. So even though they look the same, I don't think they need to be the same. And if we were saying that it's no big deal to copy and paste your QUIC parser, then it's definitely no big deal to copy and paste your capsule parser.
N
L
B
Well, so, not to open up the capsule discussion that we are having in MASQUE and that we will be having as part of the MASQUE HTTP datagrams design team, but in general I think a lot of this comes down to what your plans are for getting across intermediaries.
B
If, at the end of the day, let's say you have an h2-to-h1 path through an intermediary: are you saying that WebTransport needs to be reimplemented on each hop, as opposed to being able to go through intermediaries seamlessly? That kind of changes the outcome here, because the justification and motivation for having capsules as a shared infrastructure layer is the idea that they go through intermediaries unchanged, simply, and so that way.
B
If you get on that bandwagon, yeah, you use the general capsule registry, so you don't need to have your own IANA registry, which saves some work there. And then, you know, you put them in your stream, they get across to the other side, and you don't need to worry about things like intermediaries. But that comes down to: if we instead say WebTransport needs to be re-implemented for each intermediary, especially between h2 and h1, then that is a bit less attractive.
N
So, a clarification question there: we're not talking a whole ton about h1 specifically here, because we've been continuing to focus on h2 to keep this from expanding super rapidly as we type all this in. But if I weren't using capsules, and I sent a connect to an intermediary that then was doing h1 to the back end, and it made, ostensibly, a TCP connection somewhere and started sending my WebTransport frames: are you saying that's meaningfully different if we call those WebTransport frames capsules?
B
The meaningful difference is that, so first off, you're right that if we're just focusing on h2, it's not as bad. But if you have h2 to h2 with an intermediary in the middle: if you're using capsules, and, you know, I'm waving my magic wand and we have defined a generic capsule thing where you tell the intermediary in a simple way, oh yeah, this is a capsule protocol, then it will just be forwarding these to the corresponding connect stream on the upstream connection.
B
Then you get that forwarding property for free, whereas if you don't, then you need more, you know, WebTransport-specific handling at that layer.
N
Thank you, okay. We can also have this sit until the design team in MASQUE comes out with something, which is probably what we're going to want to do either way.
E
The only reason they would ever need to parse them is if they were actually participating in the protocol, because they have to understand it anyway. So again, I just don't think a capsule brings anything, even if they look exactly the same and you overlaid capsules here.
N
All right, so it sounds like we're gonna sit on this until we hear which way we're jumping over in MASQUE, either way. But let's take some of this discussion onto the issue and see if we can actually write it down. Some of what you were just saying, David, might be worth putting in the issue, so that we have something we can refer to later and say: okay, here's the kind of concrete difference between these approaches.
N
So there's the sequence of data frames that is this WebTransport session, and the lifetime of that connect stream, wherever you are on the spectrum, is going to determine the lifetime of this WebTransport session. But currently we have retracted everything back into that connect stream, and that makes a whole lot of things fall out rather nicely: flow control becomes fairly elegant, and a bunch of other stuff, pooling, is a non-issue. There's all sorts of things that are nice about that. But part of how we got here, through what we've been calling the WebTransport framework, is effectively that we can have a fallback or something like this, but we want to be able to use the native implementation of things.
N
We want to be able to send actual datagrams when we're over something where the transport does have support for that capability. So when you pack your WebTransport datagram frames into your h2 data frames and send them on your connect stream, they're not actually unreliable. They can be unreliable before they get there, and, you know, flow control has some interesting lack of application, so they may be dropped, but they're not going to get lost in a lost packet that is not retransmitted.
N
So, with that said, h2 does provide native streams, and if you take all of our words and don't really think too hard about it, you could assume: okay, well, if there are native streams, even if there aren't native datagrams, then we should split out every WebTransport stream into a native h2 stream. And the last bullet on this slide kind of goes to where we are right now, which is: we didn't actually do that, because there didn't seem to be significant benefit from doing so.
N
So if anybody feels strongly that we need to do that, now would be the time to speak up. But unless somebody can see a significant benefit for us doing that: there's obviously benefit in doing that for datagrams, but for streams themselves it doesn't seem like it really helps one way or the other, and there are all sorts of nice benefits to having them inside the connect stream.
N
So we have an issue in GitHub, and I think this is one of our last ones here that we're going to talk about, to write up how flow control works with this whole thing. I wanted to just kind of talk through that and make sure that everybody's comfortable with where we think we are and how this is going to interact with some of the pooling.
N
So you end up with four nested layers of flow control, which is rather a lot, but it does mean that each concept and each kind of scope, if you will, gets its own level of control, and you can do exactly what you want. I drew a picture for this on the next slide in case it's more interesting, so if we go there.
N
I added some pretty colors. So you effectively have flow control.
N
Where there's the h2 connection that's governing everything across all of h2. That's useful for saying, here's how much you can send across all the WebTransport sessions that are going on. Then there's the connect stream itself, and that one contains an entire WebTransport session and allows you to limit the amount of data for that whole WebTransport session, which is especially useful when we talk about pooling and things like that, right.
N
So those outer two boxes are the things that allow you to have multiple WebTransport sessions and express the relationships between them. Then the next two colors are within a WebTransport session: you have the kind of overall session-level flow control, which is analogous to what QUIC is doing, or a little bit like what h2 is doing, which allows you to control the total across all of the WebTransport streams, and then each stream obviously has its own flow control.
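The four nested scopes described above can be modeled as a chain of byte-count windows that a send has to clear all at once. A toy sketch of that accounting (the limits and the all-or-nothing charging policy are illustrative, not taken from the drafts):

```python
class Window:
    """One flow-control scope: a byte limit plus bytes consumed so far."""
    def __init__(self, limit: int):
        self.limit, self.used = limit, 0

def try_send(n: int, *scopes: Window) -> bool:
    """Charge n bytes against every nested scope, or none of them.

    Scopes, outermost first: the h2 connection, the connect stream,
    the WebTransport session, and the individual WebTransport stream.
    """
    if any(w.used + n > w.limit for w in scopes):
        return False                 # blocked by at least one layer
    for w in scopes:
        w.used += n
    return True
```

The point of the layering is visible here: exhausting the innermost stream window blocks that one stream, while the outer windows let an endpoint bound a whole session, or all pooled sessions, without touching the per-stream limits.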
N
So that ends up being a lot of flow control, but along with all of that possibility to potentially screw things up, you also have the ability to control all of these relationships between the different things. You can have multiple WebTransport sessions and they can all be pooled, and you have limits on the number of streams that can be established, both within WebTransport streams and within h2 for the connect streams. So without a whole lot of extra effort, the existing model that you're using for h2 and for QUIC applies, respectively.
N
Here, and much of the pooling questions become moot, and we continue on our merry way.
N
So we have an issue on GitHub for doing some error handling, to which the initial response is: yeah, we should probably do that. In the existing draft we did not import all of the error handling from QUIC that goes along with these frames, and so we have an outstanding item of needing to import that over, but we wanted to talk a little bit about the shape of how that should look.
O
Ian, it seems like this same question is very relevant to the h3 mapping. So, as much as possible, shouldn't we make sure our answers for the two are similar? Or should they not be similar? I don't know. Yeah, thoughts.
N
D
Well, there are a couple of things you might want here. One is that things have gone poorly and you want to tear the whole thing down, which is the thing you send before you close the stream. The h3 WebTransport thing already has that; we should have the same thing, and if we're using the same format, then we should use the same message. It just makes it easier to forward these things, right, they're end-to-end signals. Again, the same applies to the go-away discussion that we had previously.
D
So let's work out what the requirements look like, see what Alan comes up with with respect to go-away, and then we'll take it or not.
N
So there's some text in the h3 document right now, or at least the version I was looking at when doing some of this, where we take an error code and apply a transform to it to get our actual error code, because we needed to put those errors in the same registry space as the other h3 ones.
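As a sketch of what such a transform looks like: reserve a contiguous block in the shared h3 error-code space and map each application error code into it, stepping over periodic reserved codepoints. The base constant and the 0x1e spacing below are assumptions for illustration, not quoted from the draft text:

```python
BASE = 0x52e4a40fa8db   # hypothetical start of a reserved block in the
                        # shared h3 error-code space (assumed constant)

def to_h3_error(app_code: int) -> int:
    """Map a 32-bit application error code into the reserved block.

    Every 0x1f-th codepoint in the block is skipped so encoded values
    never land on a reserved/GREASE codepoint (spacing assumed here).
    """
    return BASE + app_code + app_code // 0x1e
```

The inverse transform on the receiving side subtracts the base and undoes the skip, which is why both ends have to agree on the exact constants the registry text pins down.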
G
Yes, so I guess the question I would have is: does this issue cover both streams and sessions?
N
That's not something that we've been super clear about, but I think the short answer is yes: we need to be able to remove a stream, and we need to be able to say, this is very bad, you did something that doesn't align with the spec or broke a requirement that I had, and the whole thing is over and we're not going to be talking anymore.
N
So this issue: I think the original intent of the issue was the "I need to tear down the session" case, but we need to have the ability for people to do both. We've got that pretty well spelled out, I think, for streams, but obviously we shouldn't jump strongly in one direction for streams and strongly in a different direction for the session.
N
Part of what we just did in moving to this more layered model, where we're using WebTransport frames on top of HTTP/2 rather than adding additional frames into HTTP/2, is that we have now defined a way that you can use HTTP semantics, with ostensibly any HTTP version, not just HTTP/2, to communicate via WebTransport frames with the other side. We've talked about how that works really nicely with flow control when everything is retracted back into that connect stream, or that connect exchange message. And we've also talked, but not actually done a lot, about being able to split out, to use native features like datagrams in h3, and maybe not doing that for streams in h2.
N
But you might choose to do that for streams in h3, because you wanted some of the benefits that h3 provides around blocking between streams and that sort of thing, which doesn't exist for h2. And so the question we come back to is: should all of the mappings of WebTransport be doing the same thing?
N
I'll try to capture a little bit of why we might want to do this or why we might not, but I think there are kind of two things going on that are a little bit in conflict. One of them is that, if you are going through an intermediary, being able to do this, using HTTP semantics to talk to the remote end, allows you to ignore much of the pooling questions.
N
It allows you to go through an intermediary really nicely, because you're just sending WebTransport frames. In h2 you're just sending those inside of data frames, but your intermediary can go from h2 to h1 to h3 and back again, and you don't really need to care.
N
I mentioned already that pooling becomes much less of an issue, because all of the limits about how you handle streams are fine here. But then there are also some drawbacks to this approach, which are: if you've retracted everything inside of this connect stream, pooling isn't an issue, but that's at odds with splitting out datagrams and splitting out streams. And if you split out the datagrams and just leave out the streams.
N
Now you have to deal with the pooling for them again. And the real thing that I think gave a lot of folks pause is: how do we negotiate which one you're doing? Like, if I'm implementing WebTransport over h3, do I suddenly need to be able to handle WebTransport frames coming in on a new h3 stream and on the original connect stream that's carrying the lifetime of the WebTransport session? That's kind of annoying. And could the same WebTransport session?
N
Yeah, so if we go back one, just so we have our kind of pros-and-cons thing: where do we want to go with this? Like, we don't have to touch h3 at all, but there is some architectural elegance to having the same thing that is version independent, where the mapping of it onto h3 provides these native capabilities, and it happens to solve a number of other issues.
G
My take is that we should not require servers to implement WebTransport over h2 if the only protocol they're interested in is WebTransport over h3. And the reason for that is there are servers that would be interested in WebTransport over h3, or in WebTransport in general, only if they can rely on QUIC performance guarantees. That would mean they're not really interested in other versions of WebTransport, and this would effectively require them to implement twice the code.
E
Yeah. Also, if they're the same protocol and h3 receives the frames that are only allowed on h2, then it has to have sort of error checks, to say: no, you can't send a WT stream frame, this is h3, sorry. I mean, right, so right now we would only have one shared message between what you can send on the h2 connect stream and h3, which is the close-WebTransport-session one. Maybe there'll be go-away.
N
Yeah, I think that could make some sense. Martin's proposal is kind of attractive, which is: you could almost do a slightly modified version of that. You could almost do the same, or at least, where they overlap, the same, set of things, but require them to be split out into streams and require them to be split out into datagrams when you're operating over h3. And I think that would address Victor's point, which is that at that point we wouldn't be asking people to have to implement both; you would implement.
N
One thing, which was: this is how I speak WebTransport over HTTP semantics. If you chose to only support h3, that would be totally okay; you would send those things over HTTP/3 streams and over h3 datagrams. And if you chose to also support h2, you would take effectively the same things and bundle them up in a connect stream in data frames, and you'd have that for free. But we wouldn't be asking people to actually support two things.
N
I
think
architecturally
in
words,
that
sounds
really
nice,
but
in
practice
allen
just
pointed
out.
You
know
it
isn't
a
100
overlap,
because
a
lot
of
the
things
that
we're
trying
to
provide
with
the
web
transport
frames
over
these
data
frames
in
h2
are
things
that
are
already
available
natively
in
h3
anyway,
and
so
you
know,
we
could
write
a
lot
of
text
about
how
that's
kind
of
a
null
mapping
and
it's
all
really
the
same
thing
underneath.
N
But at that point we're just calling it that because it sounds nice. They could just remain separate things that don't actually overlap, and I don't know that that would upset anybody.
G
So, conceptually, the way I see WebTransport is: well, the overview draft is called the WebTransport protocol framework, and what WebTransport fundamentally provides is a model of how you interact with a remote server, and that model is fundamentally based on QUIC. That is to say, my idea of how this works.
G
When I started writing the WebTransport overview and the WebTransport-over-h3 draft, when we first adopted them, was that there is this model where WebTransport has streams and datagrams, and then WebTransport over HTTP/3 and WebTransport over HTTP/2 are implementations of that model. But the level of abstraction is not this frame remapping, which is just something that kind of naturally emerged for those two cases; the level of abstraction is fundamentally the streams-and-datagrams model. And I think there is value in that, because on some conceptual level it is a very clean model.
E
So I think, if we said that you weren't allowed to speak the h2 version over h3, the feature you might lose is: if, for some reason, you had a client that was talking h2 to an intermediary, which must have advertised WebTransport-over-h2 support but didn't really want to go to the trouble of looking inside the data stream, and it's speaking h3 upstream to a WebTransport-capable h3 server, now that proxy has to do the translation.
E
It's kind of what Martin pointed out in the chat: it has to parse that data stream, pull all the things out, and translate them for that upstream h3.
N
Sounds like what we're saying is that we're totally okay if these don't look like the same thing, and we'd let any intermediaries deal with translating between the two. And pooling works great right up until your intermediary, because you have a bunch of different connect streams.
N
Assuming that h2 is the leg that faces the client, then upstream, depending on what's going on with pooling there and how we choose to solve that in h3, it's possible that the intermediary needs to make multiple h3 connections upstream. And vice versa: you could have a client that's speaking h3 to an intermediary and would make multiple h3 connections, and then those might be shared over a single h2 upstream, which seems not unlike how we do other things.
E
Just one thing that brought to mind: the stream limits can just be different on different sides of these things, and this is not an h2-to-h3-specific problem. It exists even with pure h3 or pure h2 intermediaries, and it was sort of true in h2 sometimes with push, where you'd have, like, an upstream server that was allowed to create a push stream.
B
So thanks, everyone, for coming to WebTrans. We had some good discussion on quite a few issues. We do realize that we are a bit blocked on the MASQUE work; yours truly will work hard to resolve that as quickly as possible. But otherwise, yeah, thanks everyone for coming. We're making good progress, and we're looking forward to having this in a shape where we don't move quite as much. Bernard, any other words you want to add?
C
Yeah, just to note that WebTransport is available now in Chromium, so this isn't just an academic discussion; you can actually play with it. And, in particular, for some of the media-over-QUIC stuff, we have all the APIs now for people to actually build things.
B
Yep, definitely. Spencer, go ahead.
A
I just wanted to say that James and I have kept pretty close notes of the discussion, but if people could take a look at them and see if we're massively misquoting anything.
B
Cool, thanks. Thank you, Spencer. All right, and we wanted to especially thank Alan for the Jabber scribing, which wasn't required today, but thanks for keeping an eye on that. And thanks also to James and Spencer for the work on the minutes; I've been keeping an eye on them and they look absolutely great. Thank you so much for taking the time to do that. We really appreciate it.
B
Okay, that's all, folks, and we'll see you on the mailing list to keep discussing these issues. Otherwise, see you in March at the next IETF, if we don't have an interim before then. Okay, thanks, everybody. Bye.