From YouTube: IETF109-WEBTRANS-20201116-0900
Description
WEBTRANS meeting session at IETF109
2020/11/16 0900
https://datatracker.ietf.org/meeting/109/proceedings/
B: Let me, let me see here. I'll enable video, okay, and go back to the slides.

A: So, the session is being recorded, and if you're here you're automatically on the blue sheets. Please join the session Jabber room, and use headphones (although I'm not doing that; I'm using an echo-cancelling speakerphone), and state your full name before speaking, so we can get you into the minutes. David, do you want to give a few tips?
C: So the Meetecho tool makes sense, but it's not necessarily very intuitive, so you need to do a few separate things. If you want to speak, you need to first enter the queue, and then the chairs will call on you at some point. You'll have to manually leave the queue yourself when you're done speaking. Also, once we call on you, we'll just say something; we don't have a button to press to let you in like last time, so you need to enable your audio yourself by unmuting.
C: So that's the button here, and then when you're done, you stop by saying you're done, and then you mute and we'll move on to the next person. We encourage folks to use their camera if they're comfortable; it's a lot easier to understand each other, especially across different accents and everything, when you can see the person you're talking to, but that's not a requirement by any means. Yeah, also...
A: Okay, the Note Well, just a reminder of IETF policies; you've probably heard this several times already if you've been attending other meetings. By participating, you agree to follow IETF processes and policies; the definitive information is in the document listed below and the other BCPs. If you want more info, please talk to the working group chairs or the ADs.
A
So
at
this
meeting
the
agenda
is
posted
and
it's
up
to
date
we
will
want
a
volunteer
for
notetaker.
I
believe
we
have
a
volunteer
for
jabber
scribe,
but
we
will
want
somebody
to
take
notes.
We
have
the
kodi
md
at
this
link,
so
it's
easy
to
do,
but
we
do
need
a
volunteer,
so
we
can
keep
notes
for
what
is
about
to
transpire.
A: Yeah, if it's working the way we think it is, you should see the agenda in the tool when you open it; at least, that's what I think I put there.
A: We really appreciate it, we really do. Okay, so here is the agenda we had. I have the preliminaries, which we've just talked about; we should probably do the agenda bash as well. David will handle the queue, and then we're going to have Will Law from the W3C WebTransport working group give us a little update.

A: Luke will give us some developer feedback from the WebTransport origin trial. Then we'll have Victor do the overview and requirements, Eric will do HTTP/2, and then Victor will do the QUIC and HTTP/3 drafts, and then we'll have the wrap-up and, hopefully, a bunch of time for discussion during this session.
A: Well, if you're there, you need to press the audio tool to unmute yourself.
A: Okay, well, I can give his slide probably well enough for now, so I will do it. The W3C WebTransport working group has been established: the charter has been published, the working group has been created, and all that. They did hold meetings during the W3C TPAC, two two-hour meetings, and now they have set up bi-weekly meetings at an alternating time slot of 7 a.m. or 4 p.m. Pacific. It's all on the W3C WebTransport wiki, so you can go and find out all about it.
A: There are lots of issues that have been filed by various people, including folks that are part of this meeting, so work is underway, and if you're interested, please participate. All right, so now we're going to have some developer feedback: Luke will tell us a little bit about what he's experienced, and I also have a few observations to make. So, Luke, I'd like to hand the floor to you.
G: So, I'm Luke. I am a software engineer at Twitch, and also Amazon; we were purchased a while ago and I always forget about that. But yeah, we've been messing around with WebTransport quite a bit and have some things I want to talk about.
A: Everybody, Luke's going to be giving feedback on the WebTransport origin trial; here's some basic info about it. Basically, if you've got Chrome or Edge up to M88, like the Canary, you can play with it. You should probably, I guess, go for the latest API; if you're just working from the API draft, I guess M87 or later is a good thing. And there's some various info on the web to get more experience with the API. But go ahead, Luke.
G: Yeah, so to that extent as well: there's a few hurdles to get the API going, but the stream part of the QuicTransport API works somewhat well (I'll go into some of the specific bugs), and the datagram component works as well. You have to do stuff with self-signed certs, but there are guides out there, which is great. So yeah, just some heads up.
G: Why are we trying to use QuicTransport? The first problem we're trying to deal with is head-of-line blocking. Surprise, surprise: it's kind of why QUIC was designed to power HTTP/3, because, as we've noticed, when you deliver a lot of media, it's usually sequential.
G: You know, like we're streaming this video call, or you'd be streaming, watching a video download, and any time you have congestion, it just causes the queue to back up. Either when you're contributing to a website, like you're creating a new stream (we use RTMP at Twitch), where congestion causes back pressure, which increases latency; and then on my half, I work on the distribution team, like the CDN team.
G: We run a low-latency live video CDN, and again, any time your network sputters just a little bit, it causes a roadblock and increases latency. So we really want to use QUIC, more than anything, to investigate how to solve this, and there are some interesting ramifications in how to do it. So, next slide.
G: The crux of our idea for how to do low-latency video is that we really want to break video into components such that, if there is congestion, they're independent of each other. So we want multiplexing, we want to have multiple requests in flight, and, I think more critically, we want to prioritize such that, if there is congestion, we make sure the important data gets there first. So we're using QuicTransport.
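[Editor's note: the multiplex-and-prioritize idea described above can be sketched as a toy scheduler: independent chunks are queued with a priority, and when congestion limits the send budget, the important data drains first. This is an illustrative Python sketch under assumed priority numbers and chunk sizes, not Twitch's implementation.]

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Chunk:
    priority: int              # lower number = more important (e.g. audio=0, video=1)
    seq: int                   # tie-breaker preserving arrival order
    data: bytes = field(compare=False)

class PrioritySender:
    """Queue media chunks on independent logical streams and, when the
    congestion controller only allows a limited budget, drain the most
    important chunks first."""

    def __init__(self):
        self._heap = []
        self._seq = 0

    def queue(self, priority, data):
        heapq.heappush(self._heap, Chunk(priority, self._seq, data))
        self._seq += 1

    def send(self, budget_bytes):
        """Return the chunks that fit in the congestion budget, most
        important first; the rest stay queued for the next round."""
        sent = []
        while self._heap and len(self._heap[0].data) <= budget_bytes:
            chunk = heapq.heappop(self._heap)
            budget_bytes -= len(chunk.data)
            sent.append(chunk.data)
        return sent
```

With this shape, a congestion event simply shrinks the budget passed to `send`, and low-priority video chunks wait while control or audio data goes out.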
G: I went through and wrote my own QUIC implementation, because we're a Go shop and the existing QUIC wasn't there. We're right now running experiments for employees using the origin trial, and we want to try to hit some production traffic early next year to see how it works. Next slide.
G: You have to go into the area of, like, HTTP push, and you have to start mapping: well, I'm going to have to make the client do these requests, in this order, at this frequency. It also gets difficult when you want to have the same API for contributing and distributing video; you have to deal with firewalls, so if the client is pushing video, it's a different API than the client downloading video.
G: So it's really nice to have a QUIC-based API where it's just arbitrary streams: it doesn't matter who initiates, it doesn't matter who's receiving, the API is symmetric in that regard. And QuicTransport is so simple. We don't need connection pooling like Http3Transport, and there's almost no point having fallback Http2Transport support, because we just don't want to work with TCP.
G: We want to use QUIC, and it's also nice that datagrams are a potential fallback. If we do run into issues with, let's just say, the QUIC implementation, it's always nice knowing that you have this Swiss army knife of being able to do it yourself. So, next slide.
G: I've gone through and filed quite a few bugs against the Chrome implementation, which is great. More or less, the state machine just needed to be able to handle a few cases better. A lot of stuff where really the only case that worked out of the box was remotely-initiated unidirectional streams; those were great, but otherwise there were always little subtle issues with every other configuration. So here's just a few tickets, if you want to go through them; I think most of them are being worked on.
G: Some of them are fixed. I know the one about using an entire CPU core is fixed; that was just a busy loop in the wake-up logic. But we've run into a few things like half-closed streams, for example: should the WebTransport API require half-close, should it just be vague, or does it defer to QUIC? Just a few things that needed the draft to mention them explicitly, so that implementations don't miss them.
G: So this is a little feature bucket list at the very end, but: prioritization. We're really concerned about congestion control, and I know it's very hard to do, like a way to pick things out. And some issues with datagrams: if you try to send a high rate of datagrams, you run into fighting with the underlying congestion control mechanism, which you also don't have control over. But yeah, that's it.
G: I think I've given more fine-grained feedback by posting issues, at least on GitHub. And yeah, if anybody has any specific questions as well about what the origin trial is like... but it's been great so far. Just being able to run QuicTransport in the browser has been a lifesaver, even before HTTP/3 support is, you know, ready.
A: Okay, thank you, Luke. I just wanted to give a little bit of feedback. This is the requirements and use cases document that the W3C is developing. I'm not going to talk about all of these, but just make an observation about a few of these use cases.
A: In particular, I wanted to talk a little bit about low-latency streaming use cases, such as game streaming and remote desktop, and then a few of what I would call large-scale events use cases.
A: These are for a lot of participants, at least ten thousand: things like company meetings, concerts, political gatherings, sporting events, stuff like this. The distinction between the two is that the large-scale events generally don't have the same tight latency requirements; for many of them, a second of delay is okay, whereas low-latency streaming needs as low a latency as possible, because you're talking about things that could be disrupted by additional latency. So, a little bit on the low-latency streaming use cases and the primary ones.
A: One thing I think it's important to understand, and this is true of any use case, is whether it's a greenfield use case, or whether people have something they're already doing it with; and if they do, you have to understand what the bar is to get them to move from whatever they're doing now to what you want them to move to. So in this particular case, for these things, we see the developers are using WebRTC today: the data channel, with RTP audio and video.
A: So, in the case of remote desktop, typically for the audio and video they'll use WebRTC screen sharing, and then the data channel for the keyboard and mouse events and control. For gaming, we see client-server game streaming, which is done exclusively with the data channel, so they send the audio, video, and the control traffic, the keyboard and mouse stuff, over the data channel.
A: We've also seen gaming done with RTP audio and video, with the data channel being used for the keyboard and mouse as well. And I would mention that there's a number of developers who use both client-server and peer-to-peer use cases. So, for example, you could stream a game from the cloud, but you could also stream it from your game console to a mobile device; or the remote desktop could be a remote desktop in the cloud, or you could just be doing remote desktop with your own desktop, for example. So a bunch of these developers are supporting both client-server and peer-to-peer, and they like to have a single code base: they don't want to have to write a completely different implementation for peer-to-peer versus client-server, and that's one of the reasons they like WebRTC, because it kind of gives them both. And yes, for the client-server case...
A: They need ICE, but it's more important to them, in general, to have the ability to write one code base than it is to get rid of ICE. So, I know one of the things we pitched about WebTransport is that we get rid of ICE, and the answer from a bunch of these developers is: I'll keep the ICE if I can keep doing the peer-to-peer. So a lot of these folks have a strong interest in the peer-to-peer extension to WebTransport as well. A few other things:
A: Some of these applications are pretty high performance, like the game streaming, and what I mean by this is that the streams themselves may not be huge in terms of bandwidth, but often the things that you're streaming to are not very high performance; it can be, for example, an older game console or something that's not very powerful.
A: So some of these folks were having issues with the data channel because of the way the JavaScript back pressure is implemented in the data channel, so offering them a potentially better back pressure was a very attractive aspect of WebTransport. And, as Luke just mentioned, for reliable WebTransport it does solve a lot of the back pressure problems kind of nicely, so you don't have to have, basically, what we saw with the data channels.
A: They were doing a lot of application-layer acknowledgement; but for datagrams, you still need the app-layer acknowledgement. So in some ways, for developers who are using datagrams in this scenario, they're not seeing a lower level of complexity with WebTransport. They were kind of hoping that WebTransport would let them get rid of the application-layer acking that they were doing to make the data channel function, but they're not seeing that advantage; they still have to do their application-layer acking.
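[Editor's note: the application-layer acking described above amounts to tracking which datagrams the peer has confirmed and retransmitting the rest after a timeout. A minimal sketch, with a made-up sequence-number scheme and retransmit timeout; real applications would also bound retries and age out stale data.]

```python
class DatagramAckTracker:
    """Track unacknowledged application datagrams; anything not acked
    within the retransmit timeout is offered up for resending."""

    def __init__(self, rto=0.25):
        self.rto = rto          # retransmit timeout in seconds (illustrative)
        self.inflight = {}      # seq -> (payload, send_time)

    def on_send(self, seq, payload, now):
        self.inflight[seq] = (payload, now)

    def on_app_ack(self, seq):
        # Called when the peer's application-layer ack arrives.
        self.inflight.pop(seq, None)

    def due_for_retransmit(self, now):
        return sorted(seq for seq, (_, sent) in self.inflight.items()
                      if now - sent >= self.rto)
```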
A: The other thing is that they're asking about just some basic devops issues. For example, say they bring down a gaming server: is there a way to migrate these WebTransport connections to another server? So there's just some practical aspects of clustering and draining that they wanted to understand how to do better.
A: I've also heard about bandwidth allocation. It's a little bit different, maybe, than Luke's prioritization, but they wanted to be able to allocate bandwidth between the datagram flows and the streams. The reason is that the datagrams are used for control traffic, so they don't want them to be choked off; they don't want the audio and video to take so much of the bandwidth that the control traffic, for example, experiences delays.
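[Editor's note: that kind of split, a guaranteed floor for the control flow with media taking the remainder, can be expressed as a tiny allocation rule. The units (bits per second), the numbers, and the policy itself are illustrative assumptions, not anything specified in the drafts.]

```python
def allocate_bandwidth(total, control_floor, control_demand, media_demand):
    """Split a link budget so the control (datagram) flow keeps a minimum
    share even when media demand would otherwise swallow the link."""
    # Guarantee the control flow its floor (or its demand, if smaller).
    control = min(control_demand, min(control_floor, total))
    # Media gets whatever is left, up to its demand.
    media = min(media_demand, total - control)
    # Any unused capacity can go back to control if it still wants more.
    control = min(control_demand, control + (total - control - media))
    return control, media
```

For example, with a 10 kbps link, a 1 kbps control floor, and media demanding far more than the link, control still keeps its floor rather than being choked off.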
A: So that's the kind of thing they were after. I don't think they cared about priority in the sense of DSCP or anything like that, just making sure that their control traffic had a certain minimum bandwidth.
A: Okay, a little bit on the event streaming use case. Here there was more interest in Http3Transport; as I mentioned, for the other ones it was QuicTransport. For the folks I've talked to here, the existing solution is HLS, so that's what they consider to be the incumbent.
A: Also, would it be widely supported in browsers? Of course, some of these questions I can't really answer right now, but they were a little bit concerned about the complexity of the protocol and the pooling issues with HTTP/3, and whether it would interoperate, and I'll talk a little bit more about that. But QuicTransport didn't worry them at all for the other cases, because it's so simple; it's like, yeah...
A: We think this is very likely to interoperate and to be the kind of thing people could implement widely. But with HTTP/3 they weren't so sure, and then they went talking to their server vendors and got some disturbing feedback, which is that most of the HTTP/3 server vendors were not planning to implement Http3Transport. So that kind of meant:
A: If they were using a stock kind of server, they would have to implement their own server combination of HTTP/3 and Http3Transport, probably within a framework. And then the folks who were using Node discovered, oops, there's no QUIC module that's going to be available in any reasonable amount of time, due to, I guess, some blocking issues with OpenSSL; so they're kind of stuck with Python and aioquic, and for the people who are comfortable with Python,
A: that wasn't a really big deal, but if they weren't, it was. And then they would ask questions like: will Http3Transport be supported widely by CDNs? You know, hard to answer that. And then there were a few folks doing this event streaming for, like, company events, who are interested in enterprise, and this got interesting, because a lot of these enterprises think of HTTP as the thing they use, as a transport of last resort.
A: So, you know, a lot of enterprises block UDP traffic; they don't even necessarily allow TCP on all ports. And the problem is, in that kind of enterprise, it's likely that QUIC wouldn't work. So we talked a little bit and said, well, we can do this HTTP/2 failover, and the problem there was: if you're an enterprise that's that tightly controlled, you probably don't support HTTP/2 either, so it wasn't clear that Http2Transport would really help them much with this enterprise problem.

A: Things are very different for the folks who are more on the consumer side, because they don't encounter a lot of these crazy firewalls, but this was just something out there. So: a bunch of questions among the customer base about Http3Transport, how much take-up it would get in the ecosystem, and also interoperability, due to complexity.
C: For the next session, let's drain the queue. So, I have something to say as well: speaking as chair, one of our main goals for this session today... oh, sorry, I forgot to enable the camera; there we go.
C: One of our main goals for the session today is kind of getting out of the "transport zoo," as we've been calling it: making some decisions, in terms of all the possible transports we have, about which ones we want to actually focus on and which ones we want to work on. But with my chair hat on, I have some clarification
C: questions for Luke. So, yeah, perfect, cool. The first one: you mentioned that you weren't interested in an H2 fallback. So, just out of curiosity, on a network that blocks UDP and QUIC, what would you use instead? Or would you just say, "my feature doesn't work"? That's also a reasonable option; I'm just curious.
G: Yeah, so we use HLS for distribution, and we would fall back to HLS as well, for third-party CDN support, if our first-party CDN is overwhelmed. So we're mostly looking at QUIC as kind of extra functionality, and if it doesn't work, we'll just fall back to, you know, the backup.
G: That being said, we could technically do our approach with TCP, but then you just have fighting TCP flows and requests, so it's just easier to require QUIC.
C: Well, but I guess my question is: if QUIC isn't there, you need something. You can't require QUIC, because some users don't have access to it, and so then either you have a fallback underneath the JavaScript layer, which could be H2 transport, or you manually fall back, at your layer, to something else like HLS. I think they're both reasonable options.
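[Editor's note: the two fallback shapes contrasted here, a transport-level fallback under the JavaScript API versus an application-level fallback like HLS, boil down to a preference order. A hypothetical sketch of that decision; the inputs and names are assumptions, not any specified behavior.]

```python
def pick_transport(quic_reachable, h2_transport_available):
    """Preference order sketched in the discussion: QuicTransport when
    UDP/QUIC gets through; otherwise an H2-based WebTransport if the
    stack offers one; otherwise fall back at the application layer
    to the incumbent (HLS, in Twitch's case)."""
    if quic_reachable:
        return "quictransport"
    if h2_transport_available:
        return "http2transport"
    return "hls"
```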
G: Oh, so if we did H2 transport, I think the performance would actually be worse than falling back to a different protocol, based on the way we're prioritizing data. Because we expect the prioritization to take effect, but if, under the hood, we're using TCP and, you know, the kernel doesn't prioritize, we're just going to have more buffering.
A: It's a little hard to explain, but yeah. I heard that same approach, David, which is that they would just fall back to HLS.
C: Yeah, no, I think, especially for use cases that are already on the market or in production today, it's natural to fall back to what they already have built. That makes sense. And then I had a second question for you, Luke, which was: you mentioned QuicTransport and preferring it to Http3Transport. One of the main questions we're trying to answer today is: do we want to build one of them? Do we want to build both of them? Do we want to build...
G: Muted; oh, go ahead. I saw it on the mailing list and I've left some feedback, but it's really just that Http3Transport is more complicated to implement for, in our case, no benefit, because our web server doesn't handle H3 requests.
G: It would only create this WebTransport connection, so it kind of seems redundant to also have H3 support; my implementation doesn't even support H3. So it's more like, is the complexity worth it? And for us, it's like, no.
C: Okay, so from your perspective, if I'm understanding correctly: you're building your own stack from scratch, and in that scenario, having to also build H3 is more work for limited benefit. The main value-add would be pooling, and if you're not using it, you're taking on more complexity with no benefit. Cool, thanks; that makes sense. And...
A: Yeah, also, David: for the low-latency streaming cases, they liked QuicTransport because they could conceivably get the peer-to-peer without building an HTTP/3 server into their browser or their backend. So that's why they like QuicTransport for those.
A: Anyone else in the queue? Okay, I think, Victor, you're up.
F: Okay, so my main question, I guess, is this: do I understand correctly that you serve video directly, and there is no reverse proxy or any other intermediation in your use case? Because one of the reasons people are interested in Http3Transport is for things like priorities.
F: You can prioritize on your own socket, but if you have priorities that need to cross a reverse proxy, you would have to communicate that, and HTTP provides a way. So, if you do not have any form of reverse proxy in front of your video server, I guess you're not interested; that's why I'm asking.
G: So, we would use any form of H3 prioritization as an alternative, but yeah, right now we assign users to a host directly. If you go to Twitch and look at the network tab, you'll see that when you start a video session, you're assigned a specific host name, and that host is then able to prioritize traffic. If it came from two hosts, then we couldn't do it.
G: Yep, yeah, we don't have any load balancers; we push a little too much data to have load balancers. We have an application load-balancing tier, kind of out of band.
C: Hey, everyone: the chairs completely messed up the queue management here, so hold on, Bernard, let me handle the queue. So we have five more people in it; well, actually four. Ian Swett, you're next. Cool.
I: Thank you, David. So, just FYI: I think it's very likely that BBR2 will become the default congestion control in QUIC at the end of Q1 2021; it's very close to done after a lot of work, so I am fairly optimistic about that. I do think it is a much better congestion controller than either BBR1 or Cubic.
I: Yes, sorry, Victor, to clarify: yes, in the Google QUIC slash Chromium implementation. I cannot guarantee it for anyone else, obviously, but, you know, if you wanted to, you can just yank whatever we have at the end of Q1.
I: I highly support the direction of making a decision on these transports, because I think moving forward is important. On the fallback issue: based on my discussions with YouTube, they also do not need a fallback to TCP, because if they were to have a fallback, they would actually prefer to use HTTP/1.1 and their current solution.
I: That being said, as long as they can say "I only want to use QuicTransport if I actually get QuicTransport," I think they would be happy to use QuicTransport. Because they're importing our stack anyway, the implementation cost is low, but they would definitely not want to fall back to H2 transport; they would only want to fall back to HTTP/1.1.
J: Hello, can you hear me? Yes? Great. Thanks for the presentation, Luke; it was good to see some more detail on some of the stuff you've been working on. I've been following some of the issues, some of which I experienced myself, as I was playing with WebTransport.
J: I basically borrowed heavily from Google's example client code, but then wrote my own WebTransport server on top of our existing stack, and that was really an experiment to see, for an existing stack that obviously leverages datagrams, how much work it would take. When I started this work, some of that was changes to the library to support datagrams, and I kind of expected WebTransport might take some changes to the library too, but in the end it didn't: I was able to just modify our example client and server to speak QuicTransport.
J: Let's put it that way; it was quite low impact. I suspect, for me, adding Http3Transport to that toy server would have been quite simple too, because all the existing HTTP/3 heavy lifting would have been there; but then the fact that I didn't do that is maybe telling, in that it's kind of a bit superfluous.
J: Again, I didn't need connection pooling in this toy example; all I was doing was echoing something back, so it's not as complicated as video streaming or anything. But you can see how somebody who's coming kind of new into QUIC would want to do the easiest thing, in some respects. So, yeah.
J: I think some of the questions I had have been answered, in terms of, Luke, what work it took on the server side for you to implement this stuff; but it seems that you're quite targeted towards this use case, so I think I already got the answer there. So, yeah, cheers for sharing, and I do agree that it would be good for us to come to some answer on what people want from either QuicTransport or Http3Transport, and how the fallbacks all...
K: It's really nice to have some more concrete uses of, you know, "here's what I'm actually doing, and how I need to send it, and what that means."
K: I don't want to say that's a feature, not a bug, but I thought one of the benefits that people had been talking about previously is being able to share the congestion control context with the other data that you're potentially sending to that same endpoint, under the idea that much of that data is almost certainly taking the same, if not a very similar, route, and therefore a congestion controller can appropriately manage the amount of data that you're shoving through. It'd be really interesting to see if this holds up in practice, or if it actually turns out that it's more annoying than it is helpful.
K: So it'd be really interesting to see if we can find any data about that, or if it's just, you know, "hey, I've got a congestion controller built into my thing right now, and so it's annoying to have this mode where it's not there." The only other thought that kind of comes with that is about the application-layer acks.
K: I think folks were mentioning in the chat that, in theory, the datagrams are acked, and so if you had an API that exposed that to you, you wouldn't need application-layer acks anymore. And from that perspective, doing your own QUIC implementation seems a lot harder to me than either using someone else's QUIC or H3 implementation.
G: We haven't gone down this route, but one of the problems with control with datagrams is that you don't know when the packets could be dropped by WebTransport. So you have to implement your own congestion control on top, and then you kind of have almost two algorithms, and you take the minimum of both of them.
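[Editor's note: "taking the minimum of both of them" looks roughly like this: whatever the app-level controller allows, the browser's own window still caps, so the usable budget is governed by the more conservative of the two. Illustrative Python; the byte counts are made up, and real controllers expose more state than a single window.]

```python
def effective_send_budget(browser_cwnd_bytes, app_cwnd_bytes, bytes_in_flight):
    """When an application stacks its own congestion controller on top of
    the browser's (because the API exposes neither loss nor back pressure),
    the sendable budget is limited by the smaller of the two windows."""
    budget = min(browser_cwnd_bytes, app_cwnd_bytes) - bytes_in_flight
    return max(budget, 0)
```

This is why the stacked arrangement tends toward low throughput: the two controllers never agree, and the pessimistic one always wins.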
G: So if, for example, Chrome is using BBR for sending, and then you're using Cubic on top of that to avoid overwhelming the WebTransport API, you just have really low throughput. So it would be great to have the low level either give some back pressure or expose some underlying state. And on the duplicate acks:
G: I didn't realize this until we talked about it on GitHub, but because datagrams don't have flow control, there are definitely states where the QUIC packet could be acked, yet the datagram was never actually delivered to the application. So I don't think you can use the underlying acks of the datagrams, unfortunately. Thank you; that's a...
G: Thank you, yeah. I ran into this with data channels in WebRTC as well, with SCTP, where I just wanted to reuse the existing acks under the hood, but I had to build my own on top of it, and it just caused double the packets; but it was unavoidable.
C: All right, sorry, folks: I cut the queue after Alan, which I'm hoping everyone heard, because I wasn't sure; I might have been muted at the time. And Daniel, I see your video is on; if you want to speak, please join the queue in Meetecho. Or... oh, you're just saying "all right." All right, cool, sounds good. All right, then we're going to go on to Victor's next presentation, and let's try to keep the conversation, or the comments, a little bit shorter.
F: Okay, apparently I didn't configure the camera in advance, so I'm sorry you can't see me, but I want to give an update on the one draft we've actually adopted: the WebTransport overview draft. The goal is, roughly, to sum up what the requirements on the WebTransport API are, and what are the common properties and semantics we expect from WebTransport protocols.

F: I don't believe we have an automated email to summarize all of the issues, but we should. There is a GitHub repository for the draft, and it has some issues, which I discussed at the previous meeting.
F: So, QuicTransport at first didn't have any headers, and we had some clever tricks to get away with not sending them; that did not work. Then we added a very bespoke unidirectional header format called the client indication, and that is what's currently in the Chrome implementation in the origin trial. In the latest revision of the draft, which is still yet to be uploaded, because I also need to solve some issues with error handling,
F
There
are
actual
fully
featured
headers,
which
I
tried
to
basically
replicate
the
http
fields
where
there
was
an
equivalent
http
field,
and
there
is,
but
that's
the
fields
defined.
Our
origins
here,
authority
path
status
and
the
proposal
is
that,
since
all
of
the
proposed
transports
have
those
headers
we
should
expose,
we
should
define
those
headers
as
a
shared
property
of
any
transport.
F
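[Editor's note: to make the header discussion concrete, here is a sketch of a simple key-value wire encoding in the spirit of a client indication: each field as a 16-bit key, a 16-bit length, and the value bytes. The key numbers and field choices are illustrative assumptions, not the registered values from the draft.]

```python
import struct

def encode_headers(fields):
    """Encode (key, value) pairs as: 16-bit key, 16-bit length, value bytes."""
    out = bytearray()
    for key, value in fields:
        out += struct.pack("!HH", key, len(value)) + value
    return bytes(out)

def decode_headers(blob):
    """Invert encode_headers, returning the (key, value) pairs in order."""
    fields, i = [], 0
    while i < len(blob):
        key, length = struct.unpack_from("!HH", blob, i)
        i += 4
        fields.append((key, blob[i:i + length]))
        i += length
    return fields
```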
C: Well, thank you, Victor; that was incredibly fast, and it really brought us back to the agenda time. Yeah, I'm getting reports from a bunch of people that Meetecho is going through some serious technical issues. If you're trying to speak or join the queue and it's not working, please, like, say... oh, I was going to say, say so in the Jabber, but apparently the Meetecho chat is busted as well. Go ahead.
J: Yeah, so my hat's available for free on the Snapchat filter thing; I'll send links around later. But the serious point, Victor, is just a question about this change. I just wonder, depending on the direction of, like, whether people want QuicTransport and not the rest of them, whether this proposal is still the thing you think is the best option. So it's going to be HTTP-style even if we don't use the HTTP-style transports? I wondered if you could clarify.
C: I'm sorry, I kind of misheard the end of that question; was that aimed at me or at someone else?
C: Victor has dropped off the participant list. Oh yeah, and he's sending me messages on chat saying that he can't get into Meetecho anymore. It's okay!
C
Yes, thanks, Lucas. Please bring it to the list or, ideally, even an issue, and we'll get that sorted. Yeah, that's not great! All right, I think we'll move on to Eric's presentation.
C
So what I'm getting from folks is: apparently, if you're in Meetecho now and you can hear us, don't touch anything — it'll keep working — but if you refresh or anything, you won't be able to get back in. So, all right, Eric, if you wanna join audio and video.
C
Oh, I don't see Eric in the participant list anymore. Okay, well, this is a mess.
C
The h2 slides — so that's an option, yeah, because I was gonna say we could also do the next presentation and then come back, but if Victor's gone we can't do that. So, yeah, let's do that, and if you can —
L
Roll to the next slide, yeah. Next one.
L
Okay, so we'll just cover what's been going on since the last IETF. There's some GitHub issues that have been filed and a little bit of discussion, there's some outstanding PRs, and basically all of h2 transport is kind of in a holding pattern, waiting for the working group to move forward on what we are doing with HTTP in general — because h2 is definitely, you know, sort of the last in the pecking order in terms of transports. So, are we not going to do HTTP at all? Are we not going to do HTTP/2?
L
Okay, so one of the issues in the draft was that the original draft didn't mention unidirectional stream support, and a question about whether we want to sort of have streams that are half closed by specification, or just sort of have, you know, bidirectional streams where, you know, it's —
L
— the one side just doesn't send. And I think this next slide just talks about, like, the proposed PR, which is basically adding a flag, kind of like SPDY had, which just starts one half in the half-closed state, and this kind of maps to the way QUIC streams work more naturally. Next slide.
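The half-closed flag described above can be sketched roughly as follows; `StreamOpen` and its field names are hypothetical illustrations, not anything defined in the draft.

```python
from dataclasses import dataclass

@dataclass
class StreamOpen:
    stream_id: int
    fin: bool  # True: the opener's sending half starts closed

# A "unidirectional" stream is then just a bidirectional open whose
# sending half is half-closed from the start, much like SPDY's FLAG_FIN.
def open_unidirectional(stream_id: int) -> StreamOpen:
    return StreamOpen(stream_id=stream_id, fin=True)
```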
L
There's an issue open about being able to open streams without additional round trips. For people who might need a refresher: the h2 and h3 drafts, because they can multiplex multiple WebTransport sessions on the same connection, have sort of one step where you're opening a stream that's defining your session, and then you're opening additional streams that are kind of hanging off that session. And so there's just an open issue to discuss: is there a way to do that in one round trip?
L
Can you send the new WebTransport streams referencing the CONNECT stream before you've received acknowledgement from the other side that the handshake is going to succeed? And there's sort of an interesting interaction with h3 here, because in h3 the streams could also arrive out of order; in h2 they can't. So the receiver can end up with an h3 stream —
L
— if we allowed this in the h3 draft, you could end up with a WebTransport stream that has no session established for it yet, if you were trying to do it in a single round trip. So I think this issue is still open; we don't have any proposed resolution for it. Next slide: datagrams. There's still not a definition of what exactly the datagram support looks like in h2 transport. Again, we're not working on it hard because we're waiting for a nod from the working group. Here, I think, on the issue —
L
I proposed a strawman h2 frame that we could use to transmit datagrams; that would be perfectly serviceable.
L
Next slide. Okay, yeah, I've covered all that. Next.
L
Okay, I haven't read the slide yet, so give me a second. I mean, I think — and this will just maybe feed into the broader discussion about HTTP versus QuicTransport — so, you know, the HTTP transports have this feature where you can multiplex the sessions, right, whereas QuicTransport doesn't have that feature.
L
So if you want to have, you know, multiple concurrent WebTransport sessions with QuicTransport, you have to open multiple connections, each one sort of independent. And, you know, I hear people saying "complexity" around the h3 draft, and I'm trying to put my finger on what the complexity is there, and one aspect of it is this sort of being able to pool or multiplex multiple sessions together. Obviously, h3 has just, you know, some additional complexity.
L
Some of it, I think, is trivial, and some of it — like, for example, QPACK for headers — is non-trivial. So anyway, you know, the ability to traverse intermediaries —
L
— using these session streams, multiplexed together, I think is one of the sort of key differentiators between the HTTP world and the non-HTTP world. Next slide. I think that's it.
C
Point of order: as a chair, I have been informed by the IETF chair that the datatracker authentication VM completely fell over and they're gonna reboot it, but it means that at some point it's gonna kick everyone out of the meeting. It's gonna take two minutes to reboot and then you're all gonna be able to join again. So I'm gonna say: let's keep going, let's keep talking, but if you're kicked out, go make yourself a beverage and come back in a few minutes and we'll resume.
C
Go ahead, Daniel — you enable your microphone by clicking the unmute audio button.
L
So QuicTransport allows you to have multiple streams on one connection, but it only allows you to specify a single WebTransport URI, whereas the h2 and h3 transports let you open multiple sessions within the same connection, using different WebTransport URIs and directing subgroups of streams towards these different virtual sessions —
L
— within the connection. Like I said, I think that's a key difference between the sort of two transport worlds, and I think it's also where some of the complexity that people point to in h3 or h2 and say "oh, it's complex" — I think that's where some of it's coming from.
C
Great. Right, Eric Kinnear — since you were gonna give this presentation and Alan ended up doing it, is there anything you want to add, by any chance?
K
Thank you, Alan. I don't think there's anything else major to add; I think Alan did a really nice job of covering a lot of this.
C
Thanks, Eric — and seriously, thanks, Alan, for stepping in; Eric can probably blame Meetecho. Both — oh, Alan and Bernard. Daniel, the question?
C
Oh, no worries. It was at a previous WebTransport session — we have all these transports and we're trying to figure out which ones we should build, and I think Victor called it "the big transport zoo" the first time, and so now the running joke is: let's get out —
M
— of the zoo. Okay, so when you figure out whether it's going to be h3 transport, h2 transport, TCP transport, UDP transport, or whichever set of those it's going to be, then we will be out of the transport zoo.
C
Exactly. So the main question is: which of these do we need, which of these should we build, and in what order? And we're hoping we have some time at the end of the session tonight where we're hoping to really answer these questions.
C
All right — of course, yeah, and I should have clarified that, my apologies. Any further questions on h2, or should we move on to h3? All right, let's move on. Is Victor back? Yes, I see Victor in the participant list. Victor, you want to re-enable audio? There you go.
F
Sorry, I had another layer of mute. Okay, so I'm going to give an update on h3 and QUIC, and then I'll give an update on the transport zoo. So, next slide.
F
So, just as a reminder, h3 transport is h2 transport, but over h3. It has datagram support through QUIC DATAGRAM, and the specific mechanism for embedding datagrams in h3 is described in draft-schinazi-quic-h3-datagram. We are currently converging it with the design decisions in h2 transport. There is a pull request I wrote; everyone is encouraged to take a look at it.
F
There is a redesign to use stream IDs as the identifier for WebTransport sessions. That is also a design decision that was in h2, and that made more sense, and that allowed me to delete two paragraphs of text from the specs, so that's nice. And that's basically it — I did some other minor adjustments, but everyone is welcome to take a look at that pull request for all of the details.
F
Next slide: QuicTransport. QuicTransport actually got more updates, but I did not publish a draft because I still need to fix the story with error handling and error codes. The basic idea of QuicTransport, if you're not familiar: it's QUIC, but it has a handshake in front of it. After you're done with the handshake, you can treat your WebTransport connection as if it were just a regular QUIC socket.
F
So it has an ALPN value that's dedicated for it and that's not HTTP, and that allows us to avoid cross-protocol attacks. It has its own dedicated URI scheme, quic-transport://, which has the same syntax as HTTP URLs. So, as I mentioned before, we used to have a very bespoke header format. We have a new format that's still bespoke, but it used to have numbers as keys —
F
— now it has strings as keys, and those follow roughly the same semantics as HTTP. And the reason I want this is: having header names as strings instead of numbers is great for extensibility, because this lets people who are rolling their own things on top of this add their own headers without having to fear any collision — because otherwise they'd have to work with numbers. That's the main reason.
F
It's still easy to parse; it's basically 16-bit length-prefixed strings everywhere. And one of the main conceits of QuicTransport is that you get exactly one QUIC connection per instance of your transport. That is suboptimal if you open a lot of WebTransport connections, because you do not get any pooling; but, on the other hand, this allows you to do a lot of things like swapping in your custom congestion controller on the server side, etc.
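A "16-bit length-prefixed strings everywhere" format like the one described can be sketched as below. This is only an illustration of the parsing idea — big-endian lengths and UTF-8 are assumptions here, and the actual wire layout is whatever the draft specifies.

```python
import struct

def encode_headers(headers: dict) -> bytes:
    # Each key and value is a 16-bit big-endian length followed by the bytes.
    out = bytearray()
    for name, value in headers.items():
        for s in (name, value):
            data = s.encode("utf-8")
            out += struct.pack("!H", len(data)) + data
    return bytes(out)

def decode_headers(buf: bytes) -> dict:
    headers = {}
    i = 0
    while i < len(buf):
        fields = []
        for _ in range(2):  # read one key, then one value
            (length,) = struct.unpack_from("!H", buf, i)
            i += 2
            fields.append(buf[i:i + length].decode("utf-8"))
            i += length
        headers[fields[0]] = fields[1]
    return headers
```

String keys mean an application can add, say, an `x-custom` header without coordinating a number registry, which is the extensibility point made above.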
F
So this is an example of the QuicTransport URL scheme, if you do not remember it. That said, I've not updated this slide, but all of the highlighted values are now sent completely in the handshake, so that's another update slash improvement. Next slide: as a reminder, this draft is actually implemented in Chrome. There are instructions on how to do things with it, and I think Bernard covered this earlier today, so I'm not going to go back into it. Now, let's get to the great transport zoo.
F
The great transport zoo is where we look at all of our transports and decide which ones we want and which ones we don't. There are four options, which means that there are two-to-the-power-four variants, and we're largely arguing over three or four of them, I think, at this point. But there are three or four dimensions. There is a hypothetical fallback transport which, I think, at this point —
F
— no one seriously considers, because the idea was that you could roll a transport on top of WebSocket, but you can do that completely yourself, so we don't need an IETF working group for it. Next slide.
F
So, as an overview, there are very easy axes that you can split on: dedicated versus pooled, QUIC versus TCP, and those are the kind of defining characteristics of those. They used to be less defining; with the last updates to QuicTransport, h3 transport, etc., I feel like they're the most important characteristics of all of those. Next.
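The two axes just mentioned span the four transports under discussion; a trivial enumeration shows the shape of the zoo (the labels are informal, not names from any draft):

```python
from itertools import product

# The two axes: dedicated vs pooled connections, QUIC vs TCP underneath.
AXES = (("dedicated", "pooled"), ("quic", "tcp"))

def transport_zoo():
    # Every combination of axis values names one candidate transport.
    return [f"{pooling}/{proto}" for pooling, proto in product(*AXES)]
```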
F
However, there have been some interesting developments at the API layer. One thing we did is: we updated the API somewhere in September — and that's in the very latest versions of Chrome — where we noticed that there is a very visible redundancy. If you look at the old approach, you will notice that the choice of transport is indicated twice: once in the constructor name, once in the URL scheme. And, of course, that's redundant.
F
So we renamed the API entry point to WebTransport, and that had a very nice side effect. Before that, in order to implement all of the transports, we would have to write code for all of those JavaScript classes, and that added a lot of cost. But now that this is entirely dispatched by URL, this moves the problem of actually picking the transport down to the network layer, which removes overhead from shipping multiple transports — on a technical level, but also, I feel, on a conceptual level. Martin, you —
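The single-entry-point idea described above — one constructor, with the transport chosen from the URL scheme — can be sketched like this. The handler strings and the `pick_transport` function are placeholders; in a browser the dispatch would happen inside the network layer, not in script.

```python
from urllib.parse import urlparse

# Placeholder outcomes for each scheme; purely illustrative.
HANDLERS = {
    "quic-transport": "dedicated QUIC connection",
    "https": "pooled HTTP/2 or HTTP/3 connection",
}

def pick_transport(url: str) -> str:
    # One API entry point: the scheme alone selects the transport.
    scheme = urlparse(url).scheme
    if scheme not in HANDLERS:
        raise ValueError(f"unsupported WebTransport scheme: {scheme}")
    return HANDLERS[scheme]
```

With this shape, adding or removing a transport changes only the network-layer table, not the API surface — which is the cost reduction being claimed.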
H
Thanks, Victor, for noticing so quickly; I'm sorry about being so slow to jump in. So, a point of clarification there: when you have these two — say, quic-transport or https — the fallback methods aren't necessarily done by the browser, are they, in the quic-transport version? So they might —
F
In the new QuicTransport there is no automatic fallback, because QuicTransport only exists over QUIC, and you basically get to do this manually. And in the HTTP version, you have to delegate this to the browser, because the browser is the only entity that knows the state of your socket pools. So if it knows whether you have h2 or h3, that means it will find the appropriate socket and open the session on that, yeah.
H
Yeah, I guess my point was that if you pick the new approach, number one, there's potentially a fallback involved that requires new API surface of some shape.
F
There is potential for new API to add more control over what exactly happens, but the basic point I'm trying to make is that it's still much easier conceptually, because this is now just a knob you tweak, instead of completely different things that require completely different code paths.
F
Thank you. Next slide.
F
So, just to clarify the difference conceptually: this is how it looked before, and on the next slide you can see that, thanks to this, there is one less box — and actually one less arrow, which is what's more important, because a lot of the implementation expense here is crossing the sandbox boundary. Next slide.
F
So I've made the observation repeatedly that, as we've refactored both QuicTransport and h3 transport more and more, they become more and more similar semantically. And at this point — there is some debate about this — but at this point I am basically convinced that HTTP/3 transport exists primarily as QuicTransport with connection pooling. And that kind of makes sense, because if you think about it, there is no really substantial difference for them to be different entities.
F
So one of the ideas I had is: we have QuicTransport and HTTP/3 transport, and currently we disambiguate them by scheme — but that's actually a choice made by the client, and that makes them semantically different. We could also delegate that decision to the server, by offering both h3 and webtransport as ALPNs.
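Delegating the choice to the server via ALPN would follow the usual ALPN selection rule: the server picks, by its own preference order, from the tokens the client offered. A minimal sketch — the token strings and preference order here are assumptions for illustration, not values from any draft:

```python
# Server-side sketch: the server's preference order decides whether a
# client offering both tokens ends up on HTTP/3 or on a dedicated
# WebTransport connection.
SERVER_PREFERENCE = ["h3", "webtransport"]

def select_alpn(client_offer):
    # First server-preferred protocol also offered by the client wins.
    for proto in SERVER_PREFERENCE:
        if proto in client_offer:
            return proto
    return None  # no overlap: handshake would fail
```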
F
Alternatively, it will establish that connection anew to that server. The reason I like this is that it allows us to make progress without making any assumption about which transports we're actually using. But I do believe we still should make progress on that — this is logical, but we still should actually decide which transports we're shipping. Next slide.
F
And let me elaborate on that. Once you implement the top layer, which is the Web API, and the middle layer, which is the thing that crosses your sandbox boundary and makes sure that flow control etc. is preserved —
F
— now there's only the bottom layer, which is the protocol, which is the only part that's different between the two. But if you're implementing HTTP/3 transport, the marginal cost of implementing QuicTransport — for both the web browser and even the server, though the web browser is the only one which really has to ship both — is fairly minimal. The Google implementation is 2,000 lines of C++, and that's not that much; that's at least 10 times less than our HTTP stack.
F
That's approximately five times less than just the file that decodes all of the QUIC frames. It is very little code, comparatively — and those 2,000 lines include the integration tests and the unit tests, obviously. And the reason I believe that QuicTransport is particularly appealing is that, if you think of the other elements in the ecosystem as HTTP/2 and HTTP/3, QuicTransport is kind of the HTTP/1 of WebTransport, because it is the simplest thing that can work, and it is sufficient and optimal for a lot of cases.
F
For instance, if you're doing video streaming and you have a dedicated server you connect to, and you do not have any form of pooling that you care about, you will probably want to have QuicTransport — that is what Luke mentioned. And this is also true, for instance, for YouTube: YouTube does not use HTTP/2 for video serving; it always uses HTTP/1, because HTTP/2 is just unnecessary overhead that it doesn't benefit from.
F
So that's one example. And so the two main questions are: do we want pooling, and do we want to provide TCP fallback? Now, before we go in depth into the zoo and into the discussion, I kind of want to give my perspective, my personal feeling: about question one, it's yes. I will note that this is hard, and the reason is that, as we've observed, every sufficiently advanced protocol eventually evolves into supporting pooling — HTTP —
F
Who would immediately adopt WebTransport right now? Because if you're adopting WebTransport right now, you're probably someone who has a lot of resources to experiment, and you probably have an existing solution which works over TCP, so you would not want to replace that. Now, if you're building a solution anew, I believe that WebTransport will be widespread much earlier than we will get rid of all UDP blocking — which, I believe, we will probably never get rid of, because I do not see firewalls and enterprise policies ever going away.
F
So I believe there will be demand for solutions that are architected completely anew to support automatic fallback, and —
C
All right, thanks, Victor. Actually, maybe go back one slide, Bernard; that way we can still have the questions on the screen. So, yes, folks, please join the queue now. And, Alan, you're up first.
L
Okay, so the question I want to highlight is that pooling can mean two different things. One of them is: can I pool multiple WebTransport URIs together in the same connection, as I was talking about. The other one is: can I pool WebTransport on the same connection as my HTTP traffic? And they're kind of two separate questions, right? QuicTransport, as it's currently defined, will never let you do the one where you can stick it on with your HTTP connection.
L
You'll always need a second connection. And personally, I want to speak to pooling WebTransport with HTTP together, because it has some distinct advantages: all of that traffic will end up sharing the same congestion controller, the streams on that connection can be prioritized with respect to each other, and there is some non-zero amount of server-side cost for having connections.
L
So if I'm doing something as a client and I have an HTTP connection, and then all of a sudden I want a WebTransport connection, I don't have to have two connections; I can just keep using the one that I already have. That's —
K
I like the framing that you've got here for how we can kind of unify this down into an API that is just "I'm doing WebTransport", where I'm not trying to pick up front. I think in a lot of other areas we talk a lot about expressing the properties that we need from the transport and having it do the right thing in terms of providing you those properties. We saw, I think, Bernard's table of the different use cases; some of them said "I need unidirectional" —
K
— some of them said "I need bidirectional", some said "I need ordering", some said "I need unordered", and then there's a whole set of things that are perfectly fine —
K
— if you don't have unordered or unreliable data, those being distinct items. Some of them are "I am not willing to even consider having this conversation with whoever's on the other end if I can't have X thing that I believe is critical". And so I think having an API that we can use to express that spectrum of need — from "I'd really like this" to "I'm not even willing to show up to the table —
K
— if I don't have it" — is potentially of value. I'm not totally sure that we can distill all of that down into just "do we want pooling" and "is TCP fallback necessary", but I think if we can focus in on how someone as a consumer of this API interacts with what they need, and how they then use what they get back from it —
K
— we're probably going to be in good shape. Because some of the concerns around "is somebody going to implement X versus Y" seem a little bit like "they have to do some amount of work", and so it's not super clear to me that trying to guess at what people are willing to support is as useful a benchmark as: what properties do we get from these transports, which ones do we need, and which ones can we do without?
G
Hello again. So, in regards to pooling connections, I do want to at least talk about what situations it would be useful in — where it would actually improve the experience. For connection pooling, in my mind, you share the congestion-control calculations, which is great, but you kind of need to have two connections to the same host where both of them need to transfer a lot of data for that to even be a benefit.
G
So you kind of have this case where you have two WebTransport connections to the same host and they're both using a lot of data, or you have an HTTP connection and a WebTransport connection, again both using a lot of data.
G
Otherwise, QUIC does a pretty good job of connection pooling: you can use the same socket, you know, 0-RTT if you wanted to make a new connection. I'm just failing to see the use case in which pooling is actually an improvement over dialing a new connection.
G
So that's at least my feedback there. And for number two, TCP fallback: I think the problem is that the WebTransport API is the same for all the transports, but the functionality is not the same. As in — you might get that nice, like, pretending we're using datagrams, or pretending you're using multiplexed streams, but if, actually, under the hood you have head-of-line blocking because you're using TCP —
G
— there's no point. I mean, it's nice — it encourages adoption if you can have the same API — but it would be something that, for example, I would explicitly try to disable. I would not want TCP, because that would just ruin my application; I would need to disable that fallback functionality.
C
Okay, by the way — yeah, can someone hear me? "We can hear you." Okay — because I've been doing queue management for a while and I realized that no one was hearing me, so, grumpf, okay, I had to reconnect audio and we're back. All right, thanks a lot, Luke. Victor, you're in the queue, apparently.
F
Yeah, I wanted to say something. First, regarding Luke's remark: it is in fact the case that we do need to add a knob to the API to express something like "I only want QUIC", or "I don't care if I get QUIC or TCP", or "I want QUIC or TCP, but I want to know which one I'm using" — and there I do agree with that. Now, my remark is about another aspect of HTTP.
F
HTTP provides you with a way to communicate priorities of your individual streams, but that's not something you can do on raw QUIC, because raw QUIC has no way to communicate that kind of metadata — all of that metadata is just an API call. That's an example of another consideration; I think it was mentioned sometime long ago, but I forgot. Anyway, that's my remark.
H
Martin — it takes a little while to come through. So, Victor's point there about priority is an interesting one, but I don't think it's particularly relevant here, because you're not going to be able to use the HTTP-level signaling to get priority signals back and forth, and QUIC has priority — or any QUIC implementation should have priority — anyway; it's just a question of how you're signaling it.
H
A congestion controller being aware of the current path state — but I can't see very many ways in which the co-hosting of the same two things can be of benefit. And, yeah, the other thing was: Victor pointed out the intermediation point, and I think it's a really important one. We saw with WebSocket that not a lot of intermediation goes on, because there's no application-level semantics being expressed in these protocols; there's no generic functionality that an intermediary can hook into to provide value.
H
So they tend to be just dumb connections back to back-end servers, with no value added by an intermediary, no extra load-balancing capabilities, what have you. So I am sort of strongly now leaning toward the non-pooled case, in fact — but I'm very much still of the view that only one of the QUIC options is necessary.
M
Muted — I'll be brief, because this has been touched on twice already in the queue. I was just going to say that, as far as the API goes, you absolutely have to be able to inform the caller of what the heck we did. Like, if they ask for QUIC, I can't just switch to TCP; they have to know.
M
Oh, okay, yeah, that's all I wanted to say. So I wanted to not only second the idea of informing the user through the API, but also say that I'm very emphatically in support of it. Thank you.
N
Ekr. So, with regard to both these points: I think I agree with Martin that there should be one QUIC transport — I've gone back and forth on this a fair bit — whether that should be h3 or QuicTransport.
N
I sort of read Victor's note on the mailing list about how QuicTransport was growing more like h3, which kind of confused me, frankly, because it seems like maybe they're the same thing. So, I guess I am also trying to figure out what pooling means, and I'm not seeing a lot of benefit there — but I'm not sure that actually dictates the question of which QUIC-based transport we should have.
N
You know, I think it's really quite disappointing that we're apparently going to end up with MASQUE being based on HTTP plus datagrams and WebTransport based on some other thing. It seems really ideal to figure out, if you want to push raw datagrams over QUIC to HTTP servers, whether you should do that with these —
N
— and what kind of transport that should look like. It seems like really something I want to be able to do, and, as I recall, at the time when MASQUE was first proposed, there was a bunch of discussion about how it might, should, ought to be, in principle, possible to implement them entirely in JavaScript, using facilities offered by the browser.
N
That obviously will not be the case if MASQUE is specified on HTTP and QUIC and WebTransport specifies QuicTransport. So I do think there should be only one — but the scope of "there should be only one" is probably somewhat wider than this working group. On the topic of TCP fallback: I certainly agree with what I've heard a number of times, which is that we need to notify people —
N
— we need some knob of some kind to let people control slash notify whether they have TCP or not. I do think sometimes people think it's not necessary —
N
— but all the data shows that you don't reliably get UDP transport through; there's a large number of connections, and many of these connections, although they have inferior performance if they have TCP, still can do acceptable performance of various things over TCP. And so, if you think part of the problem — as I think people do with WebSockets — is that the API stank, and you'd like people to have one API, then having forced them to switch between WebSockets and WebTransport, when they really wanted a datagram abstraction —
N
— that kind of works, seems like really kind of a bad choice. So I do think we need TCP, and I think maybe you can turn it off — or be sad if you got it.
C
Thanks, Ekr. Eric Kinnear, I saw that you were at the top of the queue and now you're at the bottom — was that a mistake, or did you intentionally go down?
C
"I removed one comment and had a different one." So let's keep going and we'll come back. Cool, sounds good. Alan.
L
So, to address Martin's question: I think one of the primary drivers is scale. So, you know, I work at Facebook, and basically every phone in the world has a connection to Facebook — that's probably an exaggeration, but there's lots of them — and we do HTTP stuff, right? There's lots of reasons why there should be an HTTP connection from every app.
L
But then there are also non-HTTP use cases that are going to go to the same place, particularly sort of more publish/subscribe-based models or notification, real-time things. There's obviously ways to kind of kludge that with HTTP, but we get a lot of asks from people who are trying to develop these abstractions, and so now we're faced with: okay —
L
— well, we're gonna have one HTTP connection from every phone, and now we're also gonna have a WebTransport connection — or, if we don't have any ability to pool multiple WebTransport URIs into the same connection, we're gonna have one WebTransport connection for potentially every use of WebTransport, which is just not really gonna work. You know, these things all end up getting terminated at one place, and in terms of what intermediation or a proxy can do there —
L
— it's not doing very much, but it is routing, right? So you might have some uses on the phone which are being terminated in an edge PoP, and those WebTransport use cases are being routed to a data center where that service is running, but other WebTransport URIs are being routed to other data centers off the same connection to the PoP. And in terms of "okay, well, you guys have your own app —
L
— why don't you just write your own extension, and why are you bothering the IETF with this use case, which maybe is only specific to you?" — you know, there's more and more push to use native stacks, and so — we're a little bit out of the web context of WebTransport here —
L
— but since we're talking about a protocol: there are, for example, lightweight apps that would like to not use our HTTP stack but want to use the native stack, or they want to use a thinner stack than the one that we've developed, and it's very challenging, if there's not a standard way to do this kind of thing over an HTTP connection, to get that implementation happening in other contexts. And there's also — then we have the same thing where all those products that run in apps are also going to run on the web, and the web developers — or the people who are developing that code — don't want to know, like, "oh, I have to write it one way in the app, but then I have a totally different protocol, or way I handle things, when I'm running in a browser context."
O
Thank you. Can you hear and see me? So, Alan's making me think about this, but I came up here, so I'll say what I can represent, and also see what Alan's making me think about.
O
Some others have also noted this in the Jabber chat, but I want to gently draw attention to the fact that I fear this might be trying to do too many things at the same time. In doing that, it does seem a little bit like TAPS, and that scares me, partly because when something's not driven very strongly by one application, it tends to have a flavor of trying to satisfy a lot of requirements.
O
So with that I want to ask the question of how many of these are actually use cases that are currently in play: how many people want to use which particular use cases? I came up to say this sounds like, if we wanted an evolution of WebSockets, then QuicTransport seems like a good step forward, with a fallback to something like WebSockets. It would have been easier to see this as an evolution in that sense, and therefore I would answer the questions you have there as well: maybe pooling connections is not worth that much, and on TCP fallback, I would say in general you definitely want TCP fallback if you want whatever API you are using to basically work all the time, because we can't assume that QUIC will work all the time. But Alan has made a good point that there are cases, and in fact he's strongly suggesting that it might be necessary to pool connections, in which case I would ask: is QuicTransport necessary? Should we just go to Http3Transport?
F
Jana asked some question, and I'm trying to remember what the very first question was. Oh, the question was: are there use cases for each of those? My answer is that I believe that, for each of the proposed transports, there is at least one person who is actively interested in it existing, which is why we defined all of those WebTransports in the first place. But that's my answer anyway. Bernard?
A
Okay, yeah. I wanted to follow up on what Alan said, because I'm hearing some of the same things: that there is a desire to pool HTTP with WebTransport, and not just WebTransports with each other, but to pool them together.

Some of the scenarios I've heard, if you think about it, are both for the scalability which Alan mentioned and for the traversal issues, just to have everything go over the same connection. But also because I think there's a desire to significantly extend the web, so that you have not just request/responses that are reliable, but can also get datagrams coming back.

So in some ways what they're looking for is an entirely new web ecosystem that supports datagrams natively as part of the web. It is a big ask, because essentially you want all of this: you want WebTransport to work with HTTP/3, to work with MASQUE. So it's kind of like an entirely new ecosystem, and it's a very big bet.

There are many, many ways in which that could fail if pieces of it don't work together. But that's what I hear: they would see that if you could make everything work and get the pooling of both, that would be the biggest win.

It's also probably the hardest thing to actually make work, which is the tricky part, because when I've gotten into trying to figure out what would be needed to do it, it does seem like there are a lot of moving parts. But it is a big win if you can make it all work.
K
Yeah, I would second a lot of what I think EKR was saying, which is that it is very important that we end up with just one thing. But I think if we want to be able to deploy this stuff everywhere, we're going to need something for the cases where QUIC isn't getting through, and we know we have that for web pages that are using HTTP, because we can fall back to H1 and H2.

Either we need to hop on the HTTP bandwagon and use that to go across everything, and I say this not particularly caring what we end up with as long as we meet those needs, or we go with some kind of fallback-transport-style thing, which I think Victor had.

It seems like either we end up pushing everything that needs that fallback up to H3, and then going sideways to stage two, or we need to define something that allows us to do that kind of uni- and bi-directional stream multiplexing over something that does traverse the corners of the internet where QUIC doesn't. If we can do that without HTTP, that would be great, but it does seem as though some sort of fallback is necessary.
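The fallback requirement described here, attempt the QUIC-based transport first and switch to a TCP-based one when QUIC is blocked, can be sketched roughly as below. The dialer functions are stand-ins invented for the sketch, not real implementations of either transport.

```python
# Sketch of the fallback logic under discussion: try QUIC first, and
# fall back to a TCP-based transport (e.g. a WebSocket-style tunnel)
# when QUIC does not get through. The "dialers" are stand-ins.
class QuicBlocked(Exception):
    """Raised when the QUIC handshake cannot complete (e.g. UDP blocked)."""


def dial_quic(host, *, udp_blocked):
    if udp_blocked:
        raise QuicBlocked(host)
    return ("quic", host)


def dial_tcp_fallback(host):
    return ("tcp-fallback", host)


def connect(host, *, udp_blocked=False):
    try:
        return dial_quic(host, udp_blocked=udp_blocked)
    except QuicBlocked:
        return dial_tcp_fallback(host)


assert connect("example.org") == ("quic", "example.org")
assert connect("example.org", udp_blocked=True) == ("tcp-fallback", "example.org")
```

A real client would likely race the two attempts (happy-eyeballs style) with a timeout rather than waiting for an explicit failure, since blocked UDP often manifests as silence, not an error.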
C
Thanks, Eric. So, to try to get us to converge a little bit here, and I guess speaking as chair, trying to formulate the consensus that's emerging, but please feel free to speak up if you think I'm getting it wrong: I'm getting a sense from the room that most of the folks who have spoken up agree that, at least if we look at the QUIC-based transports, so QuicTransport and Http3Transport, we only need one of the two. That's the main sense I'm getting from most people in the room. Then, if that's the case and we need to pick one, let's tease apart the pros and cons of each, and please jump into the queue to add more. The main one I'm seeing is that Http3Transport allows pooling.
C
Actually, that's a very good idea, thanks EKR. Yeah, Bernard, does that sound good: to try to do a hum about specifically QuicTransport and Http3Transport, on whether people agree there should only be one?

Bernard, I'm going to assume yes. So we tested out this virtual hum tool at the beginning of the session and it seemed to work, so let me type in the question.
D
Victor says he has a question before the hum. Go right ahead, speak up, Victor.
H
I would have thought the burden of proof is on the people asking for two. I want fewer things, and I don't necessarily see that we need the complexity unless and until the complexity is justified. I haven't seen any real justification for the complexity.
C
Thanks, yeah, that makes sense. At the end of the day, our job in writing IETF protocols is generally not to make the biggest buffet available; it's to try to find the one thing that works. I think, on a personal level, not as chair, I'm really in agreement with Martin on that one. David?
O
If you don't mind, there's been a parallel conversation going on on the chat channel, and you might want to hear out what Philip has to offer; he's been in the queue. It might help people make up their minds. The idea here is to offer an alternative way of doing pooling without having to do HTTP/3: this would be MASQUE and QuicTransport.
C
Okay, Philip, if you want to jump in and make your case, go ahead.
P
Hi. When we had the discussion about whether we want pooling or not, I came up with the idea that we can implement part of the pooling just by adding MASQUE beneath WebTransport.

With this, you don't have to care about another layer of hole punching or session setup, because you can just reuse the HTTP/3 session you already have, and then piggyback, via MASQUE, all the WebTransport streams you need within a pool onto this session. You could also try to add some kind of prioritization or fate sharing within this. So you could get pooling at this layer and still have the simple QuicTransport semantics beneath.

Also, if you have a multiplexed use case with a load balancer, only the load balancer needs to talk HTTP/3, and you can get the much simpler QuicTransport at the back-end systems. So this would be a compromise between both designs, without having to define those protocols at the WebTransport level.
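Philip's layering idea can be modeled as an outer HTTP/3 session that carries several tunnelled QuicTransport sessions, so the load balancer only speaks HTTP/3 while each back end sees a plain QuicTransport. Everything below is a toy model of that framing, with invented names and encryption reduced to tagging; it also makes visible the double-encryption cost the chair notes next.

```python
# Toy model of MASQUE-style layering: one outer HTTP/3 session tunnels
# several inner QuicTransport sessions, each identified by a tunnel id.
# Names are invented for the sketch; "encryption" is modelled as tagging.
class OuterH3Session:
    def __init__(self):
        self.tunnels = {}     # tunnel id -> back-end the tunnel targets
        self._next_id = 0

    def open_tunnel(self, backend):
        # One tunnel per pooled QuicTransport session; the front end
        # routes on the tunnel id, not on the inner traffic.
        tunnel_id = self._next_id
        self._next_id += 1
        self.tunnels[tunnel_id] = backend
        return tunnel_id

    def send(self, tunnel_id, inner_payload):
        # The inner QUIC packet is already encrypted; the outer session
        # encrypts it again. This is the "double encryption" overhead.
        inner = ("inner-crypto", inner_payload)
        return ("outer-crypto", tunnel_id, inner)


session = OuterH3Session()
t0 = session.open_tunnel("dc-east")
t1 = session.open_tunnel("dc-west")
frame = session.send(t0, b"hello")
assert session.tunnels == {0: "dc-east", 1: "dc-west"}
assert frame == ("outer-crypto", 0, ("inner-crypto", b"hello"))
```

The design tradeoff the sketch exposes: pooling and routing live entirely in the outer layer, so the inner transport stays simple, at the price of two layers of crypto on every packet.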
C
Thanks, Philip. To add to the context, as someone who is enthusiastic about MASQUE: that would definitely work. It would involve double encryption for QUIC, and on top of that it would also require browsers to implement MASQUE, which I'm okay with, but I'm not sure if everyone is. But that's definitely an alternate view as well. Thanks. We're ten minutes to time, so I'm going to switch now to the hum.

And of course, because I switched away, I forgot what I typed. Thank you. So here's the question. The tool is going to ask: do you want to raise your hand or not? If you raise your hand, it means there should only be one. If you do not, it means you think we should do both.
C
All right, I'm going to end it in the next 10 seconds. If you haven't voted, go right ahead.
C
All
right
so
then
I
I
guess
in
the
I'm,
not
so
I
I
I
see
the
numbers,
I
don't
know.
If
everyone
else
does,
I
don't
know
if
I'm
supposed
to
share.
C
The
hums,
you
can
see
them
all
right.
Well
then,
so
it
was
19.
C
F
Yeah, I actually have a proposal which I believe, or at least want to believe, would allow us to move ahead, and I first want to explain my reasons for why I did not raise my hand, if you don't mind.
C
Sure, but make it quick, because we have five minutes left.
F
So my basic logic is this. The argument for is that we have web developers who have different opinions on the pooling-versus-complexity tradeoff, and the argument against is that we should not define too many things, and that it's too many things to implement for a browser. The reason I'm leaning towards the argument for is that I personally, and I'm not sure if there are IETF documents on this, try to favor the problems of web developers over the concerns of browser implementers or theoretical concerns.

Now, my specific proposal is that we adopt QuicTransport as a starting point, because it sounds like it satisfies at least some use cases for everyone, but we do not preclude adoption of Http3Transport, in case the working group decides there is practical experience showing it's beneficial and the working group has resources to ship that draft.
C
EKR, I'm asking: are you going to respond directly to what Victor is saying? Yes?
N
I was going to, okay, yeah. I guess I would not be in favor of that proposal, because that seems to bake in QuicTransport, and then with high probability we'll end up doing both. I think we are getting close to having the information we need to make a decision, and if we're not, we should produce that information and make a decision, rather than deciding on one we like and then maybe adopting another later.
C
Thanks, thanks, EKR. So it sounds like there's a majority of folks who would prefer to only do one, and one of the interesting things here is that we're not time-bound. Do folks have thoughts on this proposal? And I'm realizing we're getting really short on time, so please keep your comments as short as possible. Alan, go ahead.
L
Okay, I think I just wanted to second what EKR said, which is that if we're adopting QuicTransport now, it just creates a high bar to adopt the other one later, when it sounds like there isn't consensus about which direction we're going. I really view H3 as a superset of what QuicTransport can do, and you don't have to use the features that you're not using or that you don't want to. So adopting QuicTransport right now is just not what I've been favoring.
K
I think there's a risk here. I don't necessarily care as much about which one we pick, although I think we do have a set of needs that we need to fulfill, but going with one while telling ourselves that we're maybe going to go with the other: it would be really unfortunate if we ended up in a place where we're trying to get the earliest implementers to actually interop, and we find out that half of the people do one and half the other.
C
Okay, all right, I hear you. Back in the queue? If yes, please: 30 seconds, because we're almost at time, and then I'm cutting the queue.
N
So yeah, I think I'm happy for us to choose one, but I think we should choose the one we think is the right one. Yes, we could always add one later, but I don't think we should choose one on the theory that we're just getting experience; we should choose the one we think is the right one. So I guess I'm saying I don't think we should shortcut that discussion; I think we should try to decide it.
C
Oh, no, absolutely, and I wasn't proposing to pick any one in particular. I was saying that we should pick one, but clearly we're not going to pick one in the room today, because we're out of time. The A/V issues did really hurt our flow here, which is unfortunate. So we're at time on the session. We did not come to a resolution on getting out of the transport zoo; I sincerely apologize as chair for that, but we will do better.

I'm thinking that we need to accelerate this, because having progress happen every four months at the IETF is not good enough. So I'm thinking that an interim might be a good idea in the near future: get all this discussion happening on the list, and then have an interim meeting sometime in the not-too-distant future to really try to drive this home. I would really love us to get to a point where we can pick some number of the protocols and adopt them in the working group, so we can actually move from zoo-visiting to protocol design.