From YouTube: IETF108 WEBTRANS 20200727 1410
Description: WEBTRANS session at IETF 108, 2020/07/27 14:10
A: Thank you so much, Spencer. How about an Etherpad?
A: Lucas, can I convince you to do it?
D: I'm really bad at note-taking. If somebody wants to help me…
B: Awesome, thank you so much, Amelia. And for everyone else in the session: please do feel free to take a look at the notes, and, for example, if you've said something, please check how what you said was transcribed and feel free to make edits if it's not correct. Thanks.
A: Then Victor will go over the WebTransport overview and requirements. We'll have a presentation from Eric on HTTP/2, then Victor will talk about QUIC and HTTP/3, and then we'll try to wrap things up. One thing that people have been noticing at IETF 108 is that Meetecho will cut us off, apparently to the minute, once our session is done, so we need to be conscious of time. That has been fixed? Oh.
F: Right, so thank you very much, Bernard and David, for the invitation. I'm going to give a short update on advancements at W3C, and then I'm going to go through a short section on the use cases, as one of our first actions.
So the working group charter within W3C has been published; the link is available on the slides here. The charter process is underway, voting is taking place. We expect a decision in August, and then I've posted the rough timeline.
F: The first teleconference should start about September 20, and if all goes well, progress through to sometime in 2023 with various intermediate stages. Our first order of action is a use case document. It's been stated that there's been a lot of interest and time spent in promoting WebTransport and its ideals over the last two years, but there hasn't been a coordinated use case document, even though use cases have sprung up in many places. So we're going to initiate that; we hope it can form a foundation for both the W3C API activities and also the core transport development.
F: I realize I haven't been involved in IETF activities previously, so I am new to this group. I have been involved in standards within the DASH Industry Forum and with CTA WAVE, and in ancillary roles within W3C, so I'm excited to take on this position. Next slide, please. Okay.
F: Okay, thank you, Bernard. Right, let's get going. What I'm citing here is a couple of sources: we polled all the various presentations and web activities that we could find. I think many of you on the call are going to find that these are links to your documents.
F: We've collected these in a temporary home in a Google Doc, which is public and accessible there. This will be moved as soon as possible over to the WICG WebTransport GitHub; we'll issue a PR there, and then we can have a more structured curation and debate of these use cases. At the moment my goal is simply to collect them all, and then we can start pruning the tree. Next slide, please.
F: I'm going to go through some use cases here, and again, the order does not imply any priority; this is the ordering in which they've been listed in previous documents. So, machine learning is listed as a single topic. I think primarily the use cases here relate to the I/O between a cloud-based ML process and the clients.
F: Okay, security camera analysis is another opportunity here. This is data and/or video being sent to the cloud service from many machines, so not necessarily human-interface devices. There's also an opportunity here for maybe not sending traditional video: for security you might only need to send motion vectors and edge detection, do some sort of preliminary analysis on the client, and then send a reduced data set up to the cloud.
F: And that's been cited as a QUIC transport use case. I do want to say from the beginning: we want to separate use cases from requirements, and requirements from solutions. So we'll come later to the question of whether this is QUIC transport, HTTP/3 transport, or something else. I think first we should define the use cases, from those extract requirements (what do we need to do?), and then look at the technology set we have, and ask whether we have to invent something new to do it.
F: Well, then we shall do that. Multiplayer gaming, both web and console based: there are the gameplay instructions being sent from the client, wherever it may be, down to a coordination or gameplay server. There's actually a mixture of the data being sent. Some is very time-sensitive, such that late data has no meaning, which would be location-type data.
F: It's also this game flow that has to go to a coordinating server, but there is data that may be more efficient going directly to peers as well. And there have been use cases cited for AR gaming that require real-world interaction, so you need very low-latency feedback of your real-world environment intermixed with the gameplay. Virtual theater is a use case that falls within this, where you have actors in different, geo-separated locations with virtual backgrounds, but they are synchronized sufficiently well that it appears to be a coordinated theatrical presentation. Next slide.
F: So, low-latency live streaming. This is the one that's closest to my heart: I work at Akamai, and I'm responsible for live streaming. There's the unidirectional broadcast case: one stream to a million people. Traditional news, events, wagering. The latency I've listed here is sub one second. We've done this because traditional segmented media today can certainly be hitting the low seconds in terms of end-to-end latency.
F: …if there were an opportunity for reduced connection time and complexity compared to what we have with WebRTC. FaceTime is a good example of few-to-few connections. Again, the debate here is: is this better satisfied by QUIC transport or HTTP/3 transport? That is a subsequent decision, as is whether we include this use case as one of our goal-based use cases. Cloud game streaming: here the game is actually being rendered at the edge of the cloud.
F: Google Stadia is a good example of this: the game is transmitted to a much thinner client that's not having to do the heavy work of rendering the game. The latency requirements here are some of the strictest amongst any of these use cases; certainly for video, a couple of frames or so of data, or even less, is preferred.
F: Next slide, please. So, server-based video conferencing. This is exactly what we're taking part in right now. Obviously it's traditionally satisfied via WebRTC, but there are requirements around simpler session establishment: if you know you're talking to a server, you don't have to go through the full NAT traversal and connection overhead that WebRTC is designed to provide. There are also requirements around censorship circumvention: not leaking as much personal information during session establishment as perhaps WebRTC leaks today. And the question will be whether this is QUIC transport or HTTP/3 transport. Bernard?
A: Yeah, probably not time right now, but it will come up later. So, okay, maybe we can go back to the slides, if people want to talk about these things.
F: Certainly, I'll progress without a discussion, because I don't want to have a long one on each of these. Remote desktop: another submitted use case. For all of these there are technologies with which we can do them today, but perhaps we can have technology so we do them better. So it's transmission of screen sharing, collaborative work on screens scaling out to large audiences, online document sharing and live document editing, synchronized mouse movement, and remote control and assistance, with IT support taking over a system.
F: It's a step beyond video conferencing, with far more interaction with the local applications than you might have with a web-conference-based system, but it bears a lot of the same parallels. It's also client-to-server with a mixture of P2P, and it is being promoted as a QUIC transport use case in this use case document. Next slide, please.
F: Time-synchronized multimedia. This is an interesting one: the ability to have people singing or playing musical instruments together when they're physically separate, but with the audio and video precisely aligned enough that it would be a pleasant singing or musical experience.
F: If your kids have tried to do this at school for choir or music practice, you realize that even with WebRTC we're woefully inadequate at any sort of time sync there today. So: a fundamental layer for accurately syncing the video, still with very low latency, and the audio. IoT sensor and data analytics transfer: this is all about the efficient, and perhaps intermittent, transmission of data.
F: For example, if a very low-power device wants to send a one-bit flag with HTTP/3 today, there's a lot of overhead that has to be incurred simply to carry that one bit of information.
F: GPS updates and mouse clicks on sites are specific examples in that role. So we might want to send a little bit of data, some of it possibly lossy, some of it not. Sensor data upload may include filters, aggregation, and triggers on the data as it's being transmitted, either on the client or up at the server. And then pub/sub models: they're in use today, we use long polling and other techniques for doing them, but could they be done more efficiently?
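The filter/aggregate/trigger idea can be sketched minimally; the window size, threshold, and function names here are invented for illustration, not from the session:

```python
# Illustrative only: client-side aggregation with a trigger threshold,
# so a low-power device uploads a small summary instead of raw readings.
def aggregate_readings(readings, window=4, trigger=30.0):
    """Average each window of readings; emit only windows whose
    average crosses the trigger threshold."""
    uploads = []
    for i in range(0, len(readings) - window + 1, window):
        window_avg = sum(readings[i:i + window]) / window
        if window_avg >= trigger:  # trigger: only upload "interesting" data
            uploads.append(round(window_avg, 2))
    return uploads

raw = [10, 12, 11, 9, 40, 42, 38, 41, 15, 14, 16, 13]
print(aggregate_readings(raw))  # [40.25] -- only the middle window crosses 30.0
```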
F: These would drive social feeds like Twitter at extreme scale, financial tickers, and the messaging and chat platforms that are being built today; and this is perhaps better served up front by an HTTP/3 transport use case. Next slide, please.
F: Could they be solved to an improved degree by extending existing technologies, WebSocket and WebRTC being the obvious candidates here? And then, if the answers are no to these, does this warrant the development of a new technology, and should that new technology be QUIC transport, HTTP/3 transport, or another transport? We don't have time in this call today to debate these, but these are the questions that need to be asked. The other very important question up front is: who creates the goals and non-goals between IETF and W3C?
F: We certainly require coordination, but if W3C is hoping for API development in a certain line that the IETF is not developing, or the reverse, the IETF develops a capability that is not intended to be utilized by W3C, then we've lost coordination. So we do need to think about pruning some of these use cases, and the encouragement here is my personal one: I would hope that WebTransport can do a few things really, really well, versus attempting to satisfy…
F: …perhaps every use case that I've just listed. In the interests of it being an efficient new development, we should try to constrain it. And I did want to extend my wishes for fruitful collaboration. I will be reaching out through Bernard to this group; I will be inviting this group to participate, and we can address the issues that I have on the slide here, and I hope we can do so openly and efficiently.
D: Cool, thanks for the presentation, Will. Just a point of commentary on your last point there about collaboration between the IETF and W3C. Some of us, like me, are familiar with both groups but tend to stick to the IETF work a bit more than others. What's a good way of keeping track of this work in W3C?
F: So my plan was to talk with Bernard about this. I don't know what our meeting cadence and frequency will be, but I would hope that, certainly as part of each meeting that I intend to run within W3C, I would like to do an IETF update; and if there are matters that we need to discuss between the two groups, we can either invite representatives to the other group's meeting or schedule an ad hoc meeting that the interested parties can then join.
D: Cool, thanks. I think if those kinds of things come back to the mailing list, that would really help. Even if there's maybe a bit of duplication for people who follow both things, it would definitely help me, at least, avoid missing things.
F: Additional questions? People may also want to raise their hand, saying, "hey, my use case is not there, I have a definite one"; there will be opportunity for that. We're not closing that use case document. I will, through the mailing list and through this group, put out the link to the document online, and you'll be able to file a GitHub issue and add your use case to it.
A: Thank you. All right, so, Victor, you are…
G: Can you hear me? (We can hear you.) Cool, okay, excellent. Hello, I'm Victor Vasiliev, Google. I'm the editor of the WebTransport overview and requirements draft, also known as the WebTransport model document. Next slide.
G: So, as a reminder, the purpose of this document is to establish the bridge between W3C and IETF and, in general, to formulate what WebTransport is and what we expect the protocols that the IETF develops to provide, so that W3C can build an API on top of those protocols. So that's the summary. Next slide.
G: So the main update is that this draft was adopted by the working group. There is a GitHub repository which has issues, and I encourage everyone to take a look at that repository. It currently has three issues, which are mostly me going through all the issues that we had on the WICG draft and moving them over to IETF, because I believe they're more appropriate here, and I'll go over those issues.
G: So, as we might know, all of our protocols have to provide streams, which are reliable and bidirectional and/or unidirectional. And there is some text in the draft which basically says that the stream ID is an opaque 64-bit blob and we do not assume any structure, mostly because the initial idea was: well, we don't want to copy the QUIC structure, because QUIC has a very specific structure and it's unclear whether it would map to our TCP fallback, and we would not want to over-constrain ourselves. So, next slide.
G: So the story with stream IDs is that they were in the very original API draft, and then in Chrome's implementation of QuicTransport, and I think in the current revision of the API draft we removed those. There are two reasons why we removed them, and why this is a hard issue. The first one is: if we have stream IDs, we need to decide how the stream IDs interact with proxies, for instance.
G: One problem is that two streams can arrive at, say, a reverse proxy in a particular order; let's say stream three arrives first and stream one arrives second. It is quite possible that the proxy will open stream three first, because it doesn't know what stream one is and whether it belongs to WebTransport or to something else, and then it will have to open stream one.
G: But this way the actual endpoints, the actual client and the actual server, do not have a consistent view; and this is one of the problems we would have to deal with if we decide to expose stream IDs to the application. The second one is that, well, if it's HTTP/3 transport and we're multiplexing multiple transports onto one connection, there is no one-to-one stream ID correspondence. And the other problem, which is not related to proxying, is that if you have a pool of transports, you can't use the exact stream IDs.
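A toy simulation of the inconsistency being described, assuming a proxy that allocates its own odd-numbered stream IDs in the order streams become ready on its side; the allocation scheme is invented for illustration, not from any draft:

```python
# Toy model: a proxy re-allocates stream IDs on its outgoing connection
# in whatever order it happens to process the incoming streams.
def reallocate_ids(arrival_order):
    """Map client stream IDs -> proxy-assigned IDs (odd, in arrival order)."""
    return {cid: 2 * i + 1 for i, cid in enumerate(arrival_order)}

# The client opened stream 1 then stream 3, but the proxy processed 3 first
# (e.g. it had to figure out whether stream 1 belonged to WebTransport).
mapping = reallocate_ids([3, 1])
print(mapping)  # {3: 1, 1: 3} -- the two endpoints now disagree on IDs
```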
G: But web developers actually came to the GitHub of the API and asked to add them back, and the reason is that a lot of them care to know exactly in which order the streams were created, because they use that ordering, for instance with a video stream, to order frames. And this is actually interesting, because this makes our current definitions draft useless even with what's there, because we do not actually require stream IDs to be ordered.
G: And in fact that ordering is very weird if you just copy QUIC stream IDs as-is. So that's the first issue. Next slide.
G: The second issue is that we have no consistent stream reset semantics. Namely, if we try to unify semantics across something like QUIC and something like HTTP/2, they have different semantics: in QUIC you can reset your write half, and you can ask the peer to reset its write half, but you can't actually close both halves of the stream yourself; whereas that is how TCP works, and that is how HTTP/2 stream termination works. And we have two options either way.
G: The final issue is that there is some ambiguity. We define WebTransport in terms of streams of bytes, because a stream of bytes is the primitive which all of our transports are based on. That's HTTP/2, HTTP/3, QUIC, etc.: they're all based on streams of bytes. But all the previous APIs that dealt with a similar problem on the web used finite-size messages, streams of finite-size messages; WebSocket and RTCDataChannel use those. And there are some questions about whether we want to do anything to address that.
G: It is quite possible that doing that would introduce complexity, both in terms of adding entities to the protocol, and in terms of just designing the API for messages: making messages actually have a variable size, such that you can send large messages, is hard, and for what it's worth, neither WebSocket nor RTCDataChannel really does that, even though on paper they claim to support it.
G: The first one is: we define streams and we define operations on streams, but we've not actually spelled out the state machine, and we should spell it out, just to give an idea, and because it would be important as a reference when we define the API. And the second point is that there are to-dos in multiple sections; the biggest one is the question of what we do with priorities.
G: And I don't think we're at the point where we're quite ready to address that, because we have more pressing matters. But that's a summary of all the currently open issues. Next slide. So that's basically all the interesting, important issues that I filed based on what came up during the API discussion.
G: People are welcome to ask more. I want to know, since we have time, whether people have comments on those three issues presented.
A: In the queue, Victor. I have a comment on the state machine issue. I think it is important to get clarity on the state machine at various points. In the API doc we actually did have a state machine figure, but as we added transports it became more and more complicated, and as QUIC evolved we had to change the state machine. So it's definitely something that people will want to understand and make sure is correct.
G: I agree. My personal position on issue number three is that, since none of our protocols supports messaging, we should just stick with streams; and once we stick with streams, if applications want message delineation, they can do it themselves. It's not that hard.
B: The key distinction there (just jumping in, this is David) is that the property that folks often want with messages is reliable and in-order, and that requires putting multiple separate messages on a given stream to benefit from that stream's ordering; and it's possible for the application layer to do this. And then the question for the working group is: is this a feature that we want to build into WebTransport, or do we just provide streams and leave the framing inside those streams as an exercise for the application? Both are feasible.
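A minimal sketch of such application-level framing, using a 4-byte length prefix per message on a reliable, ordered stream; the wire format is an illustrative assumption, not a WebTransport mechanism:

```python
import struct

# Illustrative application-level framing: each message on a reliable,
# ordered byte stream is preceded by a 4-byte big-endian length.
def frame(messages):
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def deframe(data):
    messages, offset = [], 0
    while offset < len(data):
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        messages.append(data[offset:offset + length])
        offset += length
    return messages

wire = frame([b"hello", b"", b"world"])
print(deframe(wire))  # [b'hello', b'', b'world']
```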
I: Erik Nygren, Akamai, on issue three. I also think that having messages gives more flexibility for applications: to be able to do things like saying "this is a unit that I'm okay with being lost as a unit," or, if we want to apply different additional semantics on that later, being able to say things like "I would prefer these not be sent in the same underlying packet," so that if you build something using FEC on top of this, you have a little more control over which sets of messages are more or less likely to get lost relative to others.
G: I guess a datagram model would address that; streams are in addition to datagrams. When we're talking about messages, we're specifically talking about messages that are (a) reliable and (b) span multiple packets. That means, if you are willing to use datagrams, you can use datagrams and deal with all of those issues yourselves, and we can imagine adding such low-level control, like anti-affinity of packets, there. But trying to build it with messages, which is a higher-level concept, seems like a lot of effort, and it's unclear whether it's beneficial, because I suspect most people who want that kind of control would try to build it on top of datagrams themselves.
B: Before I add the next person: please, everyone, even if I introduce you, every time you speak after someone else, please restate your name. Meetecho is missing a very important feature, which is to visually show who's talking, and this is making it very hard for our scribes and minute takers. So please, every time you start talking, just start with your name. Thank you. Next up, Ben Schwartz.
J: Hello. (I am "squishy"? Yes, okay, I'll deal with that later.) Okay. I just wanted to point out, I guess, one really silly thing, which is that the WebSocket specification and protocol have a bunch of other stuff in them, like the concept of binary versus text frames, or messages.
J: So it's not enough to just create some kind of sequential message delivery system and call it a WebSocket replacement, if you really want a full drop-in replacement.
J: That also naturally applies to HTTP/3, and it continues to give you the in-order behavior. So I guess personally what I would like is, you know, let's develop the cleanest protocol we can think of, and let's try more on the W3C side to find an API specification that allows current WebSocket users to easily upgrade.
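To make the text-versus-binary point concrete: a shim would need at least a per-message type tag so the receiver knows whether to decode the payload as UTF-8. This is a simplified sketch, not the actual RFC 6455 wire format:

```python
# Sketch: prefix each message with one type byte, like WebSocket's
# text (0x1) / binary (0x2) opcodes.
TEXT, BINARY = 0x1, 0x2

def encode(msg):
    if isinstance(msg, str):
        return bytes([TEXT]) + msg.encode("utf-8")
    return bytes([BINARY]) + msg

def decode(data):
    kind, payload = data[0], data[1:]
    return payload.decode("utf-8") if kind == TEXT else payload

print(decode(encode("hi")))         # 'hi'  (a str, not bytes)
print(decode(encode(b"\x00\x01")))  # b'\x00\x01'
```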
G: Thank you for pointing out the text-versus-binary message distinction; I completely forgot about that. But yes, that's one of the things that would probably prevent the idea of WebSocket and WebTransport fitting one into another.
B: From the discussion we had when chartering, being a drop-in replacement for WebSocket isn't currently a requirement for WebTransport. We could make it one if we wanted to, but (and this is my two cents) that's not absolutely necessary. We're not going to drop support for WebSocket in browsers anytime soon, so having people still use WebSocket is an option. All right, next up.
A: Just one comment on that, David. I think you also have to be clear about what you're trying to be compatible with. Are you trying to be compatible with the WebSockets API? Because it's possible to build, for example (and this was actually done; I think Peter Thatcher did a sample), a shim of the WebSockets API, like a message API, on top of WebTransport.
G: There are still two people in the queue. Okay.
K: Okay, thanks. Ian Swett. I guess I wanted to say that the idea of messages is interesting, but from this conversation it seems like maybe there are a few different sets of use cases: some people who care about the transmission order of the messages enough to put in the effort to reorder them when they arrive, and other people who actually might want head-of-line blocking. And I guess, putting a message…
K: …kind of substrate on top of the stream does provide head-of-line blocking, but it does not actually make it that easy to do out-of-order delivery, because it's difficult to figure out exactly where the frame boundaries are. So I think having a clear idea of what the requirements are, whether we really want in-order delivery or just want to know what the original transmission order was, would be helpful.
L: Hi, Jana Iyengar. Ian said mostly what I wanted to say; I'll add to it a bit: that streams…
L: I would recommend thinking about streams as very lightweight things, and if you want to think about messages, that's fine; a message doesn't have to span multiple packets, is what I would say. You can have it in such a way that it's basically two bits: you need a begin bit and an end bit for a stream, effectively, and if you have those as part of the message that you're sending, it can all fit within a packet, if designed to be that lightweight.
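The "two bits" idea might look like the following, with BEGIN/END flags per chunk so a small message fits in a single chunk (both bits set) while a large one spans several; the flag values and helper names are illustrative assumptions:

```python
BEGIN, END = 0x1, 0x2  # illustrative flag bits, not from any spec

def chunk(message, max_payload):
    """Split a message into (flags, payload) chunks."""
    parts = [message[i:i + max_payload]
             for i in range(0, len(message), max_payload)] or [b""]
    out = []
    for i, p in enumerate(parts):
        flags = (BEGIN if i == 0 else 0) | (END if i == len(parts) - 1 else 0)
        out.append((flags, p))
    return out

def reassemble(chunks):
    messages, current = [], b""
    for flags, payload in chunks:
        if flags & BEGIN:
            current = b""
        current += payload
        if flags & END:
            messages.append(current)
    return messages

print(chunk(b"hi", 100))                     # [(3, b'hi')] -- one chunk, both bits
print(reassemble(chunk(b"hello world", 4)))  # [b'hello world']
```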
L: I have one other comment, on the mapping of streams to QUIC. Yeah, that first set of issues, saying streams are hard: that's going to be difficult. I don't know what else to say at the moment about that, but that is going to be a difficult thing to accomplish across these different…
M: So since we chatted last time, we got some really good feedback from a lot of people. So thank you to everybody who had really good comments and contributed there.
M: We've moved a bunch of those to GitHub issues, and if you go through the PDF of these slides, that link may or may not work, but we can drop it in the minutes. And we updated to a -01 draft, with not too many changes. Next slide, please. I wanted to split talking about this into two separate sections: one section is a little bit of concepts, fitting in very nicely with the previous conversation that Victor was just prompting, and the second part is a couple of actual issues with specific questions that we'd really like some feedback and answers on.
M: Do we want to be able to run WebTransport sessions side by side with other WebTransport sessions? Part of being able to support that full feature set that we think we're getting from WebTransport alongside HTTP means bringing back to H2 some of the capability that we have in H3. Next slide, please.
M: As part of that, we're still missing a couple of things. One of those is unidirectional streams, which we've kind of waved off as "oh, that's easy, you just don't use one half of a bidirectional stream," but we need to actually formalize how we do that. And then datagrams, and especially unreliability, which the overview and requirements document gives a really nice treatment of: there are things that you might want, and in H3 mode and in some modes you may have them; in other modes you may not.
M: As part of that, we have two things we need to think about. One of them is: how do we actually implement it? Do we do each message as a stream? Do we want out-of-order? Ian's comment earlier was spot on: I think we really need to clarify how much we care about ordering. You're going to, by definition, get some head-of-line blocking when you're going over H2, et cetera, et cetera. And then the other half of that is: how do we expose that to the user? There's clearly…
M: …what do we do to implement it, and then there's also: how does this get expressed in the API? That's something that comes up in the proposal for the API, especially around W3C. So I would second a lot of the conversations of: let's make sure we bring that together, so everybody's talking to everybody. Next slide, please.
M: As a brief aside, I put this in here to potentially spark some conversation later.
M: At a larger scope, not necessarily even just within WebTransport: we need to figure out whether we want to have something sitting on top of TCP that provides a lot of what QUIC gives you, or do you give a thing over H2 that gives you much the same as the kind of bottom half of H2, which is now QUIC in H3 land, and do that? Next slide, please.
M: So those are a couple of the larger questions of, you know: where do we want to go, what do we need to think about? Let's set the stage for things. These are very specific GitHub issues. Most of them are coming from something that we've been talking about already, so I put in the quote from the original WebTransport overview that says what it needs to look like, and that's where we need to figure out where things go.
M: So for this one, issue number three: we want to be able to have new streams without additional round trips. That's actually brought up as one of the downsides of WebSocket: we don't necessarily want to do a handshake every time, at least if we're not allowed to send data before we do that handshake. In H2 we have this concept of the connect stream.
M: HTTP/2 is over TCP, and therefore doesn't offer certain unreliability things; we likely need to have the corresponding section, which already exists for the most part, of what we actually require from each thing, so that we can make sure that we have this equivalent for each mapping of WebTransport onto the different H1/H2/H3, however that wants to go. So I think the biggest question to answer here is: do we want to do anything special to specify how you can do very lightweight…
M: So this is a fun one: unidirectional streams. We know we need to be able to provide datagrams, unidirectional, and bidirectional streams, such that we can do much of what I was just talking about, where we make sure that, regardless of mapping, you have enough of the features you want available to be able to make some forward progress on what you're doing. This has pretty much an A/B choice.
M: It seems like so far the proposals are: either open up H2 streams and actually half-close them, such that H2 in its state machine believes that the streams are effectively unidirectional at that point; or do we decide that that comes with some other connotations, and has other meaning inside of H2 about the stream lifetime, that we don't yet want? And so on top of that, we would have WebTransport simply say, you know, no…
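The half-close option can be sketched with a minimal stream state machine loosely modeled on HTTP/2's (RFC 7540, Section 5.1); the class and state names here are illustrative, not from any draft:

```python
# Minimal HTTP/2-style stream states (loosely after RFC 7540, Section 5.1).
# Emulating a unidirectional stream: open it and immediately half-close
# the unused direction.
class Stream:
    def __init__(self):
        self.state = "open"

    def close_local(self):
        self.state = "half-closed (local)" if self.state == "open" else "closed"

    def close_remote(self):
        self.state = "half-closed (remote)" if self.state == "open" else "closed"

# From the sender's perspective, the peer never sends on this stream,
# so the remote half is closed up front.
s = Stream()
s.close_remote()
print(s.state)  # half-closed (remote) -- effectively unidirectional
s.close_local()
print(s.state)  # closed
```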
M: Some of that was talking about what the actual pieces of metadata are that you carry. We say: oh, HTTP is nice because it gives you some handy metadata, and you can have this additional negotiation or carry additional things. If you brought that back to QUIC and wanted to do it just on top of raw QUIC, there were some opposing sides: "oh well, you don't really need to bring that much back" versus "oh, but we don't know yet what we're going to need."
M: This is another case where we need slightly more metadata than we had before, to carry the type of stream that we'd want to have. And so, as we bring that in, do we look forward to having more and more and more of those, and does that impact our choice of whether we do this over HTTP or just over raw QUIC? Next slide, please.
M: Datagrams. Fun. We've already got text in the overview that says you don't necessarily have to retransmit datagrams, but you might, because you might have no choice. One of those things is that we need to have the API be such that the application knows what it requested versus what it has. That's great. The other question becomes implementation-wise:
M: How do we actually do this? Alan brought up most of these options and I think they're really good, so I thought it was worth talking about them here. Do we want to have a dedicated datagram stream?
M: Do you want to open a new stream per datagram, and do that lightweight streamless message, which is the discussion that we were just having? So there's a whole spectrum here, and a lot of it really does come back to what Ian said: we need to figure out what our requirements are. Do we really care about the ordering? Do we want to support it being delivered out of order?
M: How much overhead do we think we get from each new stream-as-a-message, and is that similarly cheap in H2 as it is in H3? I know Jana was saying it's not super expensive to do that, you can fit it all in one packet; we need to make sure that that's also true for H2. In general it seems like it is, but there's a little bit more to think about there. Next slide: part two of datagrams, flow control. So this gets to be a little bit more interesting.
M
I made a note here that having it be a new frame to carry datagrams makes it a little bit easier to reason about, because you don't necessarily have to shoehorn yourself into an existing treatment of how streams contribute to flow control in h2. But if you exempt them from flow control, we need to make sure that the interactions these datagrams have with other streams still allow the receiver to inhibit the ability of datagrams to fully consume every flight of data, because you obviously don't want them to starve everything else out.
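One way to picture the receiver-side lever being discussed: if datagrams are exempt from flow control, the receiver can still cap how many datagram bytes it accepts per receive window and silently drop the rest, dropping being the only backpressure datagrams have. The budgeting policy and names below are a hypothetical illustration, not anything from the drafts.

```python
class DatagramBudget:
    """Caps the share of a receive window that datagrams may consume, so
    flow-control-exempt datagrams cannot starve the streams out."""

    def __init__(self, window_bytes: int, datagram_fraction: float = 0.5):
        self.datagram_fraction = datagram_fraction
        self.budget = int(window_bytes * datagram_fraction)

    def try_accept(self, datagram_len: int) -> bool:
        # Accept while budget remains; otherwise drop (no backpressure).
        if datagram_len > self.budget:
            return False
        self.budget -= datagram_len
        return True

    def on_window_update(self, window_bytes: int) -> None:
        # Replenish when the application drains data and the window reopens.
        self.budget = int(window_bytes * self.datagram_fraction)
```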
M
That also introduces an interesting caveat if you say datagrams aren't part of flow control: when the receiver at the other end gets them, if they're backed up and they'd like to exert backpressure, there isn't really backpressure for datagrams, so they can just drop them. You now have somebody who thinks they fell back to TCP and may say, "Oh, I've now got a reliable link."
M
It changes one piece of it over one link between two particular hops on what may be a several-hop path to your ultimate destination. And then the other question is: if we're going to make it a new frame, or do something that's a little bit more disruptive to h2 rather than mapping on top of h2, a lot of the conversations that we're having about this seem like they would be interesting in a more general context.
M
Finally, one thing that came up a little bit earlier, and I thought I'd note here, is we've made a decision that I think we're all pretty happy with right now: that we don't want to have the underlying QUIC or h3 or h2 stream IDs be exposed to the client. But, as Victor said earlier, right now there is kind of a stream ID space for each WebTransport session. That's still in the document a little bit, but then there's discussion on GitHub about that coming out, or then going back in.
K
Oh, Ian Swett. I had kind of a high-level question. I think a lot of these questions that are being brought up are really good. How many of them do we need to answer before we pick a technical direction here, and what are the most important ones in the view of the participants and chairs?
M
I have a brief note on the h2 document specifically, and then I'll hand it over to Bernard and David to answer the other half. The first half of this presentation, which kind of ties into the second half, is a lot of stuff that Victor has talked about and that we've just been talking about, and I think some of those we need to collectively decide as a group, where we want to go. A lot of the other ones are much more implementation details, assuming we're already going in that direction.
M
Just, you know, what do we need to make progress on the h2 document itself? And those I don't think we need to answer before we decide, for example, to adopt a particular direction, which I think is going to come up later as Victor talks about the different layers of what we could do. So I think the big question for today is very much that.
A
Well, Victor has a presentation later in the session which relates to the HTTP/3 transport versus QUIC transport, and I think we referred to some of that earlier. So that seems like a big one to me, but.
B
Well, I wouldn't necessarily count on the chairs; I don't think we have authority on deciding what the important questions are. That's definitely a matter of working group consensus. But I agree that we have some of these questions that span the drafts, and I think maybe we should focus on those, potentially in collaboration with the W3C for those that directly impact the JavaScript API.
B
Yes, I agree, that makes sense. All right, and then next in the queue I have Simon Hex, who's requesting screen sharing. It looks like all the slots for the requested media are taken.
G
So, WebTransport over HTTP/3. I've not updated the draft in any way; there might be some updates we will need to bring it into consistency with the HTTP/2 draft, but it basically intends to do the same thing, but for HTTP/3. I've not updated it because I want to gauge our general direction before we proceed with getting to the nitty-gritty, and I actually hope we could potentially merge those two drafts. Next slide.
G
The QuicTransport URI scheme has the server name, which is sent as the SNI; it has a port number, which is the UDP port to which you connect; and it has a path and query value, which are sent as part of the handshake. The handshake is like a request header which has two fields: one of them is the origin of the page, to satisfy CORS requirements; the other is the path specified in the URI. Next slide. So, the status of QuicTransport is that we implemented it in Chrome.
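The client-side pieces just described can be sketched roughly as follows: split a quic-transport URI into the SNI host, the UDP port, and the path plus query, then assemble the two-field client indication (origin and path). The function names and the list-of-pairs representation are assumptions for illustration; the actual wire encoding is defined by the draft.

```python
from urllib.parse import urlsplit

def parse_quic_transport_uri(uri: str):
    """Split a quic-transport:// URI into (sni_host, udp_port, path)."""
    parts = urlsplit(uri)
    if parts.scheme != "quic-transport":
        raise ValueError("expected a quic-transport URI")
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    return parts.hostname, parts.port, path

def build_client_indication(origin: str, path: str):
    """The handshake carries two fields: the page origin (for the CORS
    check) and the path taken from the URI."""
    return [("origin", origin), ("path", path)]
```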
G
It
is
currently
available
in
chrome,
stable
as
an
origin
trial.
It
will
be
available
for
approximately
until
november,
which
is
when
our
origin
trial
will
automatically
expire
and
it
will
stop
working,
as
is
it's.
We
did
some
interrupt
and
I
think
we
interrupt
with
ourselves.
G
We
interrupt
with
aao,
quick
and
the
chrome
84
implements
draft
27
and
chrome
85
and
later
implements
draft
29
as
well,
and
it
supports
version
negotiation
so
that
something
will
that
we
hope
to
use
to
gain
more
developer
feedback
about
what
developers
like
and
don't
like
about
the
api
and
quick
transport
as
is
and
the
hopefully,
this
might
assist
us
in
the
decisions
that
I'm
going
to
discuss
now.
G
So
next
slide.
So
now
to
the
actual
big
question
is
that
we
have
four
drafts
and
we
need
to
decide
which
of
them
we
want
to
adopt
and
which
of
them
we
do
not
want
to
adopt.
At
this
point
or
possibly
ever.
This
is
a
continuation
of
discussion
we
had
as
a
previous
meeting.
We
had
since
had
some
discussion
on
the
mailing
list,
which
I
believe
helped
clarify
a
few
points,
but
I
don't
think
we
still
we're
still
not
at
the
point
where
we
quite
have
consensus
or
even
enough
information.
G
So
the
transport
purpose
so
far
are
quick
transport,
which
is
minimal.
Quick
based
web
transport,
implement
protocol
http,
2
and
http
free
transports,
which
are
protocol
extensions
to
https
that
allow
a
web
transport
session
to
exist
inside
an
existing
http
connection,
and
there
is
a
hypothetical
fallback
transport
which
is
in
we
didn't
write.
I
never
wrote
it
but
like
there
was
an
idea
of
implementing
something:
that's
polyfillable
in
the
web
as
it
is
today
you
say
websockets
and
the
questionnaire
which
ones
we
want
to
adopt
next
slide.
G
So
there
are
two
important
access
in
which
we
compare,
which
is
one
trend.
Some
transports
are
quick
based
and
some
transports
are
tcp
based
and
we
want
to
adopt
at
least
one
quick
based
and
at
least
one
tcp
based.
We
want
one
quick
base,
so
we
can
get
unreliable,
datagrams,
etc
in
ideal
case
and
one
tcp
base
file
by
four
situations.
One
quick
is
blocked
and
dedicated
versus
pooled,
so
quick
transport
and
fallback
transport
always
have
their
dedicated
quaker
tcp
connection.
G
QuicTransport, to be precise, has a nice property that there is no HTTP dependency, which potentially simplifies things; I'll get to why this may or may not be the case later. And here is an interesting list of target applications, which is very speculative, based partially on what I gathered from the developers with whom I talked, and partially on my intuition of what would be better applicable.
G
That is to say, if you're connected to a website and you're requesting a new HTTP/3 transport, the browser can just go into its socket pool, find that it's already connected to the server, presumably because it fetched the top-level page over the same connection, and then you can just create a stream inside that connection. This is very convenient because it reduces the number of sockets in use, which means you can save on certain data structures, and this makes the entire thing potentially more scalable. And one important thing about having multiple connections to the same host:
G
The more tabs you open, if you create a connection per tab, you might at some point run into connection limits. And if you do pooling, all of the transport objects would go onto the same connection, so you would instead be subject to a different limit, which could potentially be different.
G
The other thing is that, since you already have the connection, you don't have to pay the connection-establishment cost in terms of latency. And the final advantage is that, since it's just an HTTP connection, WebTransport traffic looks identical from a handshake standpoint to HTTP traffic. So, an important thing to note here is that multiplexing is supported, and that means that the wire protocol allows you to put multiple transports on the same connection, or to share a connection between HTTP traffic and WebTransport traffic.
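The pooling behavior described above can be pictured with a toy socket pool: sessions to the same host and port reuse one connection, so the connection-establishment cost is paid once and each additional session is just more streams on it. All class and method names here are invented for illustration.

```python
class Connection:
    """Stand-in for one HTTP/3 connection carrying many sessions."""

    def __init__(self, host: str, port: int):
        self.host, self.port = host, port
        self.sessions = []

    def open_session(self, path: str) -> str:
        # A pooled WebTransport session is just more streams here.
        self.sessions.append(path)
        return path

class SocketPool:
    """Reuses one connection per (host, port), as a browser pool would."""

    def __init__(self):
        self._conns = {}

    def get_or_connect(self, host: str, port: int) -> Connection:
        key = (host, port)
        if key not in self._conns:  # handshake paid only on first use
            self._conns[key] = Connection(host, port)
        return self._conns[key]
```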
G
But
this
is
not
necessarily
a
requirement.
There
are
some
cases
in
which
you
might
want
to
have
a
dedicated
connection
and
there's
nothing
in
the
wire
protocol
itself
as
specified
in
the
drafts.
That
would
prevent
you
from
doing
that.
I.E,
opening
http
connection
dedicated
to
a
certain
object.
So
this
is
actually
despite
being
a
wire
protocol
concern.
This
is
more
of
an
api
concern
from
the
my
perspective.
J
So, on slide 44 you have this implication, I guess, that the dedicated transports are essentially more appropriate for use cases with tighter performance requirements. At least that's the net effect I see: on the left, the QuicTransport is faster, basically.
G
So
I
would
not
say
it
faster.
The
reason
they
are
put
there
is
that,
because,
in
those
use
cases
you
would
often
want
to
do
things
like
get
detailed
statistics
on
connection
level,
things
like
our
transmission
and
sometimes
even
do
things
like
custom
condition
control,
and
you
can't
do
that
really
with
a
pulled
connection.
G
Yes, so yeah, basically. But the point I was trying to make when I said that these are not inherent properties is that you can still do that with the HTTP transport. It's just that you have a connection with HTTP framing, but you don't really use that framing besides the handshake. That is to say, once you are done with the handshake, and assuming that you have an HTTP connection in some hypothetical dedicated mode, there is no difference between the HTTP transport and QuicTransport outside of minor differences like prefixing, and assuming that you're connected directly.
J
So
hi
meat
echo
kicked
me
out
for
some
reason,
the
the
the
claim
I
want
to
make
here
well.
So
apart
from,
as
as
you
pointed
out,
if
you
want,
if,
if
the
performance
of
of
multiplexing
is
a
problem,
you
can
always
just
choose
not
to
multiplex
as
a
server
the
player,
but
also,
if
you
do
multiplex,
then
you
have
a
single
congestion,
control
context
that
can
view
both
the
http
and
web
transport
traffic.
B
Then next up is Jonathan Lennox, and then Yutaka. By the way, please use the button to join the queue; that way you're properly ordered. All right.
N
I said you don't hear me until I hear the beep, which is an important thing to note. So, yeah, I guess my question also is, though, for the HTTP transports: is it anticipated that the client would basically have to support the entire HTTP state machine? In particular, you know, you can anticipate being redirected with a 3xx to a different location and things like that.
G
Yes,
I'll
talk
about
that
on
the
next
slide.
Okay,.
C
I don't want to run DNS over QuicTransport, but I do think it's instructive to look at things like that and say, if we actually want this to be used in contexts where it's somebody who needs this kind of transport and doesn't need to share a context with the web, you definitely don't want to make them use HTTP.
G
So
another
advantage
of
http
transport
and
the
reason
I'm
in
particular
enthusiastics
about
it-
is
that
we
can
use
http,
standard
metadata
format
and
the
reason
I
am
excited
about.
It
is
because,
as
we
I've
thought,
through
quick
transport
and
as
we
build
up
more
and
more
of
it,
it's
more
would
spend
time
building
it.
The
more
it
looks
like
http
so
like
right
now
it
is,
has
request
headers,
but
no
response,
headers,
which
looks
suspicious.
G
I
like
http
09
at
some
point
we
might
add
response
headers
and
there
are
also
headers
which
we
need
to
add,
potentially,
which
would
look
a
lot
like
http.
G
So
the
notable
examples
are
the
origin
header
which
we
already
have
and
we
can't
get
crs
without
that.
Another
example
is,
if
we
ever
add,
load
balancer
support.
We
would
want
to
have
forwarded
aka
also
relatedly
x,
forwarded
for
header,
because
this
is
really.
This
is
the
header
that,
when
you're
behind
the
load,
balancer
plus,
you
know
the
ip
address
and
identity
of
the
actual
requester
as
opposed
to
the
proxy
requester.
G
That's
a
example
of
a
header
that
a
lot
of
people
would
want
if
they
deploy
web
transport
in
the
load
balance
settings.
Another
very
popular
another
feature
that
people
often
want-
and
I
know
that
because
I
myself
found
myself
in
a
situation
with
a
certain
non-http
protocol,
where
I
really
needed
that,
but
did
not
have,
is
a
location,
header
and
301
for
two
status
codes
which
allow
you
to
redirect
the
url
from
one
to
another,
which
is
very
practical
for
operational
reasons.
G
So
and
besides
that
there
are
lots
of
headers
which
are
deployment
specific,
but
are
used
to
do
things
like
do
request,
racing
and
performance
tracing,
and
those
are
also
some
things
that
people
would
want,
and
people
have
already
built
up
infrastructure
build
out
infrastructure.
For
so
I
assume
that,
like
a
lot
of
those
could
be
potentially
ported
to
http
transport,
since
they
a
lot
any
headers.
That
does
not
make
a
specific
assumption
about
the
request
and
response
body
being
a
sequence
of
bytes
can
be
pretty
much
ported
to
http
transport.
G
There
is
a
counterpoint
to
that
that
there
are
headers
that
could
not,
and
there
are
cases
when
there's
assumptions
that
http
transport
is
in
fact
http
can
be
dangerous
and
confusing,
and
this
is
something
we've
experienced
with
web
sockets
when
people
assume
that
web
sockets
behave
like
http
in
cases
when
it
did
not
behave
like
http.
G
So, most of the disadvantages of HTTP lie in complexity, and the first complexity is implementation complexity. HTTP on one level is very simple, because effectively you just attach metadata to streams on both sides and you're pretty much done, and you don't need things like server push if you're implementing WebTransport only.
G
But
there
are
some
complications
with
the
fact
that
http
requires
mandatory
header
compression.
We
there
is
a
draft
that
I
wrote
for
http's
working
group,
which
would
solve
the
problem
by
making
that
negotiable.
So
that's
one
barrier
in
terms
of
complexities
that
we
can
remove
and
the
other
barrier
is
more
of
complexity,
barrier
to
the
clients
that
everything
that
involves
socket
pulls
is
subject
to
just
being
much
more
difficult,
because
socket
pulls
are
hard.
G
The
design
complexity
here
is
more
interesting
topic,
so
one
of
them
is.
We
have
to
define
interaction
with
existing
http
mechanisms
and
what
I
mean
by
that
is
that
there
are
headers
which
apply
very
clearly
like,
for
instance,
this
silly
thing
like
user
agent,
header
can
be
always
sent,
as
is
similarly
client
hints,
can
be
sent
to
this.
But
on
other
hand
there
are
headers
like
health
service,
which
are
more
tricky,
because
we
would
have
to
think
very
carefully
about
how
we
decide
between
http,
3
and
http2.
G
That is to say, you can give a QuicTransport object which just fails when you don't have QUIC, and the application can choose to either only use that, or use it but race it with a TCP connection, or use some other fancy scheme which they can fine-tune as much as they want. With the HTTP transport that is not possible, because the web application has no visibility into the current state of the socket pool, and also the current state of the Alt-Svc mapping.
G
So
whatever
we
would
have
to
define
some
semantics
for
how
http
is
socket
is
selected,
and
that
is
not
trivial,
because
there
are
questions
like
if
we're
using
web
transfers.
We
would
probably
always
want
to
try
quick
and
http
free
because
we
might
be
talking
to
some
server
we've
not
talked
before,
and
it
would
make
sense
intuitively
to
assume
that,
if
we're
using
web
transport,
it
probably
sticks
quick.
G
On
the
other
hand,
what
if
we
have
a
existing
socket
and
that
socket
is
http
2
and
the
reason
it's
http
2?
Is
we
never
got
l
service
mapping
so
http
3
is
there,
but
it
might
be
there,
but
it
does.
We
don't
know,
and
do
we
use
that
http
2
connection
immediately
or
do
we
try
attempt
http
free
connection
and
what
happens-
and
there
are
lots
of
questions
like
that,
which
I
currently
do
not
have
answers.
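The open question here can be made concrete as a socket-selection policy function. The policy sketched below (reuse a pooled h3 connection when one exists; when only an h2 connection exists and Alt-Svc never advertised h3, race a fresh HTTP/3 attempt against the pooled h2 connection) is just one possible answer, not specified behavior, and the names are invented.

```python
def choose_connection(pooled_protocol, alt_svc_advertises_h3: bool) -> str:
    """Pick a connection strategy for a new WebTransport session.

    pooled_protocol: "h3", "h2", or None when no existing socket.
    """
    if pooled_protocol == "h3":
        return "reuse-pooled-h3"
    if pooled_protocol == "h2" and not alt_svc_advertises_h3:
        # h3 *might* exist at this server; we simply never learned of it.
        return "race-new-h3-against-pooled-h2"
    if alt_svc_advertises_h3:
        return "connect-new-h3"
    return "connect-new-h2"
```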
G
The
other
questions
to
which
I
do
not
have
answer
is
how
do
we
make
sure
that
we,
in
a
situation
when
we
pull
multiple
transports
over
the
same
quick
connection?
How
do
we
avoid
a
dos
attack
by
one
transport,
just
opening
extremes,
because
they're
passed
because
usually
connections
have
a
flow
control
on
streams,
and
this
means
not
only
that
the
creation
of
strings
is
flow
control,
but
often
implementation
will
have
some
max
stream
limit
for
the
entire
connection.
G
It
will
not
let
you
go
below
be
beyond
that
limit
and
if
one
transport
just
exceeds
this
limit,
what
what
happens
so
that's
other
question
we'd
have
to
solve
and
we
don't
have
to
solve
it
in
dedicated
case,
because
the
quick
flow
control
maps
one
to
one
and
then
there
are
also
things
like
stats,
etc,
which
are
much
easier
to
define
with
dedicated
connection.
But,
of
course,
as
I
said,
we
could
make
a
dedicated
http
free
variant
and
just
let
applications
select
what
they
want.
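The pooled stream-limit problem just described can be illustrated with a per-session quota carved out of the connection-wide max-streams limit, so one session opening streams as fast as it can cannot starve its neighbors. This is a hypothetical mitigation for illustration, not anything from the drafts.

```python
class PooledStreamLimiter:
    """Splits a connection-wide max-streams limit evenly across sessions."""

    def __init__(self, connection_max_streams: int, max_sessions: int):
        self.per_session_quota = connection_max_streams // max_sessions
        self.open_counts = {}

    def try_open_stream(self, session_id: str) -> bool:
        count = self.open_counts.get(session_id, 0)
        if count >= self.per_session_quota:
            return False  # this session used its share; others unaffected
        self.open_counts[session_id] = count + 1
        return True
```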
G
So,
in
terms
of
implementation
experience,
I
think
we
have
some
positive
experience
with
both
quick
transport
is
implemented
in
chrome.
There
are
I've,
heard,
I'm
familiar
with
it
list
four
instances
of
indepen
of
relatively
independent
implementations
of
a
server
in
it
and
it's
as
easy
as
it
sounds.
G
It
works,
http,
transparent,
I'm
not
sure
about
the
very
current
drafts.
The
previous
versions
were
definitely
implemented
at
facebook
and
apple,
so
that
presumably
also
works.
I
don't
see
any
reason
so,
as
I
said,
there
is
much
more
complexity,
especially
in
terms
of
socket
pulls
next
slide.
G
So
there
is
a
question
of
how
which
of
those
transfers
are
mapped,
better
suited
to
which
use
cases,
and
as
far
as
I
am
concerned,
I
believe
that
all
of
the
use
cases
can
be
to
some
extent
satisfied
with
either
option,
because
the
reason
those
use
cases
are
appealing
for
web
transport
is
appealing
for
those
use.
G
The reason WebTransport is appealing for those use cases is not any specific reason inherent to those transports; for most of them, the most appealing part is obviously the support for partially reliable streams, independent streams, and unreliable datagrams, which is the fundamental building block of any WebTransport protocol, both QuicTransport and the HTTP transport. Now, of course, there are properties which, for some use cases, would make one more appealing. For example, as I said before, HTTP has the problem of load balancing covered: load balancing HTTP and using HTTP reverse proxies is a solved problem.
G
It's
very
well
understood,
so
you
we
know
for
facts
that
you
can
build
up
a
very
scalable
gtp
load
balancer
and
we
work
through
the
proxy.
So
this
means
that
any
case
which
relies
on
having
a
large
server
firm,
for
instance,
there
are
cases
like
push
notifications.
G
Those
cases
will
find
this
feature,
particularly
appealing.
On
the
other
case,
there
are
cases
where
the
complexity
really
matters.
G
One
interesting
case
is
I've
talked
to
people
who
do
game
development
and
they
said
that
they
often
have
to
shift
their
implementation
from
scratch,
because
they
write
code
for
unusual
platforms
with
unusual
tool
chains
where
they
might
not
necessarily
use
some
things.
That's
existing
open
source,
so
for
them
it
would
and
if
they
are
trying
to
make
like
their
networking
stack
unified
that
is
same
from
the
web
and
same
from
the
native
platform
codes.
They
they
would
much
rather
opt
for
something
like
quick
transport
and
in
general.
I
believe.
G
Complexity
is
important
because
from
what
I've
heard
complexity
was
one
of
the
most
biggest
barriers
to
adoption
of
rtc
data
channel
in
client
server
cases.
So
that's
my
view
on,
like
which
use
cases
are
better
satisfied
by
which
transports
and
the
brief
answer
is.
There
are
some
indications
one
way
or
other
way,
but
ultimately
I
do
not
believe
this
is
like
as
important
as
it
sounds,
because,
fundamentally,
what
you
need
is
you
need
to
have
datagrams
and
partially
reliable
streams.
G
There
are
some
questions
so
the
drafts
by
themselves
they
define
protocols,
but
there
are
some
semantics:
that's
attached
to
the
protocols
which
are
not
part
of
the
protocol
and
in
http,
as
this
is
a
lot
what
I,
what
we
would
call
fetch
level
concerns
that
is
usually
there
somewhere,
either
in
what
we
fetch
stack
or
in
many
cases,
are
actually
completely
undefined
by
standards.
So
the
most
important
one
is:
what
is
the
url
uri
scheme
that
represents
the
web
transport
resource
and
for
quick
transport
is
almost
definitely
quick
transport
for
http
transport.
G
We
could
either
define
a
new
scheme
or
use
http
scheme
that
would
have
some
implications
on
how
it
web
transport
interacts
with
the
origin
model,
and
then
there
are
concerns
like
if,
regardless
of
whether
we
go
with
quake
transport
or
http
transport,
do
we
do
things
like
send
cookies
and
other
things
like
http
off
and
there,
as
I
mentioned
before,
there's
a
big
question
of
how
this
works
without
service
and
socket
bills.
G
Next
slide
so
like,
as
I
said,
we
currently
from
the
discussions
that
happened
in
mailing
lists.
So
far
as
it
looks
like
there
are
people
who
are
either
inclined
to
only
ship,
http
or
only
ship
quick,
and
I
haven't
heard
that
much
enthusiasm
for
shipping
both
that
I
have
that's,
because
previously
I
proposed
shipping
both
and
there
were
serious
objections
to
adding
more
complexity
for
what
percy
was
perceived
to
be
insufficient
reasons.
G
But
I
still
believe
we
are
not
quite
at
the
point
where
we
can
come
to
this
attach
to
conclusion,
mostly
because
I
want
a
lot
of
people
who
chimed
in
so
far
have
been
people
who
are
either
either
browser
vendors
or
vendors
of
major
server
software.
But
there
is
in
the
case
of
web
transport,
one
of
the
main
target
audiences,
smaller
independent
web
developers,
and
I
want
to
hear
from
zam
or
even
from
bigger
web
developers,
who
typically
do
not
appear
what
they
think
about
this.
G
B
We're really interested in folks who have opinions on which protocols we want to build, because I think reaching consensus on this as a working group will help us narrow that down, and that way we can start moving and making progress on these documents. Philipp, go ahead.
O
Hello
hi,
philip
diesel.
When
seeing
the
presentation,
I
asked
myself
diff
study
different
questions,
but
whether
the
transports
are
always
sought
end
to
end
or
whether,
for
example,
a
load
balancer
proxy
might
convert
between
different
web
transport
implementations.
So
you
have
an
http
based
transmitter
to
the
load
balancer.
I
We have more flexibility to layer on things like various header-extension functionality and authentication and interactions with the web model if we go with the h2/h3 combination, whereas if we go with just the native QUIC and native TCP side, then we're going to have to respecify a new layer of things like that. So it seems like just going with the h2/h3 side, and then perhaps, as has been suggested in Jabber, having a profiled-down version so people don't have to implement the whole thing, might be a preferable approach.
J
Thank you, yeah. I just wanted to surface that idea: I wonder how complicated it would be to implement the QuicTransport semantics, so one transport per QUIC connection and, you know, basically the very simple semantics, in the HTTP/3 wire format.
G
The
answer
is,
it
should
not
be
that
hard.
As
I
said,
that's
one
of
the
motivations
behind
the
draft
to
disable
header
compression,
because
it
should
look
very
much
like
that
after
you
disable
header
compression,
because
if
we
sorry
like
like
there's
like
still
like,
there
are
still
a
lot
of
other
questions
we
would
have
to
address.
But
that's
I
guess
the
main
conclusion
I
came
to
after
thinking
about
this.
J
Yeah,
because
I
think
we
should
we
should
make
it
possible
to
participate
as
either
endpoint
without
having
to
have
a
full
http
3
implementation.
If
all
you
need
is
web
transport,
but
it
would
be
really
nice
if
I
can
make
I
can
write.
You
know
this.
This
limited
http,
3
transport,
only
implementation
of
a
client
or
server
and
speak
to
a
remote
endpoint.
That
actually
is
a
full
http
3
implementation
and
have
it
have
it
all
just
work.
C
That's
relatively
bare
bones
on
the
http
transport,
but
you
can't
be
sure
that
it
then
goes
into
an
environment
that
treats
it
any
differently
than
a
standard,
http
connection,
and
I
think
that's
that's
where
I
continue
to
believe
that
having
it
over
a
quick
transport
is
a
valid
choice
and
whether
people
are
willing
to
do
that.
Given
that
it
gives
you
two
to
do
rather
than
one
I
I
can't
say,
but
for
me
personally,
it's
the
ecosystem,
difference
between
the
bear
transport
and
the
hdp
ecosystem.
C
And I think, as we discuss this, we ought to keep in mind not just the client and server manufacturers, but the people who build load balancers and proxies, because while they do a great job when this is connected to a web application or a web service, I'm not sure that it's the right thing to do when it's not.
A
Okay,
thank
you
dad
just
maybe
a
little
bit
of
wrap
up.
I
think.
As
a
result
of
this
meeting,
we
have
a
number
of
things
which
we
need
to
focus
on
and
probably
we'll
initiate
several
discussions
on
the
mailing
list
and,
if
they're
not
converging
quickly,
I
think
maybe
an
interim
meeting
may
be
in
order.
B
I think that makes sense. We'll take a lot of these questions to the list, and we do encourage folks who are on here to follow the list and please answer there. I think it would be really nice to reach consensus on these before our next session, and as I was saying earlier, this might call for an interim meeting. So we'll also discuss this on the list to see what folks' thoughts are.