From YouTube: WEBRTC WG meeting June 2018 Day 2 part 1
B: ...a website, but a user might not trust what is underneath it, and that's where, if getUserMedia is called from a web conferencing provider iframe, maybe the stream should be isolated or protected, for instance. In which case there might be a need for the doctor's website to actually express: hey, if getUserMedia is actually called from the iframe, that's fine, but I also want the stream to be isolated. So that's the first thing that we might be able to do there.
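As an illustration of the isolation idea being discussed, here is a minimal sketch assuming the peerIdentity constraint from the media capture and WebRTC identity specs is the mechanism used; the identity provider name is a placeholder, not from the meeting:

```js
// Sketch: the iframe (or embedding page) requests a stream whose content is
// isolated from page JavaScript and bound to a peer identity.
// 'user@idp.example.com' is a placeholder identity.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
  peerIdentity: 'user@idp.example.com',
});
// Isolated tracks can be rendered and sent to the named peer, but their
// media content is not readable by the page.
console.log(stream.getVideoTracks()[0].isolated); // true on supporting browsers
```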
C: The DTLS case, right? I mean, well, you do get to see the DTLS fingerprints, and you can modify them, but you never get to see... you can never see the traffic keys if you're a web browser.
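A short sketch of what that means in practice, using only standard API: the fingerprint is visible in the SDP that JavaScript handles, while the derived DTLS-SRTP traffic keys are never exposed:

```js
const pc = new RTCPeerConnection();
pc.addTransceiver('audio');
const offer = await pc.createOffer();
// The fingerprint lines are plainly visible (and could be rewritten)...
const fingerprints = offer.sdp
  .split('\r\n')
  .filter((line) => line.startsWith('a=fingerprint:'));
console.log(fingerprints);
// ...but no API exists to read the negotiated SRTP traffic keys.
```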
B: We're trying to see the trust models and what needs to be... well, what do we actually want to make the least-trust models? So, okay, that's pretty clear. So now, if the web app is fully untrusted, that's the full general solution, so the audio, video, and content should be isolated. So when you do the getUserMedia call, it should be isolated from the start, which means that currently you might need some UI for a user to express: oh, this is this website.
B: Maybe the website can express that it wants isolated streams, but you need to pass that information on to the user. So it's a difficult topic; we do not have anything there, but maybe you can envision things. I also want to mention that if the streams are isolated, then you cannot do anything with them anymore, but maybe that's fine; of course, web apps cannot get access to the keys, again.
B: Key identifiers may be fine, but then you need a way to hook this untrusted web app up with some IdP, with identity providers, or to have something like an EME model, so that you can go from key identifiers to key values. But this mapping should be out of the web app's control, so it should live in the browsers that implement it. So there would be only two things to do there. So.
B: So these are the three models we've identified so far, and, as a conclusion: if we look at the WebRTC API level that we're refining, the changes are pretty narrow, and it seems that it's not the most difficult part. We might be able to cover some cases with partial trust, and maybe there are some use cases that will be interesting enough to trigger interest in terms of implementation and momentum. If there's no trust at all, I think we need something like IdP and to piggyback on it.
B: So probably we would need to work on IdP, ship it, and then improve on it to actually add end-to-end encryption, and yeah. Also, delegating crypto to the web app is not always possible, depending on the trust model; it's only in the one trust model where the web app is fully trusted that we could envision that scenario.
I: ...from one participant to another, because it is encrypted, and for some moment in time it is decrypted in the SFU. So even if you trust the SFU code you deploy at a cloud provider, you may not trust the cloud provider: it may be co-located with other people's machines that have access to the memory of the others. So deploying everything outside of the bank is, as I'm saying, not allowed; so even if you did implement that honestly, you deploy it in a cloud environment.
L: I mean, that one, I think, is a little bit... I don't get it, because if we talk about an end-to-end encrypted scenario and you do deploy in the cloud, theoretically, if you do it right, the SFU doesn't get access to any of the media; it will only be able to do some kind of denial of service, like dropping packets and stuff like that, right? But I think that is out of the scope of what we're discussing; I mean, you cannot do anything to prevent denial of service in the middle. What we want is to ensure that the privacy of the content that is being transferred is not compromised: that the hop in the middle cannot hijack it, and that the content cannot be either changed or even recorded. So that's the...
D: ...but I want to be able to do that as a web developer, yeah. So this is, again, the area where the bank is serving the JavaScript and they're using someone like Symphony as their provider. They don't want to trust them, so they're able to serve their own keys; they just don't want those keys to be accessible to Symphony, which is running the SFU, or whatever you call it.
K: It's end-to-end encryption, you are right. No, in that case, all the mechanism about securely providing the key to all the conference endpoints, as in PERC, is not useful, since you already trust the web app. So you could have a simple JavaScript API to pass the key, which is not something that PERC was made to allow in the first place, unless I'm mistaken.
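A minimal sketch of the kind of "simple JavaScript API" being floated; this is purely hypothetical (no such method exists on RTCRtpSender, and the key source is a placeholder). The point is only that the trusted page supplies the media key directly, so the SFU never sees content:

```js
// bankProvidedKeyBytes is a placeholder for key material that the bank's
// own (trusted) JavaScript obtained out of band.
const key = await crypto.subtle.importKey(
  'raw', bankProvidedKeyBytes, 'AES-GCM', false, ['encrypt', 'decrypt']);

// Hypothetical method: layer an end-to-end key on top of the hop-by-hop
// DTLS-SRTP encryption. Not a real API; this is what is being debated.
sender.setEndToEndEncryptionKey(key);
```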
C: The other pieces of PERC... I mean, okay, so the intent of PERC is to... It means going back to the basic design principle of how this entire thing was designed, which is to avoid exposing the key material to the web application, and the purpose of PERC relies on that: the keys are provided and revoked outside the application. So, since the base design principle was to not expose the keys to the application, just exposing the key to the application seems unattractive. Well, so.
D: So the scenario might be, say: you're using Hangouts or some service, and you don't trust that service. You want to completely own the keys as the enterprise provider, right, in which case you don't want the JavaScript to get access to the keys, because it's part of the service: you don't trust the service, so you don't trust the JavaScript, right. So that's, I think, the PERC scenario; it's the general case where the web app is untrusted, yes.
C: Isn't that problem solved... yeah, I guess, but my point is: even ignoring the case of, you know, isolated streams, the basic design principle was never to expose the traffic keys to the application, and that's why we did not do it this way, right? And so I understand what the... I'm not super comfortable with designs that compromise that invariant.
C: Okay, maybe I'm really missing something, because the underlying logic by which we get double encryption is: you say, well, what I really want is for everybody in the conference to use the same key, but DTLS doesn't provide that to you; all it provides is, you know, separate keys for each pairwise connection, and so we need to synchronize those keys, and that's how you get into PERC in the first place. So I...
D: I think our time is basically up, but let me make a proposal for next steps. I think we've at least gotten to the point where we understand what we're arguing about, which we didn't understand before, and we have these three cases, which we could conceivably write up and then post to the working group for further discussion. So I think we've made a significant step forward, in that we now have enough to even have a discussion, which we didn't have before.
D: So, yeah, we're trying to get to the point where we understand even what the problem is. I don't think we necessarily know that we have a JavaScript API for any of these; we're just trying to first understand the use case (in what way, we know we don't, right). So does that make sense as next steps? Which is to try to write this up... well, we're certainly going to post the slides, because that's what we do, but to write this up in enough of a form to post it to the list.
F: Currently, we don't allow all those combinations: you want to send audio and video... Apart from what we've talked about, where you can capture media and send it over a data channel, the nice paths that we support are: you can only send audio and video over RTP, and data over SCTP, and potentially data over QUIC, if you take the QUIC extension spec or the ORTC version of it. But it would be nice if we could have audio and video and data all over the same transport.
F: This has come up in a number of the use cases that we talked about yesterday, so it's kind of a repetition there: the game one, the VR one, the conference sharing thing, remote control. So it would be nice if we can fill in all the boxes, and there are ways we could do it. So, for example, if we split out the encoder from the RTP sender, we could have media go over SCTP.
F: If we made a lower-level RTP transport, we could have data go over RTP, and if we added the QUIC transport, along with the encoder being split out, you could send media over QUIC, and then the application could choose which protocol it wanted to use for which type of content for its use case. So maybe there's a server endpoint that wants to use QUIC.
F: I don't think it'll ever be the case that the app can implement the congestion control, but QUIC does allow for pluggable congestion control, so you can definitely imagine an API where you could pick between a certain set. Right now the QUIC implementation in Chrome supports CUBIC and BBR, but we've been looking into adding the WebRTC congestion controller, and there are also just a...
C: Taking a step back here: let's take SCTP off the table for a second. Maybe it's useful to think in sequence, like... if you wanted to do this right, you know, we would design an actual transport, and RTP is not a transport; it's just, like, packets, I mean, but...
B: ...people or a decision or so. Just to clarify a little bit: I think EKR is right in saying that, in a browser, on the rendering side, there might be an issue; but maybe the games you're talking about are native games using WebRTC and not actually browsers, so the rendering in that case is handled natively, and in that case you can do synchronization between this data and the audio and video, and...
D: RFC 4103; it's already defined, that's what everyone's using, it's already an existing RFC, RFC 4103. We're done, so, no, like, no standardization work needed. Okay, what we don't get... what you're sending is T.140 text, and we don't... that can be done in a JavaScript library, but it's the format in RFC 4103.
E: Can we step back a little bit and see if I can summarize the motivation here? The motivation here is that it's proven difficult to get people to adopt the SCTP data channel. It's not that we believe that the SCTP data channel genuinely can't do any of this, except maybe synchronization; it's that we've found it difficult to get people to adopt it.
I: I don't think that what you are going to do is provide data over RTP. I think that what you're going to do is provide an interface that allows sending raw RTP, and the applications can hack that API to send random data as if it was audio or video. But you don't want to send data over RTP, right.
I: I would say that it is a side issue, or a hack, of the idea that is being proposed. I mean, I think that the rationale is not to provide data over RTP; it's that a side issue of the idea being proposed, to send raw RTP, is that web apps may use it, or hack it, to pretend to send some data as if it was audio or video. So.
B: If we can look at pros and cons of this approach: to me it seems like the biggest con is that, in terms of the web browser, there's no processing model for this synchronized data yet. So if it were available, I guess it might be a no-brainer, but it's... a con. I think there might be some pros, for instance: if we go to NV and we remove the SCTP dependency for WebRTC NV, that might be easier for some people; there's also the congestion control, that might be another pro.
V: And I just wanted to double back: the main reason why we're not able to do what Tim's asking for is that the timestamps that go into the RTP packets are not the same ones that go into the data channel, so the receiving side has no metadata to synchronize these two flows. You could indeed do some more work there, to put in the 64-bit RTP header extensions that would allow for the synchronization, so...
W: Okay, but that wasn't what we were talking about there, though. We're talking about: if you have a data channel message that has an RTP timestamp when it gets sent, okay, and then the SFU just simply distributes it; it doesn't care what the data is, it just simply sends it to everyone, and when the receiver gets it, it can just compare that...
I: I think there are two different things. I mean, we have a use case where we want to allow sending metadata synchronized with the audio and video; that is one thing. And I think that the main reason people are sending data over RTP is that RTP and SCTP have different queues and different latencies; I mean, they have different congestion controls, and there might be a big desynchronization between the data you send over the channels and over...
V: So, just going back to what Randall said about synchronizing RTP: it's not that simple, because each SSRC has its own RTP timestamps, in its own RTP timestamp space. How you synchronize them is usually through an RTCP sender report, which says that the RTP timestamps of these streams correspond to this clock; without sender reports, this is not possible today.
W: Basically, if you create a data channel that's a timestamped data channel, you would simply have each data message carry a timestamp at the front, and then the receiving side would do whatever appropriate calculations are necessary to tell you what media time, for the receiving media, it's equivalent to. Because on the receiving side you're not receiving it as RTP either; I mean, no RTP timestamps either. You...
W: No, you wouldn't provide the raw timestamp. When you... yeah, in the onmessage callback, it would provide you the relative time to the media streams, to the media streams you've decided to synchronize it to, because even with RTP you have to decide which streams you're synchronizing it to. So it would be delivered as: this message was sent at a point equivalent to this timestamp in the media...
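A sketch of the idea being discussed, not of any standardized API: prefix each data channel message with a media timestamp so the receiving side can align it against the streams it has chosen to synchronize with. The framing is illustrative:

```js
// Sender: 8-byte timestamp header followed by the payload.
function sendTimestamped(channel, payloadBuf, mediaTimeMs) {
  const msg = new ArrayBuffer(8 + payloadBuf.byteLength);
  new DataView(msg).setFloat64(0, mediaTimeMs);
  new Uint8Array(msg, 8).set(new Uint8Array(payloadBuf));
  channel.send(msg);
}

// Receiver: recover the timestamp, then schedule the payload against the
// media clock (the part the browser gives no guarantees about, see below).
channel.binaryType = 'arraybuffer';
channel.onmessage = ({ data }) => {
  const mediaTimeMs = new DataView(data).getFloat64(0);
  const payload = data.slice(8);
  // application-specific scheduling goes here
};
```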
B: Until we solve the issue of a synchronized rendering pipeline and a synchronized sending pipeline, sending a timestamp is not very useful. You have an event, you send it, and you schedule it, and at some point we render it, which will maybe be the point you decode the frame with the same timestamp, and then at some point the JavaScript will actually execute the callback. It's not guaranteed when that will happen; there's no guarantee there. So for pure web browsers, the timestamps are not useful, which does not mean they couldn't be somewhere at the end.
K: ...QUIC was only making sense as an adapter kind of thing for existing solutions; sorry, adapter, yeah, like existing solutions that create RTP would then be able to go over QUIC, but other than that there was not a real use case they were pushing through. That was my understanding; did I miss something? So, globally, there is not a real super big one, and I...
D: So, also to be clear: for the top-left box and the top-right box, the QUIC audio/video grid, you also have the option of going to the IETF and defining what audio/video over QUIC is, if you don't like RTP. But I guess what you're saying is something different, which is: you don't want to have a standard, but to allow people to define how they transport it themselves, yeah.
T: Shaho from Peer5, sorry. So I think, in our realm of this very difficult optimization problem, we need to have a modern congestion control and a few more use cases, so I think moving the transport to QUIC would be very beneficial, and we can obviously test it and put it behind a flag.
F: It depends on what your use case is. So if you're doing one of these games that's heavily built on RTP, but you want to add data, you probably want the RTP transport doing data. If you're interested in doing everything over QUIC, because, like Peer5, you want to use QUIC, then you would use QUIC for everything.
D: My understanding is that the interest in QUIC media is more from this broadcasting, like the sports kind of case, the low-latency broadcasting scenario; but that's not what I see from the games developers. The games developers, like Peter said, who want the data kind of distributed by the SFU alongside the audio and video, they're the ones who want the RTP transport. There are a lot of games developers today using SCTP, mostly in the unreliable mode, for kind of small messages.
D: The reason they're interested in QUIC is that it already has audio/video-friendly congestion control; they're looking at the source, they already see that in there, so it's like it already does everything they want to do. And they could go modify a usrsctp library, but they're games developers: they're not really interested in doing that; they don't even have the knowledge to do that.
V: Then I would also echo what Martin said, that QUIC is still not baked, and a lot of people who wanted to do media over QUIC wanted to see whether QUIC is worthy of doing media over it, and what changes need to be made to QUIC for it to be comfortable. RTP is now our transport, and it doesn't have, like, a congestion control that's been defined; there is the Google congestion control, and there are a few other proprietary ones that everyone has. So I...
D: ...for now, for doing media, it works. And so I think we're going to need to wrap this session, because we're already a couple of minutes over. But basically, where we're going to go from here: I guess we're going to create an RTP transport session; I'm not sure, there's a lot to figure out about where it goes. We have another session the next...
F: So the things I didn't cover that ought to be mentioned are: if we're going to allow SCTP data channels to work in NV without a peer connection, which I think is something people want (they want to be able to continue to do the bottom-middle box without SDP and a peer connection), we would need to have the transports from ORTC in WebRTC NV; for DTLS and SCTP they're already defined. I was going to go over some of the details here and, of course, QUIC and RTP.
F: Yesterday we were talking about use cases, and I brought up the fact that if we split the RTP sender into an encoder and a transport, or a sender and an encoder, then a lot of these use cases can be done, basically by allowing the application to do more: in particular, control the transport, make it go over QUIC, or do the RTP packetization. And we talked earlier today about the end-to-end encryption, with the untrusted use case and the trusted one.
F: Of these cases, the one where you trust the application can be accomplished just by letting the app choose the RTP packetization, because then it can do encryption on top of the hop-by-hop layer just by itself, in whatever form it wants to. But that requires the encoder to be split out of the transport, and there was a lot of discussion over whether that was worth the complexity introduced by having a separate encoder.
F: So what I wanted to point out here is: if we don't have the encoder separate from the transport, these are the use cases that I don't know how we can accomplish. The end-to-end encryption we maybe could, if we had some kind of RTCRtpSender "set key" method, but that would require having the entire thing baked into the browser, which maybe is the trade-off we want; but I think that allowing the app to do it would be a better option.
F: We can't do that over QUIC, at least unless the encoder is separated from the transport. There is maybe a way we can do this with worklets, which I'll get to at the end of the day, but it's a random idea I came up with yesterday, so I'm not sure it'll even work. So, in other words, I think we should split the encoder out from the RTP sender.
B: If we look at all of these: so, we could add end-to-end encryption, so let's put that out of the discussion. The jitter buffer, from what I understood, is mostly a knob that you want to tune, to add more or less jitter buffer; I think that can be done without splitting the encoder and the sender.
F: I'm saying, if we took the RTP sender that we have, which has an encoder built in, we pull out the encoding piece and provide an API for that which doesn't have RTP attached. Like, the encoding part of the RTP sender: you have an API for that, you stick in a track and you get out encoded frames. That would be enough for me to go to town, as was so eloquently put, but...
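A sketch of the "encoder split out of the RTP sender" shape being argued for. Every name here is hypothetical; nothing like this existed at the time, and the point is only the data flow (track in, encoded frames out, app-chosen transport):

```js
// Hypothetical: an encoder that takes a track and emits encoded frames.
const encoder = new VideoTrackEncoder(track, { codec: 'vp8' }); // not a real API
encoder.onencodedframe = (frame) => {
  // The app now owns the rest: its own end-to-end encryption, its own
  // packetization, and its choice of transport (RTP, data channel, QUIC).
  dataChannel.send(frame.data);
};
```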
I: ...aren't the same; so even the use cases for the different parameters... you can do different things with VP8, with VP9, and with H.264, so having a single API and wanting to wrap it in a sender-owned object, it doesn't work for everything. Most of the web developers who have tried to do anything else, like just trying to encode the video, or trying to control any parameter in the encoder...
I: It's not possible with the RTP sender. So, also, and I know I don't like the data-over-RTP things, but I think that a split of the encoder from the transport in the RTP sender is really needed, because we have only one use case that we can do at the moment: I mean, we can have one single encoding sent through one transport. You have two, three...
C: ...the hypothetical case where, like, this is all QUIC: I don't believe any QUIC stack lets you be like, hey, prioritize this thing, push this entire video in its entirety over your channel. It's not how... actually, do ACKs even count against the congestion window on QUIC? Sorry... no, I don't think ACKs even count against congestion control on QUIC, yeah.
W: If what the request here is, at a very, very fine-grained level, is a way to control the contention between the data channels and media versus the HTTP etc. operations of the browser, that sort of thing could be mediated by the browser, by lowering the overall priority of one or the other, depending on what you're doing; I mean, basically having the browser reduce the amount of sharing it allows.
V: Actually, that might be one solution, but these are different five-tuples, and actually you don't have so much control. Apart from, like, the pace at which you're uploading or sharing with the other peers, you stop those data channels... you control it in your JavaScript by moderating yourself, because I don't think the stack would do anything, because, I think, it is fundamentally against net neutrality at that point: you're trying to change the behavior of something you don't have total control of, but...
T: And the last thing is stats: stats everywhere, and more fine-grained stats. For example, RTT and latency, of course, but also, if we can have the congestion-control window on the sender, or the scores, whatever, depending on the congestion-control algorithm; having a more fine-grained understanding of what's going on right now.
S: ...and talk to define it, so that two browsers who implement the same stat actually give the same number in the same situation; and the other one is that it is actually useful, so giving one scenario of one application where the application actually would change its behavior in response to measuring one of these. But then it's not a stat, right? Well, kind of. And, well, one point back to the aggressiveness against HTTP:
S: This is something that we should be a little bit careful about, because this is not the HTTP working group, and figuring out how different specs with different purposes in different web browser applications should behave against each other... that's a very good question, but it might not be this forum that is the right forum for it, and...
F: Okay, so in that case I'll move on. So I'm talking about QUIC. The QUIC transport, as currently defined, is a thin layer on top of the QUIC protocol that lets the app control how data or media would go on top of it; in the QUIC spec there are kind of two basic ones. (I'm sorry, just a point of clarification: what is the status of this document? It's an extension spec that has not been adopted; is that right? That's right.) Okay.
F: So... but there's a very clear one-to-one mapping, and if the app wants to do reliable, ordered data on top of that, like the current data channel, then it could simply add a message framing onto a single stream; or, if it wanted to have unreliable or unordered channels, it could do a mechanism of one stream per message. So both of these are very straightforward, it's like 10 lines of JavaScript or whatever; but if you just want to send, like, one file, you can do that as a single stream without doing any breaking up into messages. So.
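A minimal sketch of the "one stream per message" mechanism, written against the shape of the unadopted QUIC extension spec under discussion; the exact method names (createStream, write, finish) follow that draft and may not match anything shipped:

```js
// Unordered messaging: one QUIC stream per message, where finishing the
// stream marks the message boundary.
function sendMessage(quicTransport, bytes) {
  const stream = quicTransport.createStream();
  stream.write(bytes);  // draft API; buffering details omitted
  stream.finish();
}
```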
F: So the unreliability here... I guess, for unordered: yes, unordered-but-reliable is exactly the same. The unreliability can be achieved just by canceling a stream after a certain point in time. There isn't currently a way specified here to say "only retransmit so many times" or "retransmit for this much time"; we have talked about adding that, but it's not currently there. In order to send media over QUIC, you need to figure out a way to take... so this is what Brynn was asking about.
F: You need to figure out a way to take an encoded frame, the kind that would come out of an encoder, and stick that into some stream or streams. And to me, a really simple way of doing it is: take the encoded frame, serialize it into a byte array using whatever format you like (protocol buffers, CBOR, I don't know, JSON if you want), and then write that into a QUIC stream, and do that one frame per stream. We've actually made a prototype doing this and it works pretty well, so I...
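A sketch of that frame-per-stream approach; the serialization format and the QUIC API names are illustrative only, not taken from the prototype:

```js
// Serialize one encoded frame (length-prefixed JSON header + frame bytes)
// and send it on a fresh QUIC stream; closing the stream marks the frame.
function sendFrame(quicTransport, frame) {
  const header = new TextEncoder().encode(JSON.stringify({
    timestamp: frame.timestamp,   // illustrative metadata fields
    keyFrame: frame.keyFrame,
  }));
  const out = new Uint8Array(4 + header.length + frame.data.byteLength);
  new DataView(out.buffer).setUint32(0, header.length);
  out.set(header, 4);
  out.set(new Uint8Array(frame.data), 4 + header.length);
  const stream = quicTransport.createStream(); // one frame per stream
  stream.write(out);                           // draft API
  stream.finish();
}
```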
D: ...what's going to happen is: QUIC, as it's defined today, is a reliable transport, so it's going to take these ten frames and keep retransmitting them, and that isn't really what you want for that kind of media application. What you really want is something much... you basically want the options you have in a data channel today, which is, right...
D: Because, remember, the problem is, like, for the case that Randall just said, where the frame got split into ten packets: in theory, to get the exact same behavior as a data channel, I would need to keep opening and closing connections, because, if I don't want to have this retransmission occur on its own within the stream of these ten packets, I would need to essentially packetize it myself and open different streams.
D: With that... say I'm trying to transmit multiple streams; now I somehow have to understand and maintain the mapping between the connection IDs and the streams that I'm sending. Say, you know, there are two video streams coming in, or whatever, and I'm getting all these new QUIC connection IDs on each; so, for...
V: Just to say, there are many strategies that you could do here. The one that you've taken is probably the most common one that people would do, putting a frame onto a stream, but there are other reasons for people to put a whole group of pictures, all the associated frames, onto one... that...
V: And it may work. So my concern here is that we would probably need a few more knobs, because the reset/cancel might not be sufficient; you might need some more. It might be just that there needs to be more metadata or stats coming out of this to be able to make this workable, because if I send a whole group of pictures on a stream, given my behavior, then we come back to the question: I need more fine-grained control over what is happening on the QUIC stream, sure.
D: ...described, if you go back one slide, to the transport, yeah. So the key thing is there's this createStream method here; currently it has no argument, because there's only one thing QUIC can do, which is reliable, so there's nothing to put in there. But in theory, if you had all these other things that it did, like max retransmits, you'd have more or less the equivalent of whatever it's called, RTCDataChannel, on it: it would basically go into that createStream argument, and then it would say, okay, create a stream with these following characteristics.
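A hypothetical sketch of that future createStream options bag; none of these options exist in the draft today, they are the RTCDataChannel-style knobs being imagined:

```js
// Today: createStream() takes no options, since QUIC streams are reliable.
// Hypothetically, with future QUIC features:
const stream = quicTransport.createStream({
  ordered: false,     // hypothetical
  maxRetransmits: 0,  // hypothetical: fire-and-forget, like a lossy channel
});
```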
C: Is it reasonable to think about this as two... to unbundle the DTLS connections, then, and think about this as two connections, client and server? And, if so, does that have the same constraints as the DTLS situation, which is to say, your per-origin certificate blocks, or regular blocks?
B: So I think it's interesting as a proposal. The idea that somehow you would be able to replace WebSocket with something that will have more features, more control, is nice, especially if we are able to reuse an existing QUIC connection between the server and the client; so that sounds interesting. So there would be another topic, I guess, about what the API would be.
B: The API, that's another topic. At that level, what's interesting, the slippery slope there, is to use that for audio and video: it's putting a lot of complexity and a lot of potential issues on the web developer. So if the goal is to expose that precisely for audio and video, I think we will need to help web developers; they should not need that API level by default, they should have a more simple solution to exchange audio and video. They should not need to write JavaScript this way.
F: So there are two different roads here. One of them is that libraries can provide this, and there could be different experiments with how to map these things, because there might be different ways you want to do it: a live broadcast might choose to use these many streams, whereas a non-live broadcast might choose to use a single stream, something like that. The other option is that one of these mappings becomes...
B: We would need more than... I understand the big picture, but we would need more to decide. There are obvious concerns in terms of performance and in terms of complexity; and sure, you can build libraries, but they may get out of date, and there may be things that you would have been able to optimize if it was in the browser... but, well, since it's not done this way, you're screwed. So... but yeah.
Y: I just want to say it's very, very simple and very powerful. It gives you a lot of control: like, you want timestamps, then add a timestamp; you want subtitles, just add subtitles; you want this to be synchronized with some event in a game, just add that. It gives you all the power, and if you want a more simple thing on top of this, it's very easy to shim it and, like, create something in JavaScript.
Y: The question would be, just in terms of the unreliable thing: okay, you can create the stream and then cancel it, right? Even if you're creating a million streams, it's so simple; there's no handshake, you just... here's an ID. So the fact that there are different streams on paper... it's very easy to make sure that they are the same stream from the application's perspective. But when you send something and then you cancel it, does that mean that the other endpoint will always send an acknowledgment back, or...?
C: This is back to the very beginning: if the desired target is... if what's necessary is to transport on an unreliable, real-time transport, then those things ought to be part of, like, first-class things about QUIC, not things we half-bake in by screwing around with the stream-cancellation interface. So, yes, there...
F: There are two proposals that I'm aware of inside the QUIC working group that would be useful here, and, as you say, those parts are certainly not baked; but in a future where they do get baked, it would be very easy to add support for them here. So, for example, there's the proposal for having unreliable messages up to the size of an MTU, and that would be very easy to add.
F: A sendUnreliableMessage on the transport object here would be one method addition. There's also one for breaking up a stream into chunks and having the different chunks be unreliable, and so we could add support for that on the QUIC stream. But what we can provide an API for now is the more baked part of QUIC that's available now, and then, as time goes on and things become available, we can add them to the API.
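For concreteness, a one-line sketch of that possible future addition; this method exists in no spec and is contingent on the QUIC working group's unreliable-message proposal:

```js
// Hypothetical datagram-style send, bounded by the path MTU.
quicTransport.sendUnreliableMessage(bytes); // not a real API today
```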
D: The other thing that's missing, which I've heard an interest in for that unreliable case, is basically being able to use the data channel API, and there you need kind of an establishment protocol to be described that would run over it. You know, mostly I hear interest in it for the unreliable case, where you just use the existing message API; so that work would need to be done as well, on top of this, to enable that particular use case.
F: Now, okay: so if we say yes, then, you know, we can continue in future meetings to discuss the fine details; you know, we've got a lot of stuff, we talked about back pressure and buffering. If the answer's no, then, you know, that work will probably keep happening in the ORTC community group, and then just, like, incubate for many years, and then at some future day... yeah.
C: Well, so I think it should be clear: I think this is premature and should bake in the ORTC community group for quite some time; the work is more than APIs, and...
K: Can we divide that into three different questions? Who doesn't want it at all, ever? Who is okay to put it in now, even though the underlying implementation is not ready, taking a risk knowingly? And who is okay to not say no, but to say maybe, and revisit that depending on the QUIC update? What I'm afraid of is that we say no today, and then we cannot revisit that later.
B: But if we adopt it, then we might steal some time from...
E: So I'm inclined to say we adopt the use cases: there's a bunch of interesting use cases that have come out of this discussion, of things that we want to try and achieve, and we should be adopting those; and we should be adopting QUIC as a potential solution, but with no timescale on it, because what I'm hearing is that the people who would have to implement it are nervous about when that could happen. So I think we should... not oblige anybody to implement, but we should be splitting out the... yeah.
S: ...not finished, okay, sorry. There is another part, which is the unreliable stuff, which is needed for the media stuff, that actually requires consultation with the IETF work on QUIC, I think, to figure out when it's appropriate, and whether or not we should at all attempt to glom something onto existing functionality, or whether we should cooperate with QUIC; which is why we should listen and make sure our requirements are heard when QUIC adds the appropriate functionality. So that might be future work.
C: ...is not merely a matter of where the forum happens, because adoption implies endorsement by the working group. And so I'm perfectly happy to have this discussion continue to happen on this mailing list; that's a different question, since I can just follow it up with the different question of whether or not it's interesting to work on.
B: So it might be right, but one part of the API might be stable; at least the underlying QUIC features might be stable, and we might want to expose it. Well, if we do not have a good idea of what the actual end goal is, and what the actual feature set that we will have is, in particular for the unreliable use cases, then we might have an API that is not consistent, because we would have shipped something that goes well with reliable but not with unreliable, in the real world.
L: ...we start working on this: what is the problem we're actually trying to solve here? We're implementing the data channel over QUIC for what purpose? I totally get the part where, like, you want to do everything over QUIC, but I think that has a dependency on the IETF solving the unreliable sending over QUIC first. I think we...
AA: Harald was asking for an explanation of what it means to adopt a W3C draft; you don't have an "adoption" step in the process. What we could do is to say we think this is work the working group is taking on, and if we do that, one signal we could send would be to publish a First Public Working Draft, with as many warnings and disclaimers as we want on it, yeah. And, I guess, that, basically, is the only signal that would match, I guess, this "adoption", I know.
Y: So what happens if we don't do that? I mean, it's very interesting and it intersects with a lot of things that we're talking about, so I think this is the right place to talk about this stuff, and I think there's value in splitting up the encoder/decoder from the transport. Then, whether a QUIC transport or an RTP transport or whatever transport is the best transport... there is value here, and I think we should discuss it more. Our...
B: I know that I want the working group to keep hearing about it and, at some point, to work on it; I think there are great use cases there, and potential. I don't want the work in the group to be distracted by that kind of subject, so it means that if there's an interim meeting, then, if it's in the scope, then we are still in time. So that's the kind of issue I might have there.
D: To be clear, that discussion currently is happening in GitHub, in the issues; there is an editors' meeting happening, and the issues get discussed there. These issues are not brought to the working group because it has not been adopted. So if you're not going to work on it, then you're not going to see the discussion of it, because right now it's not an item of the WebRTC working group; so that would be the first decision to make. The discussion is occurring, but not here, but...
O: I'll mention another concern, maybe, which is: I have a concern that development on the SCTP-based data channel stalls completely, because even the current specification is not fully implemented yet. So what I'm saying is: I don't want QUIC to be an excuse to not implement the SCTP-based data channels. I...
F: ...think that's totally valid. You know, earlier, one of the slides I had was to say: okay, we should adopt an SCTP transport in NV so you can set it up without a PeerConnection; I think that is something we should support, and I think we should continue to have time, like we have later today, to improve on the SCTP data channels. So, yes.
F: ...busy. So another thing to point out is, like, this API is not that complicated; it's very simple, other than the buffering, and I'll get to that. And I don't think we're going to be spending a whole lot of time talking about it; it's not going to be huge... I don't think it's going to be a big time sink; it hasn't been in the work in the ORTC community group. Look how...
Z: I mean, this discussion sort of dovetails with... I was finishing one point... no, I mean, to address it as a different point: is there any appetite for, I guess, absorbing QUIC into the existing data channels in some way, more within PeerConnection? Or, I guess, have we all decided that we're not going to do PeerConnection anymore, I mean, at all?
D: So, just to the point of how it would affect existing work: my understanding is that the people working on this, for example Southampton, are not doing anything else in the WebRTC working group, and so they're not distracted; they're essentially just focused on the QUIC stuff, and...
B: We're not yet ready, but yeah, and maybe we can... totally... wow, I...
D: The problem is, if you have to wait till the unreliable stuff gets done: as I understand it, that requires a recharter of the QUIC working group, right, because it's not in the current charter; and QUIC won't be done until December on the current schedule, so you'd probably be talking about, like, December 2019. By then, this will be out in the wild and shipped, so, you know, adopting it at that point and, like, starting to change existing shipping code... it's just not going to happen.
B: And isn't that the issue, actually, that we're feeling: that either way it will be shipped? And maybe it will be good, because everybody will have a very clear picture of which QUIC unreliable features will be available, and the API will be correctly designed; or it will have been shipped, and we will discover that the API is wrong, and then we will need to, like with PeerConnection, work on fixing it, and have a long tail of things, and people unhappy about the API changes, and so on.
K: Disclaimer: I'm not a browser vendor, right, so I'm not committing myself to a lot there, so take it with a grain of salt. I got the feeling that there is a specific, small use case, on the reliable case, with actual consumers for it: everybody that does data-channel offload of CDN streams (Streamroot, Peer5, CDNs and so on). So, people that are ready to test that today, with the technology that is possible today.
K: As a WebRTC working group, is that something we want to deliver, if they want to port what they have done? Do we want to see, specifically for that use case, specifically the support of the peer-assisted, torrent-style distribution, the Peer5s, and all these people that show up today saying that this is important to them?
B: I'm a bit surprised by this "WebRTC NV", because I thought WebRTC NV was something that would come when we were building it properly, with a stable base. And it's not... what about NV? Because maybe we will see this part shipped before we finish WebRTC 1.0, and that's kind of interesting to hear about, like, one year from now.
B: But the point is: we can work on it, that's fine, but, you know, a year from now this thing will have been shipped, and we will change... we might want to change the API just because of the unreliable use case, right? Not...
S: ...earlier, a CG draft that you can reference. And the people who approve look more at whether this will be supported in other browsers; then they look at what stage of the standardization process the draft is at. They don't care very much about shipping... API stats are not publicly documented. So the point is building the web platform, but waiting until the standards process has gotten all the t's crossed is not part of the release process. Yeah, I guess what I was trying...
D: So, what exactly is up here now? Is it scalable video coding? Okay, so the slides are slightly out of order, but hopefully we'll find it in here. Okay, all right. So, do you want to come up, and I can present this part? Basically, these are some slides that Sergio put together and talked about two meetings ago, something like that, if it was at TPAC; I don't remember. TPAC, okay; seems like only yesterday. So, and I guess Sergio...
D: So basically, the simulcast and SVC use cases are pretty similar: when you tend to want simulcast, you also tend to want scalable video coding. Typically, today, it's temporal scalability that's used, although VP9 and AV1 have spatial scalability capabilities as well; so there's interest in that.
D: We've had some comments that, in the current WebRTC 1.0 spec, some of these trade-offs are not entirely clear. For example, we have something called degradationPreference, and it isn't entirely clear how it affects simulcast operation or SVC; we've had some comments on priority as well. But the basic idea is: you've got a bunch of different clients in the same conference, and you're trying to adapt to the capabilities of each of them, and they may have different bandwidths.
D: They may have different screen sizes or frames-per-second capability, and SVC can help enable that. Another use case described here is, I would say, differential robustness against packet losses, where you apply different protection levels to each SVC layer. As an example, you might decide to retransmit, or to forward-error-correct, only the base layer, and leave the other layers to be lost; because, as long as you have the base layer, you can continue to send at least at some basic level of quality.
D: So Sergio looked at some of the needs of application developers. I guess the first thing to say is that there's a need to be able to enable and disable temporal scalability in particular; that is something that is extremely widely used today. It's a flag in Chrome, x-google-flag:conference; basically everyone who uses simulcast pretty much uses that flag, and it turns on temporal scalability as well. But it is not part of the WebRTC 1.0 standard, because that only covers simulcast.
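For reference, a sketch of the configuration being described, using the standard 1.0 simulcast API; the temporal-scalability part has no standard knob, which is exactly the gap under discussion:

```js
// Three spatial simulcast layers via standard sendEncodings.
const pc = new RTCPeerConnection();
pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    { rid: 'q', scaleResolutionDownBy: 4.0 },
    { rid: 'h', scaleResolutionDownBy: 2.0 },
    { rid: 'f' },
  ],
});
// Temporal layers: no standard parameter; Chrome turns them on via the
// non-standard SDP attribute 'a=x-google-flag:conference', munged in.
```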
D: So that's definitely something that's extremely widely used today. There's also, I think, interest in going beyond how simulcast is typically used today, for spatial differentiation; and so there's an interest in future codecs, like VP9 and AV1, having spatial scalability, largely as a substitute for spatial simulcast. And then part of that is the ability to set the maximum frame rate and bit rate; typically you'll set that, maybe, on each individual simulcast stream.
D: So... we're going to get to that in a bit, but I think, at least at a baseline level, we're talking about much the same kind of configuration that's available to simulcast today; that is, there's a scale-down-by parameter, you know. Basically, spatial simulcast is very similar to spatial scalability: it would be more or less the same kind of things that you would have for spatial simulcast. For temporal...
D: It's a little bit different, because there you're trying to vary the frame rate, and so, at least in ORTC, what was done is kind of having an equivalent to the scale-resolution-down-by: a scale-framerate-down-by. Although it's a little bit different, because in simulcast you don't have to scale the resolution by a factor of 2, but in SVC you sort of do; so it's a little bit different, in that you could put in anything you want there, but really there are only certain allowable values.
V: ...like situations where the same device... typically you downgrade towards a device by halving the resolution, but there are also situations, because we have conferences with large numbers of people, where there are devices with different parameters, right? So you have some iPhones and iPads, and sometimes, when you want to do SVC, you want to have, not necessarily three, but maybe five layers, so that you can just, like, take the first three and give them to the iPad.
D: In temporal, it's kind of a hierarchical configuration, where, you know, four depends on three depends on two depends on one. But in terms of, you know, Sergio's survey: so I followed up with a bunch of the developers, and what I found was that the most typical configuration, for VP8 at least, was three simulcast layers and three temporal layers, which is more or less the default that you get in Chrome with the x-google conference flag.
D: I've also seen some developers using two and two, but three and three people seem to like, because it covered, like, the vast majority of use cases. I didn't find a lot of developers using more than three by three, and they actually liked the simplicity of just turning on the flag and letting the browser do something; most of them didn't ever see the browser do anything objectionable, and they didn't really care to... Like, I asked people, you know...
D: Do you want to configure the algorithm? Like, as I understand it, Chrome will drop the simulcast layer first, get down to the lower resolution, and then start dropping the frame rates. I asked: does anybody want something different? People said: yeah, that's fine, you know. And the question was specifically: does degradationPreference change that? And people said: I hadn't thought about that, but I don't care. So there's not necessarily a lot of need for configurability; people seem to live with this, you know, flag.
D: It seems to do everything they want most of the time, and they seem happy with it. So, but anyway, kind of going down the list of things: setting priority between encodings and layers; so maybe you could mark them differently, if you want to do that. Again, I think the list here would also be, for example, applying different protection: like having RTX occur on a base layer, and FEC on a base layer, but not on other stuff. So those are the kinds of things people do. So the other...
D: ...could, right: in the encoding parameters you can set... yeah, actually, in the encoding parameters there's a lot, for each of the individual simulcast layers: you can set a lot, you can set maxBitrate and maxFramerate. I'm not saying all of that stuff is actually engaged in implementations; I'm just saying it's there. But, for the purpose of developers, they might like to know: hey...
D: Part of the issue is that the spec just isn't well enough defined, saying what... Because, if you think about it, if your browser implemented this, there could be a million things you could be expected to do as a result of this API, and are you going to do all of them? I suspect not. So we should make it clear what's not going to be... what we don't care about, or what shouldn't happen, anyway.
D: The other thing that I've heard a complaint about is that browsers will, by nature, drop layers. So say the browser is sending simulcast but the bandwidth isn't there: it'll stop sending the top layer, the most high-resolution layer, and the SFU is sitting there and starts seeing a stream disappear, right; there are no more packets. That could happen just because there's horrendous packet loss, or it could be because it was an intentional drop. There's no event that would allow the browser to say, hey, I dropped this, so that you could signal to the SFU: hey, it's not here.
D: So very often people's code has to wait. And, you know, particularly, if you think about it, if you drop the highest layer, there may be some users downstream who are getting that layer, so now you have to do a stream switch. I'm sitting here; I have nothing to send that person. If I knew that it got dropped, I would probably send an FIR to say: hey, you know, give me the keyframe of the lower-res thing, so I can switch there. But I'm just sitting here.
D: I don't want the purpose of the presentation to be to argue about how you would fix this problem; it's just to say there are these problems, and to get consensus about what we want to fix. So the next question is: how should we deliver SVC support, assuming we care about it? It is widely used. It is important to understand that different codecs have different capabilities.
D: For example, VP8 has temporal scalability; H.264/AVC doesn't really have anything, because it's AVC; and then VP9 and AV1 have both temporal and spatial. So it might be nice to support all of those, if you're going to do it. So, in terms of, you know, the options we have: for WebRTC 1.0 it could be feasible, depending on the group's inclination, to have an extension to support temporal or spatial, and we have some idea of how that might work.
D: It's basically extending the encoding parameters to allow dependencies on previous encodings, to signal SVC layers; and then, in WebRTC NV, you know, we've been discussing all of these different ways of doing things, but, you know, it's also possible to do the same kind of things that you could do in 1.0. Oh, I guess the point here is that this potentially applies both to 1.0 and NV.
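A sketch of the ORTC-style extension being referred to: encodings gain explicit dependency references so temporal layers can be expressed. The dependencyEncodingIds field is from ORTC; WebRTC 1.0 has no such member, so treat the overall shape as illustrative:

```js
// Three temporal layers expressed as dependent encodings (ORTC-style).
const params = sender.getParameters();
params.encodings = [
  { encodingId: 't0' },                                      // base layer
  { encodingId: 't1', dependencyEncodingIds: ['t0'] },
  { encodingId: 't2', dependencyEncodingIds: ['t0', 't1'] },
];
await sender.setParameters(params);
```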
D: There are a lot of people doing temporal scalability today in WebRTC 1.0; there are new codecs coming out which will have new capabilities, and so it would be a little odd if AV1 came out and was shipped in WebRTC 1.0 browsers, and people couldn't use the spatial SVC that was in there, at least in an easy way.
D: We're going to get into the discussion of that; the dependency encoding IDs is one approach, and we're going to get into a discussion of all that, but the diff is fairly small... yeah, as they say, right. I think Harald said this: you know, it looked like simulcast was such an innocent thing at the time, and then, okay... nothing is ever as simple as it looks. But, yeah: in terms of parameters it's small, but in terms of headaches it might not be.
D: I'm going to go over that with the diffs, yeah. So we'll talk about some of these things. So the first slide is on WebRTC 1.0. We can get into as much detail as people want or need, but I think the main purpose of this is to try to get a general sense of direction from the group; that's, again, how high or low do you want to go. So, for WebRTC 1.0: you know, a basic way to extend the parameters was developed in ORTC; as I said, it's adding another encoding parameter.
D: The trickiest thing about this was not the parameter itself, but some of the capability issues that Harald has already brought up recently for simulcast. So, as an example: one thing we see in simulcast today is that simulcast may not be supported for all codecs on a given browser. So, for example, you know, some browsers may support simulcast for VP8 but not H.264; others may support it for H.264 but not VP8. So there's a basic capability question there.
D: It's also possible, similarly, that some browsers, for example, might support only temporal scalability for AV1, even though it supports spatial, right? So you can't assume that for any given codec. And then there'll be a question of how many layers: what's the maximum number of layers that you support for a given codec? So there's an issue of capability: discovering the capabilities, both in simulcast and SVC, of a given codec. And then, as Harald has noted, there are issues even in WebRTC 1.0 with simulcast as it exists today.
D: What if you exceed those capabilities, right? It said, "I can only do three simulcast layers", and you asked for five. And I think the general direction from Harald is that we should give you three and not create errors that make your life miserable; and probably, if that's the direction for simulcast, it probably more or less makes sense for SVC. And I would say, overall, the feedback I've gotten from developers is that, generally, simpler is better, and in fact most of them don't even want to deal with this at all.
D: They just want to tell the browser to do whatever it feels is best, and obviously the browser knows how many layers it supports for a given codec. So it's like: hey, do the right thing and leave me alone; as opposed to trying to guess what it can do, and, you know, having to do something else, and then having to retrieve parameters and figure out what it did. You know, just: hey, dude, go do this.
D: The people I talked to did not feel at all guilty about not understanding the details of codecs, and they were quite happy with the behavior they were seeing, and didn't complain about it. So, anyway, just something to think about. And then, generally, the way things probably would work within the existing WebRTC 1.0 framework is that you'd have some kind of getCapabilities.
D: We don't have that currently; you don't have max simulcast layers, and, depending on the behavior, you might or might not care about that. But it might be one thing to say, you know, how many temporal layers you can get; in ORTC it works this way: how many simulcast layers, how many temporal, how many spatial is the maximum. And then you'd have a setParameters that would set what you want, subject to, you know, some of the PR issues that Harald is working through.
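A sketch of that capability-discovery direction. The static getCapabilities() call is the standard one; the per-codec layer-count fields shown on it are hypothetical extensions, modelled after the ORTC shape being described:

```js
const caps = RTCRtpSender.getCapabilities('video');
for (const codec of caps.codecs) {
  console.log(
    codec.mimeType,
    codec.maxSimulcastLayers, // hypothetical extension field
    codec.maxTemporalLayers,  // hypothetical extension field
    codec.maxSpatialLayers,   // hypothetical extension field
  );
}
```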
D: A novice developer would start to do weird things. You know, we could kind of go along the lines Harald suggested and have it figure out what you wanted to do; but if you're already kind of trying to interpret what somebody is saying, you might as well just give them a reasonable default to begin with, right? They might do things like try to scale the frame rate by a factor of three instead of two, or, you know, choose a weird...
D
You
know:
we've
already
had
some
problems
by
the
way,
even
in
the
existing
spec
about
being
clear
about
what
order
the
encoding
czar
supposed
to
be.
In
so
that,
none
of
that
really
matters
if
you're
telling
somebody
to
do
the
right
thing
and
they
just
do
it
because
then
you're
not
getting
into
that
level
but
I
think
already
haven't
we
seen
different
browsers,
interpret
the
order
differently,
yeah,
so
so.
S
K
D
Yep, so those are some of the things to think about. And then there's the overall question, which we already have with simulcast: how much stuff can you put on an encoding, since this would be an encoding like simulcast is? How many of these different parameters do you allow to impact things? Some of them probably wouldn't make a lot of sense. For example, if you're doing temporal encoding, would you also put a max frame rate on there? Why would you do that, right? It's already got a frame rate scale.
D
So
for
WebRTC,
nd,
I
guess
based
on
some
of
the
feedback,
maybe
we're
not
going
to
do
the
video
encoder,
so
it's
more
or
less
the
same
set
of
problems
as
I've
just
outlined
for
one
Oh
with
you
know
some
differences.
For
example.
Maybe
you
have
set
parameters
and
get
parameters,
and
maybe
you
don't
it?
Does
the
nice
thing
about
having
set
parameters
and
get
parameters,
as
we
do
in
WebRTC
we
know.
Is
it
gives
you
the
possibility
of
not
throwing
errors?
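A sketch of that non-throwing flow, under the direction discussed above: ask for more encodings than the browser supports, then inspect what it actually kept. The silent trimming to three layers is the proposed behavior, not something the 1.0 spec guarantees.

```ts
// Ask for five simulcast encodings; a browser limited to three would,
// under the non-throwing model, simply keep three of them.
function requestFiveLayers(pc: RTCPeerConnection, track: MediaStreamTrack) {
  const transceiver = pc.addTransceiver(track, {
    sendEncodings: ['a', 'b', 'c', 'd', 'e'].map((rid) => ({ rid })),
  });
  // getParameters() reveals what the browser actually configured,
  // so no exception handling is needed to discover the outcome.
  const encodings = transceiver.sender.getParameters().encodings ?? [];
  console.log(`asked for 5 layers, got ${encodings.length}`);
}
```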
D
So
I
guess
the
the
overall
question
for
the
group
is,
you
know
I
guess
the
first
question
is:
do
we
feel
that
it's
useful
to
specify
get
into
this
whole
SVC
issue
and
support
it?
And
if
so,
is
this
something
we
should
do
as
an
extension
to
ever
c1o
or
is
it
something
purely
for
Andy?
So
I
guess
that's
the
main.
The
main
question
on
which
we'd,
like
people's
opinion,
so.
B
M
W
Can I interject here? Just because WebRTC 1.0 requires H.264 and VP8 does not mean that 1.0 does not support other codecs, like VP9, which is already supported widely, or AV1, which will be supported in WebRTC 1.0. So the fact that the required codecs don't have spatial scalability does not mean that 1.0 will not have codecs that support spatial scalability, yeah.
D
So, just to be clear, this would be an extension; it would not be in the WebRTC 1.0 spec. We're not adding features at this point, and in fact Harald has even suggested taking simulcast out and sticking it, maybe, in the same extension spec, because the same questions arise with SVC and simulcast, right? It's all clarifying exactly how this stuff works in the model and all that; it's all the same bunch of questions.
F
D
Because of the things we just described with set and get, it's probably a little clearer how to make it work with peer connection. And with NV, if you're going to try to do "what I wanted" rather than "what I said": in 1.0 you have getParameters, which could actually tell you what the browser did if it wasn't what you asked for, whereas in NV it's not really clear how you do that. So one proposal I would like to put out would be that there's an extension.
D
S
D
So let me ask this question: is there any opposition to trying to prepare a proposal along these lines and letting the working group look at it and see what they think? I don't think we're anywhere near close to deciding anything, but should we prepare a spec and let it percolate? Okay, I'm hearing general interest in that. So the next question is what.
S
D
One of the things we would obviously describe to the group is what we're proposing to take out, I mean, what's in that spec; presumably that's what Harald will be working through, and we'll add the SVC proposals in there. And my third question is, generally: what kind of SVC proposal
D
Would
the
group
like
as
I
described,
there's
a
lot
of
developers
who
just
want
to
basically
say
go,
do
the
right
thing
and
you
know
maybe
tell
me
a
little
bit
after
the
fact
what
you've
done,
but
we're
not
really
interested
in
specifying
this
SVC
thing
to
an
infinite
degree.
You
that
the
average
configuration
tends
to
handle
90%
of
the
usage.
D
Well, yeah, that's kind of dangerous, though, because somebody might not want SVC at all, or they're just trying to do a peer-to-peer connection and suddenly they get three SVC layers. So you need some indication, right: you want both the SVC and the default. Right, "I want SVC, but I don't know any more details beyond that; give me something". Yeah, yeah, so.
I
D
W
D
U
V
I wanted to say that, you know, when we don't provide any control, the problem is that most developers do run into issues where they suddenly realize that they need control, and this thing starts to escalate. Initially it's only a minority, and people say, well, let's ignore it, and then that minority starts to grow, and at that point they have no clue what to do. And as people that collect a lot of stats, we see these people reach out to us and say, hey, how would we control something like this?
V
D
V
D
B
F
R
D
But yeah, it has to be on the sender, I mean, for the encoder side. Well, you know, I think that what we've described here is kind of more 1.0-centric. I think that's the first step, because the NV model currently doesn't have setParameters and getParameters; as we work through this model, we might understand how useful that is, and it might be that NV needs to incorporate a get and set mechanism to handle some of the issues that come up. So I don't want to...
D
It
would
definitely
go
on
the
RTP
sender,
but
exactly
what
would
be
needed
to
handle
the
not
getting
exactly
what
you
asked
for
a
case.
It
still
would
remain
to
be
determined
for
envy,
but
there
is
a
discussion
occurring
there
in
the
model.
I
think
it's
tending
so
far
more
towards
the
error
kind
of
situation
than
it
has
in
whoever
see
one.
Oh,
because
there's
no
get
parameters
to
kind
of
determine
what
you
got
so.
D
S
D
So can we make Lennart the presenter, or do I have to be the slide turner?
O
Right, so hello everyone. This is about SCTP, data channels and streams, and that's what I want to talk about. SCTP has been the basis of data channels for quite a while now, and I want to look at things that didn't really work out and hopefully resolve them with evolutionary steps. I don't plan to do any radical stuff. Now, next slide, please.
O
So these are the use case requirements I want to check out. They are taken from the use case requirements presented yesterday, so let's go briefly over them: back pressure on both sides, which we can also call flow control, or it is also part of buffering; I'm not sure what the perfect terminology is, maybe we can settle on "back pressure". Then control over retransmissions, less noise on the wire, possibly a congestion control that works well with audio and video and SCTP as well, and reduced initial latencies.
O
That last one hasn't been brought up yesterday, but it has been brought up on the mailing list a while ago. Next slide, please. So let's go directly into the issue of back pressure and buffering. Sadly, Peter didn't talk about it today, but well, I hope this will be a little bit less controversial. We'll see, so yeah.
O
So if you want to handle buffering and back pressure, and if you want to prevent back pressure on the sender side, you have to set the bufferedAmountLowThreshold on the data channel to some carefully chosen low watermark value, and then you send messages until bufferedAmount reaches some other carefully chosen high watermark, at which point you pause sending. This is all done in the application, and you continue sending once the bufferedamountlow event fires. So this may be a little bit hard to get right, but it kind of works.
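A minimal sketch of that watermark pattern, using only the existing RTCDataChannel API; the two watermark values here are arbitrary example choices.

```ts
const LOW_WATERMARK = 256 * 1024;   // resume sending below this many bytes
const HIGH_WATERMARK = 1024 * 1024; // pause sending above this many bytes

function sendWithBackpressure(dc: RTCDataChannel, chunks: Uint8Array[]) {
  dc.bufferedAmountLowThreshold = LOW_WATERMARK;
  let next = 0;
  const pump = () => {
    // Queue messages until the user-agent buffer crosses the high watermark.
    while (next < chunks.length && dc.bufferedAmount < HIGH_WATERMARK) {
      dc.send(chunks[next++]);
    }
    // Resume once bufferedAmount drains below bufferedAmountLowThreshold.
    if (next < chunks.length) {
      dc.addEventListener('bufferedamountlow', pump, { once: true });
    }
  };
  pump();
}
```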
O
The thing is, this only works under certain circumstances. So yeah, next slide, please. Because this only works if you don't send large messages. Keep in mind that data channels are message oriented, so we can send large messages, and this is perfectly fine; there's a valid use case for that. The problem here is that if we send a large message, this will blow up the buffer, for example if we send a large file. Next slide, please.
O
So what does SCTP offer us to fix this? Well, again, SCTP is message oriented. What we don't want to do is break up a large file into small messages and then add another protocol on top to mark the end of the file. What we want to do is break up the large file into small chunks and then leverage the message-oriented principle of SCTP to mark the end of the file.
O
So how can we do that? On the sender side, SCTP offers an explicit EOR (end of record) mode, which allows us to send a large message in small chunks. On the receiver side, I sadly have bad news for you: we can do some kind of flow control there, we can pause receiving on the whole association, but that would mean it would have to happen on all data channels simultaneously, so that is pretty hard to get right API-wise, and I'll hopefully come back to that. Next slide, please.
O
This would follow the internal SCTP API, the C API, very closely. On the receiver side, we could add a new binary type, for example called "arraybuffer-chunked", and then hand out partial messages (these are the chunks) in the message event, and expose an attribute on the message event that marks the end of a message. Again, pausing receiving is tricky to do; this would be the quick and dirty way.
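A sketch of what that quick-and-dirty receiver could look like. Both the "arraybuffer-chunked" binary type and the endOfMessage attribute are hypothetical names from this proposal, not part of any spec.

```ts
// Reassemble one logical message from hypothetical partial-message events.
function receiveChunked(dc: RTCDataChannel, deliver: (message: Blob) => void) {
  dc.binaryType = 'arraybuffer-chunked' as BinaryType; // hypothetical value
  const parts: ArrayBuffer[] = [];
  dc.onmessage = (e: MessageEvent) => {
    parts.push(e.data as ArrayBuffer); // each event carries one chunk
    // endOfMessage is the hypothetical end-of-message marker on the event.
    if ((e as MessageEvent & { endOfMessage?: boolean }).endOfMessage) {
      deliver(new Blob(parts));
      parts.length = 0;
    }
  };
}
```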
O
Okay, so let me introduce you to WHATWG streams. This is an issue that has been brought up a while ago, I think a few months ago, on the issue tracker. On the sender side, we could add a new method, something like createWritableStream, and then you get a WritableStream instance back.
O
This is different to what we have today, because normally you would just call send and then it atomically sends. But now we get a WritableStream instance, and on this WritableStream instance we can create a writer, and every time we want to add a chunk to this message, we just call write on the associated writer. This is, by the way, the low-level API; there are some high-level components I will talk about later. And once the writer has been closed (you can do that with the close method on the writer),
O
On
the
writer
you
mark
the
end
of
the
message
and
then
it's
complete
on
the
receiver
side.
We
could
have
a
new
binary
type
called
stream
and
then
we
hand
out
readable
stream
instance
as
part
of
the
message
event,
but
difference
here
with
the
first
chunk
received
not
with
the
last
one
and
again
post
receiving
is
still
tricky,
but
the
difference
here
is
not
this
model
would
allow
us
to
to
add
if
we
have
a
way
to
do
on
a
channel
basis
to
do
the
flow
control.
We
can
add
this
later
without
any
API
changes.
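A sketch of the shape being proposed here. WritableStream and ReadableStream are the standard WHATWG streams types, but createWritableStream and the "stream" binary type are hypothetical names from this proposal.

```ts
// Hypothetical sender extension: one WritableStream per outgoing message.
type StreamingDataChannel = RTCDataChannel & {
  createWritableStream(): WritableStream<Uint8Array>; // hypothetical method
};

async function sendFileAsOneMessage(dc: StreamingDataChannel, file: File) {
  const writer = dc.createWritableStream().getWriter();
  const reader = file.stream().getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    await writer.write(value); // each write becomes one chunk on the wire
  }
  await writer.close(); // close() marks the end of the message
}

// Receiver: the message event fires on the FIRST chunk and hands out a
// ReadableStream that ends when the sender closes its writer.
function receiveStreamedMessages(dc: RTCDataChannel) {
  dc.binaryType = 'stream' as BinaryType; // hypothetical binary type
  dc.onmessage = async (e: MessageEvent) => {
    const reader = (e.data as ReadableStream<Uint8Array>).getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break; // sender closed the writer: message complete
      console.log('received chunk of', value.byteLength, 'bytes');
    }
  };
}
```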
O
Alright
next
slide,
please.
So
let's
break
this
up
a
little
bit
more
in
detail,
because
this
can
be
bit
hard
to
wrap
your
head
around.
So
essentially,
each
individual
message
is
a
stream
and
the
left
side.
We
see
the
sender
side
again,
we
create
a
writable
stream,
then
we
add
chunks
and
then
we
close
the
thing
and
then
the
message
is
complete.
O
On
the
receiver
side,
we
will
get
the
message
event
or
read
from
it,
and
then
it
is
smart
closed
at
some
point
and
the
message
is
complete,
of
course,
the
chunks
they
don't
have
to
be
of
the
same
size
on
both
sides.
So
this
is
up
to
the
implementation
and
then
this
whole
procedure
which
repeats
itself
for
every
message
on
the
channel.
The
thing
is
that
this
high
level
API
component
of
what
WG
streams
which
allows
us
to
if
we
have
more
components
using
it,
we
can
use
the
pipe
to
method.
O
B
Lennart, some comment there: yeah, I agree, but in the context of service workers, when you create a synthetic response, you can pass a ReadableStream to the constructor, and it's working fine, and then it can be optimized if the source of the stream is actually a native stream, so that you never go through JavaScript at all. Yes, exactly, and we do the same there; so a fetch response has a readable stream.
B
O
B
O
That is a perfectly valid point. I asked yesterday on the IRC channel of the WHATWG group; they said that they haven't done anything yet, but there is interest. So if the streams API gets implemented widely everywhere, which I hope, then I hope to see a file API, so we can write a stream directly to a file without having the whole thing in memory at any one time.
Z
F
I have a question. When I look at the WHATWG streams examples, it seems rather easy to implement a readable stream on top of other abstractions, and they have lots of examples of that. They even have a theoretical example of back-pressure sockets where, if there were a socket (which looks exactly like a data channel) that had a readStop method, then that could be a way for the receiver to apply back pressure.
F
B
It's a hypothetical example, so it would be fine, but I think the goal there is to have a readable stream, or whatever stream, with a native source, so that you do not go through the whole JavaScript limbo of promise resolution and everything, and so that when you do piping, you can pipe from a native source to a native consumer. Yes.
W
O
There is, of course, the optimization win, but there's also another thing: the example they brought up in the WHATWG streams spec is not message oriented. This is message oriented, we have a stream per message, whereas their example treats messages as chunks, which I don't want to do.
S
One
well
one
thing
in
this
say
this
suggests
the
API
which
I
like
is.
We
don't
have
any
way
means
of
the
sender
telling
the
recipient.
What
kind
of
message
is
coming
so
the
recipient
can
choose
to
either
receive
it
as
a
blob
or
receive
it
as
a
as
a
as
a
stream,
which
means
that
the
stream
is
that
the
sender
can
cause
all
the
problems
that
it
can
do
today.
Also
in
this
structure,
right
yeah.
O
You're
correct
about
that.
If
we
keep
this
the
current
API,
which
is
this
approach,
then
the
the
streams
is
just
an
extension.
So
if
I
want
to
receive
a
stream,
I
can
receive
in
the
stream,
but
if
I
don't
want,
you,
I
can
still
receive
it
as
a
blob
or
an
area
buffer.
But
of
course,
usually
you
control
both
sides.
So
you
know,
if
there's
someone
who
sends
a
lot
of
data
and
then
you
will
take
the
stream
approach.
O
If
the
streams
API
is
adopted
more
in
web
api's,
there
is
the
advantage
that
we
have
low
level
and
high
level
at
the
same
time.
So
we
can
do
this,
writing
individual
chunks
and
reading
individual
chunks,
but
we
can
also
use
pipe
2
or
we
can
also
use
transform
screams,
for
example,
unzipping
or
doing
double
crypto,
and
this
can
be
optimized
if
the
browser
provides
these
kinds
of
things.
There's
also
the
the
bring
your
bring
your
own
buffer
concept,
which
allows
us
to
well.
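A sketch of the kind of composition this enables, using only standard WHATWG streams pieces; the per-message ReadableStream input is assumed to come from a receiver like the hypothetical one above.

```ts
// A pass-through TransformStream that counts bytes; any transform
// (unzip, decrypt, ...) would occupy the same slot.
function byteCounter(): TransformStream<Uint8Array, Uint8Array> {
  let total = 0;
  return new TransformStream({
    transform(chunk, controller) {
      total += chunk.byteLength;
      controller.enqueue(chunk); // forward the chunk unchanged
    },
    flush() {
      console.log('message complete:', total, 'bytes');
    },
  });
}

// Usage with a per-message stream and any WritableStream sink:
declare const message: ReadableStream<Uint8Array>; // from the receiver sketch
declare const sink: WritableStream<Uint8Array>;    // e.g. a future file API
message.pipeThrough(byteCounter()).pipeTo(sink);
```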
O
So, the general questions: yes, should we do this, or no? Should this be in NV, or should we also backport it to 1.0? And something I would particularly like to see: can we possibly say goodbye to maxMessageSize?
W
So I'll jump in and be the first to try to provide some answers. I think we should do this; I generally like the API. I think it should be in NV. If we want to do it in 1.0, as part of, you know, wanting to finish off 1.0 and so forth,
W
I
think
we
should
consider
this
as
an
extension
spec,
not
a
C
part
of
the
100
original
spec,
but
that
I'm
open
to
discussion
on
that
and
as
far
as
math.max
message,
sighs
I
would
love
to
see
max
message.
Size
go
away,
and
this
actually
provides
largely
the
sort
of
large
transfer,
a
better
form
of
the
large
transfer
stuff
we
had
discussed
and
actually
implemented
in
some
cases
before
max
message.
W
S
O
W
B
It seems very reasonable. There's the possibility to have some consistency between different APIs: for instance, if we allow such constructs in peer connection, then you can pipe some info from WebRTC directly to a fetch response, for instance, or you could try to do some things like that. And if there's more adoption of the streams API elsewhere, then it's a good way to do things. There are some caveats, though: for instance, a writable stream can actually stream anything, like cows, pigs, potatoes, which might be problematic in some cases.
B
AA
F
I don't really have an opinion on number two. I don't really see any reason that it would be NV-specific; it doesn't seem particular to NV rather than data channels in general.
F
Z
O
B
D
Q
Z
AA
K
O
Another thing is that I would like to add a few more control knobs to data channels, so we could expose more ways to control data-channel-specific behavior that is driven by SCTP, for example the retransmission timeout values. These could be useful if you want to do fast retransmissions instead of the default that SCTP provides. So if you couple that with maxRetransmits, then this can be quite powerful.
O
Yes, and another thing that has been brought up, by Michael Tüxen, was that we could technically set the maximum amount of retransmissions until the SCTP association aborts. This might not be that useful, but who knows. And there's something else I wanted to bring up, which is that ideally all of these are not controlled by the RTCDataChannel parameters dictionary, because just recently we found out that the use of a dictionary is problematic: it fails silently.
O
So
just
to
give
you
an
example
just
very
recently,
if
you
try
to
create
a
data
channel-
and
you
said
max,
we
transmit
to
zero
and
Firefox.
This
has
been
silently
ignored
and
what
you
got
was
a
data
channel
that
was
completely
reliable,
which
is
the
complete
opposite.
You
wanted
to
have,
and
even
worth,
if
you
wanted
to
fix
that,
you
could
set
max
packet
lifetime
instead,
but
that
in
turn
has
been
ignored
by
Chrome
because
they
still
use
the
old
naming
for
that.
So
I
have
bad
news
for
you.
O
If
you
ever
wanted
to
use
an
unreliable
data
channel,
you
might
actually
have
ended
up
with
a
reliable
channel
instead.
So
something
that
we
could
do
is
instead
of
user
dictionary.
We
could
use
something
like
the
Builder
pattern
instead.
So
instead
of
saying
a
Maxima,
transmits,
0
and
some
label,
I
explicitly
call
some
methods
to
set
this
and
then
create
a
data
channel
at
the
very
end
and
because
they
are
methods
if
they
don't
exist,
this
will
fail
explicitly.
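A sketch of the two shapes being contrasted. createDataChannel with an options dictionary is the real 1.0 API; the builder interface and its method names are hypothetical illustrations.

```ts
// Today: a dictionary member an implementation ignores fails silently,
// which is how a maxRetransmits: 0 channel could end up fully reliable.
function today(pc: RTCPeerConnection): RTCDataChannel {
  return pc.createDataChannel('game-state', { ordered: false, maxRetransmits: 0 });
}

// Hypothetical builder: every knob is a method call, so a browser that
// lacks a knob throws a TypeError instead of silently ignoring it.
interface RTCDataChannelBuilder {
  unordered(): RTCDataChannelBuilder;
  maxRetransmits(n: number): RTCDataChannelBuilder;
  create(label: string): RTCDataChannel;
}
declare function dataChannelBuilder(pc: RTCPeerConnection): RTCDataChannelBuilder;

function proposed(pc: RTCPeerConnection): RTCDataChannel {
  return dataChannelBuilder(pc)
    .unordered()
    .maxRetransmits(0) // a missing method here fails loudly, not silently
    .create('game-state');
}
```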
D
Z
O
Z
You want backwards compatibility, and when browsers add new features, you don't necessarily want all your existing calls to fail. Let's say you add another option to fetch: old browsers are going to ignore that, and you usually want that. I mean, is it bad that you get a reliable data channel instead of an unreliable one, versus no channel at all? I would call that backwards compatibility, frankly.
Z
Don't
think
well,
Dom
folks,
I
talk
to
generally
don't
have
the
concept
of
feature
detection
effect
there.
Api's
I
mean
you,
you
want
the
most
IDL
API,
that's
most
ergonomic
and
idiomatic
for
JavaScript,
and
then,
if
you
want
feature
detection,
you
ask
well.
How
do
we
feature
detectors-
and
you
add
a
mechanism
for
that,
and
we
did
that
for
forget
using
media,
for
instance,
for
constraints.
Z
O
Z
People do all kinds of crazy things. For fetch, for instance, the way to check whether fetch is cancelable is to pass in an object whose getter just detects whether it was accessed, and it's quite ugly. And this is considered in the realm of feature detection; this is not uncommon. Yeah.
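A sketch of that getter trick: a dictionary member whose getter records whether the browser ever read it. The member probed here (signal on the Request init dictionary) is one way this has been done for fetch cancellation; treat the details as illustrative.

```ts
// Feature-detect fetch cancellation by observing whether the browser
// reads the `signal` member while converting the init dictionary.
let signalRead = false;
const probe: RequestInit = {
  get signal(): AbortSignal | undefined {
    signalRead = true; // only a browser that knows `signal` gets here
    return undefined;
  },
};
try {
  new Request('https://example.com/', probe);
} catch {
  // Ignore errors; we only care whether the getter fired.
}
console.log('fetch cancellation supported:', signalRead);
```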
O
W
O
So there's another area we can improve, which is making SCTP more suitable for our requirements. We already have message interleaving on data channels, which is now specified as part of RFC 8260, but it still requires some work in usrsctp, because pretty much all browsers use that. The question here would be: is it mandatory, or should we make it mandatory if it isn't?
O
S
D
O
W
AA
W
O
Of course, once we have DTLS 1.3, whenever that happens, we might actually have zero-RTT setup for SCTP, who knows. Okay, yeah. Then, last but not least, we could add more audio/video-friendly congestion control. This has been... yeah.
X
As has been said several times now, BBR can be added, at least to usrsctp. I have seen some simulations; you can expect a paper on it at some point. Yeah, this is just a comment; this is something that can be done in SCTP.
O
I would love that, great. So, which brings me to my last slide. I think... no, it's the second-to-last one, okay. So, especially for WebRTC NV, these are my thoughts: I don't think we need an additional low-level SCTP API or anything like that.
O
I
think
data
channels,
pretty
much
leverage
with
the
screens
extension.
They
leverage
pretty
much.
What
we
can
do
with
setp,
so
I
would
propose
to
just
copy
the
ortc
model
do
a
few
at
the
eye
tricks
here
and
there
if
they
need
it
and
yeah
go
with
that
and
possibly
the
last
one
that
that
might
be
a
little
bit.
Controversial
is
the
question
if
we
want
to
keep
this
abstraction
of
the
RTC
data
transport.
F
O
Okay, so next slide, please. These are just the questions from the last few slides. So: any thoughts on more control knobs, opinions on the extensions we could go for, how does the NV plan sound here, and, we always talked about that, whether we could build it better now.
W
So
on
the
control
knob
thing
there
I
think
three
control
knobs
you
were
discussing
I'm,
certainly
open
to
considering
more
control,
knobs
I,
don't
I
would
want
to
see
what
is
the
use
case
for
those
control,
not
those
specific
control
knobs
before
we
actually
expect
them,
and
you
know
what
it
would
be,
the
you
know,
plus
and
minus,
for
each
one,
I'm
I
think
having
more
control.
Knobs
in
scope
is
absolutely
it's
absolutely
valid
and
we
can
then
discuss
as
to
exactly
what
and
how
and
what
these
cases
for
each
individual.
S
E
S
E
W
You are usually building the message in JavaScript anyway, so it typically doesn't cost much to just put your message number in there. However, I'm fine with reopening that discussion. It's like you said: the protocol has it, so if we can just find a reasonable way to do it in a reasonably backward-compatible way, then we could absolutely. I certainly think it's in scope to discuss it.