From YouTube: WebRTC Working Group Virtual Interim March 13, 2019
Unified Plan, SFUs and SSRC signalling
B
So let's talk a little bit about what we're going to go over today. We have a bunch of WebRTC-PC issues, as I mentioned. The first three relate to simulcast, so hopefully this won't take an enormous amount of time, and this is, I guess, what most of the developers here are interested in: developing simulcast apps. We have two screen-sharing issues that Henrik will talk about, and then we have a bunch of Media Capture main issues that Youenn will go through. Most of these relate to privacy in some way.
B
I guess we're also talking about the overconstrained event; Henrik will talk about that. So that's basically the agenda for today. Hopefully we can get it all done in an hour and a half. So let's go into the WebRTC-PC issues. Three of these relate to simulcast, and we'll be talking about them. Okay, so issue 1174 relates to how SSRCs are surfaced in the WebRTC-PC API, and really, specifically, whether they are surfaced in the object model or in the SDP.
B
Originally, as of the June 5th, 2017 draft, SSRCs were included in the RTCRtpEncodingParameters dictionary, as well as in the RTX parameters and the RTP FEC parameters dictionaries. These were read-only SSRCs; they could not be set, they could only be read. And this is basically what was in the June 5th, 2017 document, as you can see.
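For context, this is roughly what the SDP-surfaced form of the same information looks like: a=ssrc and a=ssrc-group lines in a video media section (the numeric SSRC values below are made up for illustration). SIM groups the three simulcast SSRCs, and FID pairs a media SSRC with its RTX repair SSRC.

```
m=video 9 UDP/TLS/RTP/SAVPF 96 97
a=mid:1
a=ssrc-group:SIM 11 12 13
a=ssrc-group:FID 11 21
a=ssrc:11 cname:user@example
a=ssrc:12 cname:user@example
a=ssrc:13 cname:user@example
a=ssrc:21 cname:user@example
```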
B
So the reason that did not happen was that there was a meeting with Taylor Brandstetter of Google, and some of the conferencing developers were present at it, in which he proposed that it would be simpler for browsers to just continue putting the SSRCs into the SDP, keeping them in the SDP in Unified Plan, while the ecosystem transitions to mid- and rid-based demuxing. So that was Taylor's proposal.
B
B
So that's basically where we are, and we've invited a bunch of the conferencing-system developers here. I don't think we can introduce all of you, but I'll just name a few who are here, to kind of give their perspective. I think we have Emil of Jitsi Videobridge; I believe we have Siva and Christian of the Teams conferencing service; we have Anthony, are you here? Anthony of FreeSWITCH is here; and perhaps Lorenzo Miniero of Meetecho was going to come; and Iñaki Baz Castillo of mediasoup.
B
So, a bunch of the people writing these conferencing systems. If I missed anybody, you can speak up. So anyway, I wanted to take some opinions from the developers as to, looking at it from their perspective, what the impact on them is of not having the SSRCs, and maybe even whether you care whether it's in the SDP or in the object model.
C
D
B
B
E
E
I wouldn't go so far as to say that there are benefits, but if we're talking ideally: if we could get them in both places, great, fine. It's not a must-have to be in the object model; it's not a must-have to be specifically in the SDP. But it has to be somewhere, and if that's the SDP, that's the lowest-friction approach.
E
E
Changing your system to support the new style of signaling is very, very different, and usually, specifically in this case, that's just a client-side change. Changing it to support rid on top of that implies involving all sorts of components through the entire ecosystem, because it has to touch on the routing of RTP and all these things. So if we can go back to "let's put them in the SDP", that's definitely the lowest-friction approach there.
F
Iñaki here. I think that, well, SSRC lines in the SDP are, for me, not needed, because I think that the way to go is by using rid and signaling the rid to the server. And the server, in my opinion, must be ready to process the rid header extension, and then be able to handle a change of the source SSRC if the client eventually changes it for any reason. So once we have the rid, the SSRC is not useful for anything new anyway.
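A minimal sketch of what that server-side behavior could look like (this is an assumption for illustration, not code from any of the projects mentioned): the SFU binds each rid to whatever SSRC it last saw carrying that rid's header extension, and simply rebinds if the client's SSRC ever changes.

```javascript
// Illustrative sketch: an SFU-side binder that learns rid -> SSRC lazily
// from the RtpStreamId header extension, and re-binds on SSRC change.
function makeRidBinder() {
  const ssrcByRid = new Map();
  return {
    // packet: { ssrc, rid? } - rid parsed from the header extension, if present.
    observe(packet) {
      if (packet.rid !== undefined && ssrcByRid.get(packet.rid) !== packet.ssrc) {
        ssrcByRid.set(packet.rid, packet.ssrc); // bind, or re-bind on change
      }
    },
    ssrcFor(rid) { return ssrcByRid.get(rid); },
  };
}
```

With this, no a=ssrc lines are needed in the SDP at all; the wire itself is the source of truth.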
F
F
It's not a big problem, because I opened an issue in WebRTC about this, and I was told that, in the end, at least Chrome and libwebrtc don't use the CNAME to synchronize incoming audio and video from the same source; instead they use the MediaStream ID, which is something I don't agree with much, because there are more devices that may not rely on this web concept such as the MediaStream. But anyway, the problem is there, and it currently affects Chrome when using simulcast.
F
B
F
For a long time now, yes, and basically I am fine with it, and I find it very, very comfortable. I mean, it's work, of course, but it's just a matter of doing it, and after it was done, it was fine. And I see no special reason to have the SSRC signaled in the SDP or even in the object model. I don't need them. Okay.
G
We're a little bit behind on implementing. We did an MCU first, so for doing any kind of simulcast forwarding we're only a few months into it, so I'm a little bit indifferent about that. My main interest is getting that API available, so that we can, you know, do it without having to do some of the proposals the wrong way, since the right way is right around the corner. So I'm kind of here to figure out what the right answer is.
D
B
H
D
I
Yeah, to add to what others have said: in Janus right now we do have partial support for both mid and rid, and it's basically already kind of working. But I'm with Emil here in saying that having SSRCs in place is still the easiest way forward. I mean, assuming they are still going to be used on the wire anyway, being able to signal them would help; they even make conflict resolution easier.
I
If you know in advance what the SSRC values are, and so on and so forth, that would still be helpful. But in principle, as Iñaki said, you can still use mid and rid for that, and we have put code in place that already starts working in that direction, and it seems to be promising, even though, I mean, having the SSRCs would make things definitely much easier.
G
F
You read the mid and rid values and you are done. But when your peer connection is bidirectional, so you send and you also receive media, then that's more difficult for some servers, because that means that you must also set the mid in the RTP packets that the SFU sends to the clients over that peer connection. And maybe the original RTP source doesn't contain the mid RTP extension in its packets, so the SFU needs to add it to the packets, which is not an easy task.
F
B
F
Okay, no, the problem is this: imagine that you are streaming, and when I accept you, I answer with a very big SDP with many remote tracks. Each media section must have a different mid value, right? And if you need to also send video and audio on the same connection, then you need to negotiate mids for that as well. That also means that you need to do it properly on the receiver side.
I
Imagine a media section was negotiated with mid 1, and you're sending it to one that expects it to be mid 52 over there. So you're even just rewriting an existing extension, which may not be that easy, because you may need to actually expand that extension in place. So, I mean, it's doable, of course, but it's not exactly ideal.
I
Which is why, I mean, most SFUs, and Janus right now, because it's a fix that I made a couple of days ago, and Iñaki for some time already, basically right now only allow mid for incoming offers. So in this case, whenever somebody's publishing, that's fine; and for all receive-only peer connections, instead, the mid is disabled. So we accept mid for incoming streams, but for whatever we send outside...
I
We basically don't negotiate mid, just to avoid coping with that issue. But this is something that you can do whenever you are handling, let's say, peer connections in some kind of mono-directional way. If you're doing something in a bidirectional fashion, as Iñaki was saying, that becomes an issue.
I
This was an issue, for instance, in some use cases that we have in Janus, because, yes, an SFU is one of the use cases you can do, but there are other applications that use other kinds of plugins instead, which make use of the bidirectional feature, and which are affected by this kind of behavior. Yeah.
B
B
J
B
It was reopened because the understanding had been that the browsers would put the SSRCs into the SDP. That was how the issue was resolved, so that we didn't put it back into the object model. But the reason it's been reopened is that Chromium 74 does not include the SSRCs for simulcast. So, are you saying, Youenn, that Safari is intending to keep the SSRCs for simulcast?
H
Well, I believe Firefox is including them right now, yeah. But also, we've been vocal that, as a long-term solution, people should be able to do what they need solely with mid and rid. So I think we see it as a legacy feature right now, so it doesn't necessarily need to be in specs, perhaps; but maybe individual browsers could treat this as legacy for a while. I guess there are some concerns about whether people will ever transition away from SSRCs.
B
Some of those issues, though, like the performance, might be things that could cause developers to just keep the SSRCs anyway, if they can get higher forwarding rates or something; but the other one sounded more transient, like issues that could be gotten over with some more work. Any questions, or do any other browser vendors want to speak up?
B
B
C
So, being the one who removed the SSRCs: shoot all the arrows this way. Let me give some background on why I removed them. The problem is that the SSRCs conflict with the rid identifiers. So imagine I give you three rids and I give you three SSRCs, and now you're tasked with understanding which SSRC goes with which rid. We would need a solution that would somehow be able to map them.
C
You know, give a one-to-one mapping. And what happens if the SSRC changes, for whatever reason? RTX SSRCs, if those are needed as well: I'm not 100% sure how those would be passed in a similar way. There are all these issues with conflicting IDs, which is why they were removed.
B
At least, and correct me if I'm wrong, traditionally the SSRC conflict problem was kind of swept under the rug. You know, when you had an offer previously, the SSRCs came out and there was no promise they wouldn't change, although I think a bunch of implementations didn't even do much in the way of conflict detection. So I think the understanding has always been that the SSRCs are there but there's no guarantee they won't change. Even in the object model, there was never any event for an SSRC conflict.
C
There's never any problem with SSRCs in the non-simulcast scenario, and I'm talking about Unified Plan, because there's only one SSRC per media section. So the mid header extension is sufficient to, you know, unambiguously route and identify the packets, and the mid will not change.
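The routing logic being described, and the later point about string compares being slower than integer lookups, can be sketched as follows. This is an illustrative assumption, not any particular SFU's implementation: route by the MID header extension on first sight, then cache the SSRC so subsequent packets take the cheap integer-keyed path.

```javascript
// Illustrative sketch: Unified Plan routing by the MID header extension,
// with a lazily learned SSRC fast path (integer lookup beats string compare).
function makeRouter() {
  const byMid = new Map();   // mid (string) -> destination
  const bySsrc = new Map();  // ssrc (number) -> destination, learned lazily
  return {
    addRoute(mid, destination) { byMid.set(mid, destination); },
    route(packet) {
      // Fast path: SSRC already learned from an earlier packet.
      let dest = bySsrc.get(packet.ssrc);
      if (dest !== undefined) return dest;
      // Slow path: consult the MID header extension, then cache the SSRC.
      dest = byMid.get(packet.mid);
      if (dest !== undefined) bySsrc.set(packet.ssrc, dest);
      return dest;
    },
  };
}
```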
C
So we're speaking of the technology that's, you know, put forth in the spec, and we're not talking about Plan B semantics, because this API is not even available with Plan B semantics. Remember: Unified Plan semantics, which are not supposed to signal SSRCs at all, and only use mids and rids. And just to be 100% clear on this, I probably already said it: we are not removing SSRCs from any existing scenarios. We only removed them from this scenario because of the conflict. But let me say a few other things, please.
C
C
E
C
This seems like we're saying we're not going to spec it out because it's a temporary solution, but it seems to be very permanent. Doing work is always harder than not doing work, I realize that, but I haven't heard anyone that said they are not supporting mids and rids say that they intend to support them, or even propose any timeline, as far off as it may be.
B
We did have two developers, Lorenzo and Iñaki, say they are working on mids and rids, but they've encountered some issues with it, correct? Yeah. So it's not like people don't want to do it, but some of the issues do seem like they might cause people to remain with SSRCs, like the performance one. So.
C
C
Then you might be able to make assumptions, that your SFU is going to be contacted from your client, and then you might be able to impose certain limitations, such as limiting mids to only one byte. Then suddenly they're a fixed length; you put mids in your client, and then there's no problem with rewriting the mids if you ever need to, because it's just like rewriting SSRCs. So I don't think these issues are ones we can't overcome right now.
E
In fact, I believe that's kind of what I said. Basically, the entire support of Unified Plan becomes one huge, unseparable chunk of work that involves different components if you have to combine it together with rid and mid support. Otherwise, it's just easier to do it step by step: okay, first we do Unified Plan in the client.
E
We worry about the bridge later, and then a point will come for that as well. Now, there's a much bigger discussion as to what exactly the value is that comes out of rid, because conflict resolution isn't, you know, a particularly sexy sell. So I kind of hope I'm not hearing "we're doing this because we want to strong-arm people into supporting this new thing" when it actually doesn't give you any value. I hope that we're doing the...
E
B
I do want to ask a question about that. Amit, you're raising the issue of mapping the SSRCs to the rids, and you're right, it doesn't do that in the SDP. In fact, we didn't really even solve that in the object model, so neither of those two things gives it. But I guess my question for the developers is: say you just put the SSRCs in without mapping them. I'm just asking a question of the developers: would it cause a problem to anyone to not have that mapping? I'm just trying...
B
C
B
E
C
So, but that's not the case; order-based is not what happens, because you can set your scaleResolutionDownBy, and hopefully in the future we'll be able to modify these settings, because, as the next issue suggests, simulcast layers can be dropped. So you need to specify your preference of order, because it might be that your network forces, you know, some of the layers to drop. And then, if you don't get the right layers...
B
B
B
If you put the SSRCs in order, it's true that the browser can stop sending any of these things, or can deactivate a layer or something; it becomes confusing. But, I mean, all the SSRCs are there; whether they're being sent at a given time is a different issue, I think. I'm just trying to understand what the problem is.
C
Yeah, there's no problem there. So yeah, while the SSRCs should be there, some of them might not be sent if a layer is paused or not, or if layers get dropped, because at a certain point I might drop a layer. So there's no real way for you to communicate which stream is which, right? Whereas with the rids we know how to communicate that, right?
C
We're saying you set the rid, and you tell me the first rid is called "f", and the application logic says that that's the full stream, or whatever, right? But there's no such thing on the SSRC. Yes, so the odd one is not going to be the full stream and the even one is not going to be the medium stream; there's no way to do that.
B
C
B
C
B
I
No, I mean, I understand the mid point, but I'd say there is no current way of saying, for instance, that rid equals "k" is actually mapped to a specific SSRC that you see. And in fact there is a problem in figuring out, let's say, the origin of streams in this specific sense. In general, I mean, as I said, I think that both rid and mid should be able to provide additional information.
I
We are still able to derive the SSRC from that information internally, because, of course, routing on SSRC is much more efficient than doing a string compare for each packet that we receive. So we are still able to implicitly route on SSRC, I think. I mean, I still have a bit of a doubt when it comes to, let's say, RTX SSRCs and stuff like that, even though I guess that, for the RTP stream they typically refer to, mid can be used, I mean, for now.
I
G
I
I
B
B
K
C
C
Yes, because I feel that it's going to be a slippery slope, because that's not going to be sufficient for you. If I only tell you, you know, I have rids 1, 2 and 3, and then I have SSRCs, sorry, 11, 12 and 13 that map to them, now you're going to ask me: what is the MSID? What is the CNAME?
C
F
That's a simplification, because there is no ordering in SSRCs; in fact, there is no ordering logic. I mean, that's a simplification, in my opinion. You have different layers; they may be low, medium and high quality, or resolution, or whatever, but they don't need to be that. You can apply different configurations, different settings, to every layer, and it doesn't mean that one is low and the other one is medium or high. So I think that's absolutely correct.
F
B
We did have to introduce ordering into the encoding parameters, because there are all kinds of questions, like: if you did a get and a set and changed the order, but, you know, kept everything the same, what would happen? So anyway, we did try to clarify the ordering; it does have a meaning, and we'll talk about the drop, yeah.
L
C
Yes, so you can specify, in Chrome at least, I'm not sure about the others, and I'm not saying anything either way, I just really don't know, you can specify, you know, scaleResolutionDownBy to make it medium, high, low, or low, high, medium, however you want. And the only meaningful interpretation is just a priority among the streams. Yeah. So.
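The client-side API being discussed can be sketched like this. This is an illustrative assumption about app code, not anything shown at the meeting: the app builds a sendEncodings list where the array order carries the priority, each entry names a rid, and scaleResolutionDownBy controls the layer resolutions.

```javascript
// Illustrative sketch: build the sendEncodings list for rid-based simulcast.
// First entry is the highest-priority layer; each further layer halves the
// resolution in both dimensions via scaleResolutionDownBy.
function makeSimulcastEncodings(rids, topScale = 1) {
  return rids.map((rid, i) => ({
    rid,
    scaleResolutionDownBy: topScale * 2 ** i,
  }));
}

// Usage (browser only):
// pc.addTransceiver(track, {
//   direction: "sendonly",
//   sendEncodings: makeSimulcastEncodings(["f", "h", "q"]),
// });
```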
B
What we've talked about here is introducing SSRCs corresponding to the encoding parameters. Amit brought up the issue of the SSRCs relating to RTX and FEC being somewhat of another complicating factor; that was in the object model and isn't anymore. I have a question for the developers: is omitting the RTX and FEC SSRCs going to be a problem?
D
B
C
There's no issue; we're just degenerating back to: why even have rids, why even have the new format? Why not just use the old format, just signal all the SSRCs as they were, just, you know, the munging scenario? Why not just enable people, instead of munging the SDP, to directly generate it? The idea is that the new format is better because it allows you to add rids, which implicitly give you RTX, because it's just the same rid ID on the repair extension header.
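The "munging scenario" referred to here can be made concrete with a small sketch. This is an illustrative assumption about what legacy client apps do, not code from any project mentioned: injecting Plan-B-style a=ssrc lines into the video section of a local SDP before handing it to the server.

```javascript
// Illustrative sketch: inject a=ssrc-group:SIM and a=ssrc lines into the
// (single) m=video section of an SDP string, as legacy simulcast munges do.
function mungeSimulcastSsrcs(sdp, ssrcs, cname) {
  const lines = [`a=ssrc-group:SIM ${ssrcs.join(" ")}`];
  for (const ssrc of ssrcs) lines.push(`a=ssrc:${ssrc} cname:${cname}`);
  // Split on media-section boundaries and append to the video section only.
  const sections = sdp.split("m=");
  const munged = sections.map((s) =>
    s.startsWith("video")
      ? s.replace(/\s*$/, "\r\n" + lines.join("\r\n") + "\r\n")
      : s
  );
  return munged.join("m=");
}
```

The point in the discussion is that directly generating this, instead of string-munging it, is exactly what the removed object-model fields used to allow.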
L
But I think there are two different things here. One is whether the SSRC should be exposed in the API, and the other is whether the SSRCs are present in the SDP, right? Because we are kind of mixing both issues. If you put the RTX in the group in the SDP, you don't need to map it in the API, but you do need it somewhere; I mean, if you are not doing rids, of course.
B
Yeah, I mean, basically, just as we described in the history, the SSRCs have always been part of the API from the beginning, and we've, you know, had multiple meetings where we decided to have SSRC support in some form. And the reason was really because of this transition issue that's been described. It's not like people are saying "oh, we never want to go to the new thing".
B
It's just separating out the changes in the conferencing server from the changes in the client. And, you know, we have a charter that expires in March 2020, by which time we're supposed to demonstrate that, you know, Unified Plan is usable, and get the basic browser interop issues solved. Adding the conferencing-server work on to that would make that deadline harder to meet.
B
C
Can I ask a question regarding your last statement? Yeah. What would be the problem if legacy SFUs, and I'm going to call them legacy for lack of a better word, that don't support rids and mid continue using the munging scenario in their client apps? Because it's an app change, right? This is a new API, and the new clients move to use the new one. Is there a problem with that? Well.
B
Yes, I think there is a problem, because from a W3C perspective, what it would mean is we wouldn't be able to demonstrate the interoperability of Unified Plan. So, I mean, part of it is we're supposed to develop tests. When we talked about WPT tests, they were based on the SSRC model because they're loopback tests. We also have KITE tests, but those work with the conferencing servers that the developers have just mentioned, and all those tests would fail if we don't include the SSRCs.
B
L
B
We meet the W3C criteria, right? We get all this done, you know, WebRTC gets approved, it goes to PR, all of this. At that point, assuming we don't change the standards to mandate the SSRCs, you know, a browser vendor could, a year or two from now, say: "hey, we're going to deprecate this thing; you have another, whatever, 18 months to get rid of it; it's not part of the standard; you knew this was going to happen". But at that point you've got the interoperability.
B
L
C
C
If it's the playground, this is something that we can solve without doing all of this, right? I can imagine we put something in stats, and then we just get a playground to work that everyone can use. The playground is not part of the API or the spec, right? It's a tool. We can get a tool to work. I'd rather hack a tool than hack the spec.
C
H
B
C
So let me complicate your interop scenario. Regardless of whether SSRCs are signaled or not, the requirement will be that the SFU, or, yeah, the other party, will have to negotiate back the simulcast line, including rids, and including the rid extensions, right? Because all of those are going to be sent. So what this means is that there's going to have to be work done on the server regardless. That's...
B
B
C
C
C
B
H
I
I
It depends on which mode you use, yeah; it really depends on whether rid is in place or not, and whether we do have the SSRCs available. If we do have the SSRCs available, then we know that there are going to be, I don't know, let's say three different streams that are going to be sent, and so we are able to handle all three of them at the same time and route accordingly. So you are definitely able to do simulcast without using the rid feature, yes.
I
I
C
Yes, you can do it with only SSRCs, because we're already doing it in the legacy scenario. My point, what I'm saying, is that if we're doing it with the new scenario, where we're signaling rids and we're signaling a simulcast line, then the response must, as it appears in the spec, also contain the same rids and the simulcast line in the reverse direction. And it also, you know, must be able to support the rid extension, because otherwise there's no way for us to send the rids.
C
E
So I just wanted to comment on this very briefly. The SDP that actually gets into the browser can be handled entirely at the client side. That doesn't necessarily mean munging it; part of the offer/answer engine can actually live in the client, which is what happens for us, for example. So the fact that you change how that operates doesn't mean that you are changing your server; you don't necessarily have SDP going all the way to the server. Well, that's munging, pretty much.
C
C
B
Mm-hmm. Well, I think what Emil is saying, and correct me if I'm wrong, Emil, is that basically everybody here is already saying they're going to change their client to do Unified Plan; I don't think that's under discussion. But basically the issue would be what is signaled to the SFU, and I guess you're saying, Emil, that you're basically going to, in your client, take off the rid and mid stuff, right? Yes.
C
B
C
B
E
What you're supposed to do by spec, yes. And I do want to insist very much on the fact that this offer/answer, as far as the browser is concerned, is supposed to happen, well, we know, between the offer and the answer. And no one says that, hey, if you do it at the SFU it's not munging, but if you do it elsewhere then it is munging. That is just not accurate. Yeah.
B
I mean, basically, what people have been talking about is that they just want to separate the work on the client from the conferencing server. I don't think anybody said they're not going to do the client work. So, you know, anything needed to change the SDP into the right format for their conferencing server I think is really not a problem; nobody has objected to that part of it. So.
C
No, it's not the changing, it's the semantics. What I mean by semantics is: you are in fact telling the other peer connection that you're interoperating with that you support doing this, that you support the demuxing, or accepting and understanding, of the rid header extension. That's what I mean: the semantic meaning of you signaling this is that I should be able to trust it, for my RTX and for my signaling of streams. That's what it means.
B
I guess the question is, say, let's assume for a second that we put all the SSRC info in, right? And, you know, the client is still sending the rids, you know, and RTX and FEC and all that stuff. I'm just trying to understand if anything bad will happen here if the conferencing server just, since it has the SSRC info it needs, you know, drops all the rid header extensions and doesn't look at them. Can the SSRCs change? Well, they always could, but that's always been true, even in Plan B.
B
A
B
A
B
The remaining issue that Amit brought up was: you have to do it for the RTX and the FEC as well, right? You have to really do it completely, for everything, for this to get the backward-compatibility issue solved that people have been talking about. So I haven't heard anything; I mean, we've said there are certain limitations, you couldn't map one to the other, but I think that's handled by the ordering. And nobody's saying this is a perfect thing, but I think people are saying...
B
B
Well, I think that the proposal on the table, the original resolution, was to keep the SSRCs in the SDP more or less as they are in Plan B. That's kind of what the resolution of 1174 was. And I guess what I'm trying to understand is whether there's anything, any reason, why that can't be done. It is being done in Firefox; I don't know if anybody's encountered an issue with Firefox to say why it doesn't work. But, well.
B
I think the reason it wasn't put in the spec is that people didn't want to enshrine it forever. I think what we're really talking about here is just to get us to the point where we get WebRTC 1.0 to PR, right? Do all the testing and all the interop, prove it works. I think separating that from changing the conferencing servers just puts that off for a while, I think.
H
B
Okay, well, as opposed to that, you still have a transition problem, though, because a bunch of the people here will probably remain on Plan B. So the other alternative would be to keep them with Plan B and have no interop; you know, that's going to create problems for running on multiple browsers, right? You have to get them off of this somehow. I don't know; the question is which is worse.
C
Are we lying here, then, by saying we've gotten WebRTC done, the spec is done, here's the spec, and look, everything is interoperating, when you're interoperating with things that are not in the spec, or interoperating because of things that are not in the spec? So am I lying by saying that this is a success? So.
M
G
B
B
M
M
B
M
B
E
B
It's not lying, in the sense that having interoperability, you know, demonstrating the thing actually works... I mean, also, having very big differences in the SDP between browsers is not going to make people's lives easier, right? It's going to make it a lot harder for developers to develop stuff. So.
H
C
F
F
C
C
L
C
B
C
B
B
It would basically separate the object model and the SDP; you'd have to keep your own table of which SSRCs, you know, correspond to which encoding-parameter lines. But, you know, if you do the ordering as we suggested, then, yeah, I mean, a bunch of the clients here, Emil and others, I assume they're going to use the transceiver with the encodings and setParameters/getParameters and all that stuff. The intent is really to use the WebRTC 1.0 API and Unified Plan as specified.
B
B
C
Can we go back to the lie argument? So, yeah, I don't remember who it was, maybe it was Jana, I don't know, who said that we would be lying either way. Can you clarify that again? Because putting them in the actual object model is a way to say: no, we're not lying; the SFUs might be lying, but we're not signaling things that we shouldn't be signaling.
F
C
C
So, some of the things that rids add: with rids you had the ability to add some parameters on these identifiers. So I can say that rid number one has a max bitrate of something, and that means that the bitrate cannot go higher, but it can be lower, because it doesn't really pin it down. We have removed that from the spec and we're not using it. But yeah, you're right, there's no way to signal this, and we said that this would be signaled out of band.
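For reference, the rid restriction syntax being discussed looks like the following in SDP (illustrative values; the restriction parameters after "send" are the part that was dropped in practice, leaving only the bare identifiers and the a=simulcast line):

```
a=rid:f send max-br=1500000
a=rid:h send max-br=600000
a=rid:q send max-br=200000
a=simulcast:send f;h;q
```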
F
C
I would agree. I don't think that breaks interop, or says that it's not working as intended. You still get three streams. You might not be able to really know which stream is the largest, but you can still probably process these streams to figure that out. Maybe, I don't know, I'm not aware of all the extension headers; maybe there's an extension header that tells you the resolution, or you can maybe go by the VP8... I don't think so.
C
Out of band, yeah. But that's my point: I don't think it's required, to prove interop, that you know exactly what's going on in each of the layers. Just like, if I'm just sending one layer, you don't really know in advance, from signaling, what the resolution of that layer is; you're just getting my video. So I don't think that breaks the interop argument that Unified Plan would be working. Yeah, okay.
C
They don't really have to be scaled down, right? Simulcast means that I'm sending multiple layers, for whatever reason. Maybe what I'm sending is one layer that has been treated with some optical filter that helps people with colorblindness to see the screen better. Maybe that's what it's doing; there's no...
F
In ORTC you can even specify the codec in every encoding, so you may send the same stream using VP8 and H.264, and that's sort of simulcast. Well, it's not simulcast, but it's different encodings. This is about encodings; it's not about simulcast. Simulcast is just a very particular use case of sending multiple encodings.
C
Exactly. I don't think we need to see them and prove that they're different; just that there are multiple, that there's one source that, for lack of a better term, gets inserted into the peer connection and gets sent via the transceiver multiple times to the SFU. Yes.
B
B
H
B
If you wanted... in the current state, the implementations are different enough that I don't think you can claim that either the API or the protocol interoperates. So that could probably, you know, result in removal of the entire simulcast material from the document, because we wouldn't be able to meet the W3C's PR requirements for interoperability. So I mean, I think this whole... basically, we could fail on this issue. Well,
B
B
I think we've got a problem, and the charter is up in 2020. It just creates a lot of issues if we don't get this stuff to PR by 2020. Remember, I mean, a lot of people have been saying that we can't move on to new work until we finish this, so we could potentially be looking at a year or two or more of work on the existing 1.0 API. There could be problems in rechartering. We could have to remove the simulcast material from the document entirely.
B
It basically creates a scenario for fairly spectacular failures of the group, and, you know, the big question is: is it absolutely necessary to do that? Or can we just, you know, meet the PR criteria and let browsers handle the transition as they would, and kind of remove that from the standards process, right?
H
B
I don't think it's an IETF issue. I mean, it's an issue that has come up in reaching the PR stage for WebRTC, so it really is pretty much exclusively W3C. I mean, nobody is saying they want to change the IETF standard. I don't think anybody here has said that they want to change the long-term goal of getting to mids and rids. It's just a question of how we demonstrate the interop and finish the API.
M
M
So let's assume that there is no API change, that there is really no decision needed from that perspective, but what we need is something specific about demonstrating interoperability of the simulcast API surface, and whether that involves the SSRCs or mids or munging or so on, I think, is a very different discussion. Yeah.
B
But it's basically perpetrating a fraud to say that we've got an interoperable API when in fact nobody can really use it, and this API will in reality remain proprietary. I think that's a real problem, if we're basically saying, wink wink, nobody's actually going to use this API. Basically, again,
M
B
M
I agree it would be a problem. I'm not saying we need to look at the fraud; I don't think that's in anyone's benefit. I'm saying there are many ways of looking at this problem, and I'm not sure that looking at it from a spec perspective, when it is in fact an interop problem, is necessarily the right approach.
B
C
L
C
F
I know, which, as we said, is an issue when you are doing bidirectional peer connections instead, because in that case you cannot disable it on the receiving side, because it's the same peer connection as the sending side. And so in that case you have to generate, or let's say manipulate, the mid RTP extension on the way out as well, which again is doable. It's a bit of a hassle, and performance-wise it's not ideal, for several reasons. So that's the...
I
That's not necessarily true, because, as was said, it's not only browsers out there. You may actually be injecting media from other tools that are not browsers, like FFmpeg or GStreamer or whatever, as we do in many scenarios right now. In that case, you have RTP packets that don't contain any RTP extension at the moment. So you have to inject an extension: you have to manipulate the packet, move a lot of data, and you have to do this for a lot of packets and a lot of participants at the same time.
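The packet manipulation being described can be sketched roughly as follows: appending a one-byte RTP header extension (RFC 8285) carrying the mid to a packet that has none. This is an illustrative sketch only, assuming no CSRCs, no pre-existing extension, and that extId is whatever id was negotiated for urn:ietf:params:rtp-hdrext:sdes:mid:

```javascript
// Inject the sdes:mid one-byte header extension into a plain RTP packet
// (e.g. one produced by FFmpeg/GStreamer, which sends no extensions).
// Assumes: 12-byte fixed header, CC == 0, X bit currently unset.
function injectMidExtension(packet, extId, mid) {
  const midBytes = new TextEncoder().encode(mid);
  // One element: 1 byte (id/length) + mid, padded to a 32-bit boundary.
  const elemLen = 1 + midBytes.length;
  const padded = Math.ceil(elemLen / 4) * 4;
  const out = new Uint8Array(packet.length + 4 + padded);

  out.set(packet.subarray(0, 12), 0);        // copy fixed RTP header
  out[0] |= 0x10;                            // set the X (extension) bit
  out[12] = 0xbe; out[13] = 0xde;            // one-byte-header profile
  out[14] = 0; out[15] = padded / 4;         // extension length in words
  out[16] = (extId << 4) | (midBytes.length - 1);
  out.set(midBytes, 17);                     // padding bytes remain zero
  out.set(packet.subarray(12), 16 + padded); // original payload follows
  return out;
}
```

This is the per-packet copy-and-shift cost the speaker is pointing at: every packet grows and its payload moves.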
I
C
I understand what you're saying, and I agree that that's an issue, but we can't say that these legacy systems that provide video, that don't follow the spec, are part of the interop area. They're obviously not interoperating. Those devices that you talked about, devices that don't send mids, are not interoperating, no matter what, even if you munge it on the other side.
B
That's really not true, because you're able to not negotiate the rids and the header extensions. SDP requires that when you don't negotiate stuff, it gets turned off, so saying they don't interoperate, that's really not the case. And in a practical sense, you know, if you can't use this spec, if you can't ship it at any large scale, you're basically... and the reality is, people will remain with the proprietary stuff. You have a transition, probably Plan B, anyway, right?
L
M
C
So, going back to that problem: a device that doesn't send mids is fine, that is fine, but that device, I don't know if it can receive when it doesn't receive mids. I'm not sure if that's going to be according to spec, because it's going to require signaling SSRCs, which is not the case, unless it only receives one video, and then it's not a problem.
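The SSRC signaling mentioned here is the a=ssrc attribute lines (RFC 5576) that a receiver without mid support would have to rely on to route streams. A minimal sketch of reading them, assuming the simple one-attribute-per-line form:

```javascript
// Collect a=ssrc attribute lines (RFC 5576) from an SDP blob into a map
// from SSRC to its attributes (cname, msid, ...). Illustrative sketch;
// real SDP parsing has more corner cases than this regex handles.
function parseSsrcAttributes(sdp) {
  const map = new Map();
  for (const line of sdp.split(/\r?\n/)) {
    const m = line.match(/^a=ssrc:(\d+) ([^:\s]+)(?::(.*))?$/);
    if (!m) continue;
    const ssrc = Number(m[1]);
    if (!map.has(ssrc)) map.set(ssrc, {});
    map.get(ssrc)[m[2]] = m[3] ?? true; // flag-style attrs become true
  }
  return map;
}
```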
A
C
A
We're relying on... if you reject some of the functionality that we want to be part of 1.0 in order to achieve interop, then we've only demonstrated interop to some extent. We haven't truly demonstrated interop with the thing that we say is the 1.0 feature, right?
A
If we want to truly demonstrate 1.0, then whatever is in the spec needs to be what's tested on the wire. If we want to make the transition easier, that's something we can do, like adding read-only stuff to the SDP or whatever, but that's separate from actually demonstrating the end goal. I agree.
M
And I think we should also not be too... I mean, it's important that we demonstrate that we have confidence that what is in the API can work and interoperate. We cannot and should not try to demonstrate that every single possible combination of the features that are possible in the spec can be shown as working. I mean, what we need to show for 1.0 is that we've reached a stage where it is good enough, where people don't come and complain to me that none of the browsers work the same way, but...
B
B
M
B
C
Bernard, I understand that you're seeing this as an API issue, because it's going to be in the WebIDL API, but I think the point is that if you're adding a field to the API, the reason it's supposedly there is to support backwards compatibility or something like that. It's not really used, or it's not really in the API: you can't specify it, you can't do anything with it. It's not...
C
It doesn't appear in the spec anywhere, except for the fact that, you know, you might understand this field. That, I think, is why we're saying it's not an API issue. The API is fine, because the API doesn't have you specifying SSRCs for RTX or anything. It just says: you say you want these streams, you want these rids to identify them, and you want to use RTX. That's the API.
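On the wire, the rids given through the API surface as a=rid and a=simulcast lines (RFC 8851/RFC 8853) rather than SSRCs. A rough sketch of pulling the offered send rids back out of an SDP blob, assuming the simple case with no "~" pausing and no alternative lists:

```javascript
// Extract the rids offered for sending from the a=simulcast line
// (RFC 8853 syntax, simple case: "a=simulcast:send q;h;f").
function sendRids(sdp) {
  const m = sdp.match(/^a=simulcast:send ([^\s;][^\r\n]*)/m);
  return m ? m[1].split(";") : [];
}
```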
F
Are we talking about also removing the SSRC from the object model, from the SDP, or will it move?
B
C
F
C
Example, right, exactly. That's something that we need to fix. My point is that these things should be backwards compatible; we should look at them as backwards compatibility. What we've been doing is relying on them for implementation: we're saying the spec says something, and we're doing that and other things as well, and we're only relying on those other things to get the interop and to get the actual connectivity between everything else. We need to be able to show this; that is my understanding, or my interpretation.
M
F
C
B
C
B
J
B
C
Correct, there's a bug, and I think for interop we need to show just the spec. We need to show, you know, only mids, no SSRCs, and things working, and then that's where you can say: hey, the spec really works. Otherwise you can say the spec is there, but, you know, you need all these other things that we didn't document in order for it to work.
F
C
F
Something that may be useful for the problem with the bidirectional peer connection, when you use mid, is the ability to set the mids for sending. This is legally possible in the SDP by setting the direction, I think, to sendonly on the RTP header extension, but I don't think this is implemented in browsers. It would mitigate the problem of using mid in bidirectional peer connections when talking to an SFU, I mean, or...
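What's being suggested would amount to adding a /sendonly direction qualifier to the extmap line for the sdes:mid extension (RFC 8285 allows a direction on a=extmap). A hedged SDP-munging sketch, not verified against any browser:

```javascript
// Mark the sdes:mid header extension as sendonly in an SDP blob, so a
// bidirectional peer connection would emit mids without expecting them.
// Illustrative only; browsers may reject or ignore the munged line.
function midExtmapSendonly(sdp) {
  return sdp.replace(
    /^a=extmap:(\d+) urn:ietf:params:rtp-hdrext:sdes:mid/gm,
    "a=extmap:$1/sendonly urn:ietf:params:rtp-hdrext:sdes:mid"
  );
}
```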