From YouTube: WEBRTC WG VI July 8 2020
Recording of the W3C WEBRTC virtual interim on July 8, 2020
A: ...if the connection allows it, VP9 may be transmitted as well, but for higher resolution. Another example could be to have hardware codecs and software codecs at the same time. For example, you know the device supports hardware H.264, so you want to use that for some layer to make sure that something gets through, and, if the resources allow it, use another software codec where you have more control, also over the quality. Hardware codecs are notoriously tricky to use, because you don't necessarily know the quality that you're going to get out of them, or whether they're going to give you the bitrate you ask for. We had a similar mechanism previously, from ORTC, in the encoding parameters: we had a codecPayloadType, which was taken from the payload types that you get from the offer/answer, and that one was removed from the spec because there were some issues with it; you don't know it before negotiation.
A: So here is sendEncodings, or just imagine the encodings if it was setParameters, where you would have a new entry, highlighted in green, for the codecs. You would have codecs listed, for example AV1 and VP8 for different layers, and you could have different things there. Those codecs entries are taken from the sender's getCapabilities() call, just like with setCodecPreferences: it's an array and you put your preferences there. You could imagine that you put all the preferences you want before the offer/answer and they get filtered afterwards.
A: So if you can go to the next slide, I have a summary of what I said. The codecs are taken from the getCapabilities() call on the sender; you shouldn't be able to modify them, just like with setCodecPreferences. It doesn't have the issue that we had with the transceiver, where we wouldn't have had a valid payload type before negotiation. And the question that I got on the issue on GitHub was: why don't you have two senders? The reason for that is: if you have simulcast, then you have resource allocation.
A: So we would have better resource allocation: bandwidth allocated between the two layers, better allocation of processing power, and you would allow graceful degradation, so disable the higher layer first and then degrade the quality as needed on the two different layers. Also, something that this API could allow, not that it's necessarily needed, because I don't think that's a problem people have mentioned before, is switching codec on a simulcast layer without going through another offer/answer negotiation; it would just use setParameters. So some low-power device trying to call can only do H.264.
A: Allowing different codecs on different simulcast layers allows proper allocation between the simulcast layers: if you ran out of bandwidth, you would disable the top layer, which is not something you can do right now if you have different senders with different codecs. It would allow some use cases where we use a more complex codec...
A: ...at the smaller sizes of the simulcast, so you have better performance on bad networks using less bandwidth, for example with AV1, and switch, at the higher layers, to a different codec that maybe has more features, is more computationally intensive, but might be supported by the hardware, for example. So you would guarantee that something gets transmitted. And yeah.
A: Yes, right now I'm suggesting it for the WebRTC extensions, and I guess in the future we might move things from the extensions to the main spec if they're widely implemented or seem very useful, but right now it would be an extension. I'm not sure how you would detect that it's supported, but trying setParameters and reading back to see if the codecs in the dictionary are still there could be a way to detect whether it works. But yeah, that could be an extension that we would implement.
A: You would have to check that all the codec entries are the same as the ones that are returned by getCapabilities(), in the same way that it's done with setCodecPreferences. So you cannot change any parameters; you cannot change fmtp parameters or anything. If you have SVC modes listed, you would put them the same way; you would probably just copy the exact codec entry without any changes.
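The matching rule described here can be sketched as follows. The helper below is hypothetical (there is no such standard function) and the capability dictionaries are simplified, but it shows the intended check: every codec attached to an encoding must be an exact, unmodified copy of an entry returned by getCapabilities, fmtp line included.

```python
def validate_encoding_codecs(encodings, sender_capabilities):
    # Every codec dictionary attached to an encoding must be an exact,
    # unmodified copy of an entry from getCapabilities(); changing fmtp
    # parameters (sdpFmtpLine) or anything else is rejected.
    allowed = sender_capabilities["codecs"]
    for encoding in encodings:
        for codec in encoding.get("codecs", []):
            if codec not in allowed:
                raise ValueError(f"codec not in sender capabilities: {codec}")
    return True

caps = {"codecs": [
    {"mimeType": "video/VP8", "clockRate": 90000},
    {"mimeType": "video/H264", "clockRate": 90000,
     "sdpFmtpLine": "profile-level-id=42e01f"},
]}

# An encoding that copies a capability entry verbatim passes the check.
ok = validate_encoding_codecs(
    [{"rid": "low", "codecs": [{"mimeType": "video/VP8", "clockRate": 90000}]}],
    caps)
```

A modified entry, for example one with a different profile-level-id, would be rejected, which mirrors the "no fmtp changes" rule above.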
C: I'd say go for it. I think this would even be compatible with clients that don't support this, because they are already prepared for it, right? You can fake this by doing a little offer/answer dance on the sender side without telling the receiver side. It's just that today this would happen to all layers, but I don't see why it couldn't happen for different payload types in different layers; it should work.
B: I think there's interest in it. All right, thank you. So the next issue is from WebRTC-SVC. This issue was brought up by Dr. Alex, who is also working on AV1, and the question came up because of some unique capabilities of AV1; we'll get into that in a bit. So today, the WebRTC peer connection supports the negotiation of multi-stream simulcast: basically the SFU, for example, can send an offer indicating how many streams it can receive, and then the browser can answer.
B: This is a little weird, because with SDP you only really negotiate the number of streams. We call it encoding parameters, but it's really only related to streams. And so, if you have multiple encodings on a single stream, it doesn't really show up in the encoding parameters, except for what we call the scalability mode, which is in WebRTC-SVC.
B: So you basically don't negotiate single-stream simulcast; you just set it in either setParameters or addTransceiver. So in the process of writing the test cases to figure out whether AV1 worked, and potentially would interoperate between browsers, Dr. Alex came up with the following question: is it legal to have mixed simulcast transport? As an example, you could have three encodings as three layers, such as what Florent described, and then within each of them you could have three simulcast layers, so you have nine encodings in this kind of hierarchical manner.
F: Nobody could think of a use case that would generate that. But if you follow the spec and try all the possible scenarios that would be compliant, that is one of them. So I was raising it, saying I think we should prevent people from doing that; that's not the right way to do things. And if that's the answer from the group, then we should explicitly do something about it in the spec, right?
B: We've developed a PR, so let me go through it and we can talk about any potential modifications. The PR basically says, and this is kind of implicit, that browsers may or may not support sending single-stream simulcast, and SFUs may or may not support receiving it, right? So it's an optional capability.
B: Browsers today can't receive multi-stream simulcast, and we don't think it makes any sense to receive single-stream simulcast either. In fact, right, Dr. Alex, it would always be filtered in an SFU, but I think there are also going to be problems in just receiving it and deciding which operating point to use. Right, Dr. Alex? I think that's true. Yeah.
F: What I'm afraid of, you know, Florent, you say let people experiment with this, but if you let people try everything with the expectation that it's going to give a certain result, and it does not, they're going to come back to us and say that thing is broken, right? So I'm a little bit careful about putting this in; I think it's a real nightmare to use. Most of the time simulcast or SVC is used on the sender side to do adaptation; if you mix the single source and the multiple sources...
F: ...you start having problems with the bandwidth estimation and the congestion control, to start with, and then the SFU implementation becomes completely crazy, right? So a reasonable approach is to say that the only codec that does single-stream simulcast today is AV1 anyway, right, and people are not going to have an AV1 stream without the specific RTP payload filtering system on the SFU, which is what Bernard spoke about: the decode targets and things like that. I think we should not mix; I think that's what Florent said.
B: So anyway, this PR proposes an OperationError if you attempt to do this. Basically you look at your encodings: if there are multiple layers and there's an S mode, the proposal in this PR is to throw an OperationError, and that would be for either setParameters or addTransceiver. So it would be illegal. The other reason...
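A minimal sketch of the proposed check, using simplified encoding dictionaries; a Python RuntimeError stands in for the spec's OperationError:

```python
def check_encodings(encodings):
    # The PR's rule: more than one encoding combined with any S
    # (spatial simulcast) scalability mode is rejected, for both
    # setParameters() and addTransceiver(). RuntimeError stands in
    # for the spec's OperationError.
    if len(encodings) > 1 and any(
            enc.get("scalabilityMode", "").startswith("S")
            for enc in encodings):
        raise RuntimeError("OperationError: mixed simulcast is not allowed")
    return encodings

# Allowed: single-stream simulcast carried by one encoding's S mode.
check_encodings([{"scalabilityMode": "S3T3"}])
# Allowed: classic multi-stream simulcast with non-S modes.
check_encodings([{"scalabilityMode": "L1T2"}, {"scalabilityMode": "L1T2"}])
```

Combining several encodings where any one of them uses an S mode is the "nine encodings" hierarchical case that the check forbids.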
F: Correct, and we think that case is not going to be dynamic. We think at the beginning people will decide either to go multi-stream simulcast, possibly with a different codec for each of the streams, right, or to choose AV1 and use the intrinsic SVC capability of AV1 for all the layers, in which case they should just go for the modes that are defined for AV1 single-stream simulcast. Yeah.
B: So the other reason: we had a question that I think also came up, and this was more in the AV1 group, which is how do you know whether an SFU supports single-stream simulcast, right? Not every SFU will support it. And then how do you know how many encodings it can handle? In the multi-stream case we know this because you can negotiate it in the offer/answer, but the question was, for the single-stream case, how do you know?
B: Basically, one way to know is using getCapabilities(), and it works either in a browser or in an SFU. If you do getCapabilities() on the receiver and you see S modes in there, it means that SFU is capable of receiving single-stream simulcast. So as an example, if you had, say, the S2T1 and S2T1h modes, but no other S modes, that would mean the SFU can support two simulcast encodings on a single stream, but no more than that.
B: So basically we have a way, and this is obviously outside of offer/answer, but if the SFU sends its capabilities before you do addTransceiver and the offer/answer, you can know whether it supports these single-stream modes, and know whether to set up single-stream simulcast or to do the regular multiple-SSRC kind.
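As an illustration of this filtering idea, here is a small helper (hypothetical, not part of any API) that derives the maximum single-stream simulcast count from a receiver's advertised scalabilityMode strings:

```python
import re

def max_single_stream_simulcast(scalability_modes):
    # From a receiver's advertised scalabilityMode strings, work out how
    # many simulcast encodings can be carried on one stream: "S2T1" or
    # "S2T1h" means two spatial/simulcast encodings, "S3T3" means three,
    # and no S modes at all means single-stream simulcast is unsupported
    # (returns 0). Illustrative helper, not a standard API.
    best = 0
    for mode in scalability_modes:
        m = re.fullmatch(r"S(\d)T\d+h?", mode)
        if m:
            best = max(best, int(m.group(1)))
    return best

# Example from the discussion: only S2T1 and S2T1h are advertised, so at
# most two simulcast encodings fit on a single stream.
print(max_single_stream_simulcast(["L1T3", "S2T1", "S2T1h"]))  # 2
```

If the capability exchange happens before addTransceiver and the offer/answer, the result tells the sender whether to configure single-stream simulcast or fall back to multiple SSRCs.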
F: To be thorough here: the L1T2 and L1T3 modes are used for VP8 and H.264 in the mixed mode, where you simulcast the spatial layers, but each of the spatial layers has multiple temporal layers. Right now in the WebRTC implementation, when you do simulcast with VP8 or H.264, you automatically also get those modes switched on, right, with either two or three temporal layers.
F: Right, Florent, the SFU doesn't have to implement the JavaScript API, but it's written in the spec of the WebRTC JavaScript API, in the simulcast section, that the use case of simulcast is to send to an SFU, right? So where does the SFU appear in the spec? Here it's not normative, it's more informative. We say: by the way, guys, how do we signal to peers, since with simulcast one of them is going to be an SFU implementation, that I have the capacity to support a specific layer?
F: We don't have the equivalent for this of what rid is for simulcast; there is no rid, and there is nothing in the SDP today that allows doing that. We just point people to the fact that, on the browser side, RTCRtpReceiver getCapabilities() and filtering for those modes gives you the answer to your question, and you can do the equivalent on the non-browser side of the WebRTC implementation and the signaling; it's then up to whatever signaling you adopt.
B: This kind of scheme would not work for that complex mixture, though. For example, if you could mix them, the SFU might have to say something like: I can handle up to eight streams, with four of them having up to two layers, and then four in a given one. That kind of capability is very difficult to figure out and signal, whereas this is very, very simple: I support these S modes, or I don't. It tells you exactly what you can send. So that's another argument.
A: Is there really anything to support? I mean, for me this is a property of the application, which is querying its SFU and asking, possibly in a proprietary way, what modes it supports. So then it seems logical that the client will not use any of the modes that aren't reported as supported by the application. I don't see why that necessarily needs to be discussed in the spec.
B: The other alternative, which has been proposed if we don't do this, is that you'll actually have an SDP proposal coming in that you'll have to support, and you'll have a very, very complex, you know, 20- or 30-page SDP proposal. So that's the alternative.
F: The case exists: the AV1 RTP payload spec lists all those modes, which cannot be implemented or supported in reality in a full system without that kind of thing. And then there's another one, the WebRTC-SVC list from Bernard, which also lists them and is more generic than AV1, since it also lists what is possible with VP9 and what is done with VP8 and H.264 today.
B: Yeah. So far we haven't heard interest in this multiple-mode thing and, as I described here, it would be quite complicated: you'd have to go down the SDP route and have a very, very complex SDP negotiation to get this done, if we felt we needed to support these mixed modes.
F: Yeah, again, the use case was that specific mixed use case, and you are correct, we do not know of any people that would want to do that. But since it's permitted by the spec, we're just trying to avoid having people even trying, right, because that sort of use case would be compliant.
E: Just one note: I'm in favor of this, and I just wanted to say I want to be careful about saying it will always be done in an SFU. I think there are use cases where endpoints might want to do this, not the mixed mode, but in general picking out single streams of a simulcast. You know, think about security recording devices: you might record a different stream than the one you show.
B: A little reminder about where we are with WebRTC-ICE. We have an editor's draft and 13 open issues. This has not been widely implemented; there are a few implementations of it, only one of which is working today. Actually, maybe there are more implementations. Anyway, the functionality is basically a standalone ICE transport with no SDP dependency.
B: There is an issue open for adding forking support. I would mention there's also a webrtc.org bug for forking support and an unmerged PR that I believe supports forking. I'd like to tie this back to the WebRTC-NV use cases. As it is, what's in WebRTC-ICE supports use cases such as data channel in workers.
B: I would note that data channel in workers is not a use case in the WebRTC-NV use cases, but it is something we've had requests for; in particular, someone filed an issue on webrtc.org recently, 2553, asking for this. WebRTC-ICE as it exists currently does not meet the requirements for the multi-party online game use case, which is in the WebRTC-NV use cases, because that case requires forking. It doesn't support the calling-with-multiple-endpoints case, because that also requires forking. There's the mobility use case in the NV use cases, which requires Flex ICE; I don't think that one mentions forking. And then we've discussed at various points peer-to-peer mesh use cases, such as at TPAC 2019; those also require Flex ICE. So we're in a little bit of an interesting dilemma here, because we have this spec, but it doesn't actually support any of the use cases in the WebRTC-NV use cases document, and so we've been trying to figure out what we need to do.
B: What do we need to add to the spec to at least support some of the use cases we've said we want to support? So, a little reminder about Flex ICE: Flex ICE has a lot of features, of which forking is only one, but there are a lot of ways to control the paths and things like that, and we discussed that at TPAC 2019. So I'd like to hand it over to our esteemed guest, Peter. Thanks.
H: So this is the main slide from the deck that I had at TPAC 2019, and basically it shows the three things that the peer-to-peer mesh case needed. The main one I talked about at TPAC 2019, and the big question was this one: are we willing to implement ICE forking? Next slide. Which is really a question of how hard it is to implement. So after TPAC 2019 I went and investigated how hard it would be to implement ICE forking, and it wasn't that bad.
H: It was a lot easier than I expected. Next slide. Oh, I blew it; I was gonna give you a drumroll. All right, thanks: drum roll... looks like it's not that hard. So this is how it kind of felt when I finally realized how much code needed to change: it wasn't nearly as much as I thought it would be. I was pretty happy about that. And next slide. That kind of led to the question of, okay...
H: ...I implemented it in libwebrtc and provided it upstream as a CL, and Harald was asking basically, you know, what's the interest in this. So we've answered the question of how hard it is to implement, and now that punts us back to the question of how useful it is. That's what I want to discuss now. Next slide.
H: So, a little background on call forking, in case you don't know what it is: it's basically when you want one device to call multiple devices. There are multiple ways of doing this, but ICE forking can be a particularly good one, and I'll explain why on the next slide. And it's a little funny, because the original use case for ICE forking, back in the ORTC days, was call forking. So the fact that we're discussing this is a little bit of going back to the future. Next slide, right.
H: So why do you need ICE forking for call forking? You don't need it, exactly, but if you want to be fast and efficient you need it, so let me explain why. Next slide. For example, you could say: Peter, why don't we just wait to do ICE? You know, send out multiple offers, wait for one particular answer, and then do ICE after that. Well, it makes the call setup slower. It's nice to be able to do ICE while the various callees are ringing, so that when one of them answers it's a lot faster.
H: So if you wait to do ICE, you're just kind of slower at call setup. Next slide. So that's out. You could say: why don't you send out N offers? So if, for example, you knew ahead of time that you were going to talk to five different callees, you could create five different offers, allocate five different sets of candidates, send out all those candidates five times, and so on, and that would work.
H: One issue is that it uses more ports, and another is that you have to send out more candidates over signaling, and that can actually be a problem. So basically everything multiplies by N, on top of needing to know what N is. So this doesn't help with efficiency, and for some situations, where you wouldn't know N in the first place, it's kind of a non-starter. So at least for Signal's use case this was kind of suboptimal, or had difficulties. Next slide.
H: So if we decide that yes, ICE forking is definitely the way to go for call forking, which is what we decided at Signal, the question is: can this even work with the PeerConnection API? I'm using this little emoji as the representation of the peer connection because, yeah, I just thought it was cute. All right, next slide. And the answer is yes: not only can we fork ICE, but we can kind of fork peer connections, and the API for doing so is not that difficult. I was originally...
H: ...worried that it might be hairy, but it turns out it's not that bad. So, next slide. Now, I was told that, since Signal is not part of the W3C, I should be careful to explain the what and the why, but not the how. So instead of proposing a particular API, I'm simply going to give the requirements for what is needed in an API that would allow for call forking with the PeerConnection API, with ICE forking underneath. There are basically three parts. One is that you need to create an offer that can be shared between several different peer connections.
H: And second, you need to gather candidates that can be shared among several peer connections. And then the key part is that you take the first two and you apply them to a newly created peer connection. Now, the mechanism for creating these shared offers and candidates could be many different things, but the key is just that when you apply them to different peer connections, all those peer connections end up sharing the same local description and the same local ICE candidates.
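These three requirements can be modeled in a few lines. The classes below are a toy illustration (not a proposed API) of forked connections sharing one local description and one candidate set:

```python
from dataclasses import dataclass

@dataclass
class SharedLocalSession:
    # One local offer and one set of gathered ICE candidates,
    # reusable across forked connections.
    offer_sdp: str
    candidates: list

@dataclass
class ForkedConnection:
    # Stand-in for a peer connection created from shared state: each
    # fork talks to a different callee but reports the same local
    # description and the same local candidates.
    remote_answer: str
    shared: SharedLocalSession

    @property
    def local_description(self):
        return self.shared.offer_sdp

    @property
    def local_candidates(self):
        return self.shared.candidates

shared = SharedLocalSession("v=0 ... (one offer)", ["candidate:1 udp ..."])
forks = [ForkedConnection(answer, shared)
         for answer in ("answer-from-alice", "answer-from-bob")]

# Both forks share identical local state, gathered and signaled once.
assert forks[0].local_description == forks[1].local_description
assert forks[0].local_candidates is forks[1].local_candidates
```

The point of the model is the sharing: the offer and candidates are created and signaled once, however many callees ring.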
H: So, next slide. This is kind of a summary of how ICE forking leads to fast call forking, and then, I think, one more slide. Oh yeah, it's mind-blowing... okay, yeah, it's the conclusion. So the update is that ICE forking is implementable, it's not that bad; that doing call forking and ICE forking is compatible with the PeerConnection API, it's not that bad; and that adding these to the API would allow apps like Signal to do efficient multi-ring, or call forking. And a little bit of a fourth point...
H: ...just to point out, there is a simpler way. As far as Signal goes, we have put this code into production, so we're reasonably confident that it works. So it's not just theoretical: all this stuff can work in the real world, and if it's put into the browser, it might be of use to apps like Signal in a web context. That's what I've got.
B: We have a couple of use cases that require forking, but nothing as basic as just a call forking use case. So one question is whether we should add that to the use cases document, and the second is whether there is consensus to add forking support through some mechanism. I guess the first question is: would we add it to WebRTC-ICE, or to the WebRTC extensions, or both? I'm...
C: I'm a little bit confused, if you can clarify, about what happens when you do the ICE forking, right. So you fork the peer connection, but you still need to connect to a new endpoint. So what's the thing you benefit from?
H: Well, that's the difference between basically starting a new peer connection and then getting the new candidates, because you need the new candidates for the new endpoint, right. Correct. Okay, so with the PeerConnection API, yeah: if you're trying to decide what level of the API to do it at, I think it would be easier for existing applications to have it in the WebRTC extensions. In general I've always been more of a fan of lower-level APIs, but in this case I think it is easier to deal with at the higher level.
B: Here's where the issue comes up. We've had this bug filed against WebRTC, which was for the data channel in workers, and we had previously discussed that at TPACs; that was one of the reasons we went to the standalone ICE, because we felt we didn't want to have the whole peer connection in workers. But in this bug that was filed, basically people said that was what they wanted: they wanted the whole peer connection in the worker, not just data channel and ICE.
B: The nice thing about the call forking use case is that it's pretty clear: unless you're going to build an entire ORTC stack, you pretty much have to do it with the WebRTC extensions. So if we put that use case in, it basically means we're going to go down the route of adding forking to the peer connection, and then the next question would be, you know: is it possible to do data channel in workers within the peer connection, right?
D: I have a really dumb question, maybe, but what happens today if you take an offer, send it to many, and they all apply it? Or you apply the same remote offer to multiple peer connections locally? I mean, is that the actual change needed for this? I'm assuming it doesn't work today.
B: So, with the remaining time, I wanted to go over where we are within the use cases, introduce potential WebTransport use cases, and then ask the question of whether we've missed any use cases; I think there might be some stuff out there we might want to talk about. So, a little summary of where we are within the use cases. We have some improvements to existing use cases, like multi-party online games, mobility, scalable videoconferencing, and some improvements to NAT traversal...
B: ...stuff like that. And then we have new use cases: currently file sharing, Internet of Things, funny hats, machine learning, virtual reality gaming, and secure video conferencing. So that's what's in our NV use cases so far. We've had various additions that have been suggested and discussed. We've had a recent discussion about security, and that would be the trusted JavaScript use case, which we removed from the document; and then we've been talking about a semi-trusted case. I think Youenn is on the hook...
B: ...for that, although we need to define what semi-trusted is. As I mentioned, we had a bug filed, which will probably move towards NV, that we know about: data channel in workers. We've had the peer-to-peer mesh cases that Peter talked about, and we've also had three bugs that are really three new potential use cases: people have been asking for a broadcasting use case, a censorship circumvention one, and then one which involves more control over latency and acceptable loss.
B: So I'll talk about these three now. As you may know, the W3C is looking at creating a WebTransport working group, and as part of their charter they also list a use case document. It doesn't exist yet, but there were some use cases discussed at the IETF 106 WebTransport BoF, and they included some new and some existing use cases.
B: The new ones were machine learning in a client-server mode, so you were sending to machine learning in the cloud; cloud gaming; and then live streaming, which would be something like a live event, a sporting event, maybe with a chat or something. And then there were some existing use cases, which would be remote virtual desktop, again a client-server scenario, and then two which were web games and web chat; I'm not sure we know what those are yet. But this is the realm of the WebTransport use cases.
B: So, keeping that in mind, here is issue 50, which related to broadcasting, one-to-many. This is a situation where developers are trying to broadcast at large scale, and the issue mentions three things. One is the negotiation of the cipher suites; I think we've discussed this separately. Then there's having to do various machinations: it's inherently a client-server use case, I think, but apparently people are trying to use ICE over TCP. And then another thing, which I've actually heard, is a request for DRM, which WebRTC doesn't support.
B: The second one is censorship circumvention. I don't know if people are familiar with Snowflake, but apparently people are doing things like that, and also somebody is running a WireGuard VPN over the data channel; I'm not sure how you do that. But this is a situation where people have an extreme need for privacy, and they are concerned about things like safeguarding information from ICE, like not wanting the ufrag and password to be seen on the wire, or from the DTLS handshake, which can expose the self-signed cert and allow identification.
E: Maybe you already know this, but there must be some shared information you could derive from. You could write some rules about how you derive the ufrag and the password from some shared secret or some shared state, and just not have to send them over the wire at all.
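As a sketch of the idea (the derivation rules here are invented for illustration, not any standardized scheme), both peers could run an HMAC over shared state to obtain matching ICE credentials that never appear in signaling:

```python
import base64
import hashlib
import hmac

def derive_ice_credentials(shared_secret: bytes, session_id: str):
    # Both sides HMAC a session identifier with a pre-shared secret and
    # split the digest into an ice-ufrag and an ice-pwd, so neither
    # value has to travel over the wire. This is NOT a standardized
    # derivation, just an illustration of the idea.
    digest = hmac.new(shared_secret, session_id.encode(),
                      hashlib.sha256).digest()
    ufrag = base64.b64encode(digest[:4]).decode()   # >= 4 chars required
    pwd = base64.b64encode(digest[4:20]).decode()   # >= 22 chars required
    return ufrag, pwd

# Both peers, holding the same secret and session id, derive equal values.
a = derive_ice_credentials(b"pre-shared-secret", "session-42")
b = derive_ice_credentials(b"pre-shared-secret", "session-42")
assert a == b
```

The open question raised above is exactly what the shared input should be and who standardizes the rules.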
E: But I think the role of standards would be to codify how you derive them: if you had some shared information that you could then use to derive the ufrag and the password, what would the rules for doing that be? It's the kind of thing that you could do in a standards space. I mean, I'm kind of making this up on the fly, so it's probably completely off.
E: Yeah, I mean, you could say that it was in some way related to something else about the session that you already knew. I don't think it's unsolvable, and I don't think it's totally out of scope for WebRTC NV.
E: Yeah, I guess the sensitivity is whether there is really a peer-to-peer game in town for this at the moment. The moment you remove it from the WebRTC use cases list, you're saying it's fine to make it client-server, and, you know, maybe that's the right decision, but I personally don't feel that way.
E: I mean VPN-like behaviour. I'm kind of conscious that I'm in the W3C world here, so I'm going to be a tiny bit careful, but I think I presented at TPAC the idea that you could load pages from a device, a device behind NATs, a camera or something: you could load pages over WebRTC, kind of VPN-like, into your browser, without that page ever being dependent on the server. So that's peer-to-peer.
B: Okay, so I think the overall guidance seems to be to try to keep and develop this in the WebRTC NV use cases, and try to figure out what the core of it is. Also, I guess privacy could potentially be of interest to WebTransport as well, so it can go to both places. I guess the last one is an IoT security camera use case, where they were...
B: So for item one, I guess you can do retransmission and set the rtx-time SDP parameter, so you can get partial reliability; so I'm not sure this is something that's impossible with WebRTC today. And then item two: if you really want no loss at all, that seems like something WebTransport might be best suited for, so you can use a reliable stream and get your video upload that way. And this is something that's pretty popular; it's a video ingestion case.
B: Yeah, I understand why a security camera might want to make sure there's no loss (you know, somebody's robbing your store or something), yes, but it's definitely not real-time, because you'd want it to be retransmitted until the cows come home to make sure you caught the jewelry thief or whatever it was. Yes.
B: Okay, thank you, all right. So we've gone over a bunch of the NV use cases as they exist today, and we've talked a little bit about some of the potential WebTransport use cases. I'd like to pose an open-ended question to the working group, which is: have we missed anything here? And the reason I ask...
B: The second one is that in the art world there is a very big convention that occurs a couple of times a year. It's called Art Basel; it's one of the largest art shows in the world. This is the 2019 version in Miami Beach, which was held in person and most certainly will not occur again, at least not in 2020. This is one big convention center, and there are probably a hundred thousand people in there at once.
B: So here are a few things from 2019 that were interesting. A big one: there was something called virtual windows, and the idea here is, and I don't know why you'd want to do this, say you have a house and you replace a window with an LCD screen, and then you can have scenes from anywhere in the world. In this particular case, this is actually not a photograph; it's actually video.
B
It's not live, but this is Jerusalem at the Wailing Wall. You'd basically have a window in your house, and I guess you'd look at it and think you're in Jerusalem. If you're more interested in Italy, they had a window showing, I guess, a mall in Milan as well. In 2020 they won't be having this show in person, and they've been doing it online, and the big question was whether this would actually work, because they sell extremely high-priced artwork. But apparently somebody bought a multi-million dollar sculpture.
B
This year, at least, the Democratic National Convention will be all virtual, and I'm not sure exactly what they're planning to do, but I think they're interested in a fairly substantial production. So it's not going to look like, you know, people in their houses and stuff like that; they're thinking of doing something fairly elaborate. I'm not sure how you do that with a couple of thousand delegates and all that, but they're planning for it.
B
We've also seen a lot of other very large virtual events. I got an invitation yesterday for something called a national town hall, where they were planning on having upwards of 10,000 people in a meeting. And this is another example of this: there's a homelessness conference that occurs every year, and it's going to go virtual; I think they're planning to have several thousand people in it at once.
B
As you may know, many of the biggest movie theaters are potentially declaring bankruptcy, and this is a virtual movie theater application where you get to watch your movie with your friends. So it's kind of a combination of streaming and a chat, I guess. The key thing here is that it doesn't require low-latency streaming. The streaming itself can be high latency, but everyone has to see the same thing, more or less at the same time.
B
Sports teams are generally not allowed to play in stadiums around the world, and so a lot of the teams are trying weird things. I just got an email today about baseball games where they want the audience to be online and clap or applaud, or do the usual things they do during the game, and they'll pipe that into the stadium, which is empty. And then, I don't know if they'll try to pack that into the broadcast, but also the NHL and Disney are creating an interactive tool.
B
So anyway, sports as we know it, large sports, is being remade with a combination of conferencing and streaming. A number of theater companies that I belong to have canceled their seasons, but a few have tried to continue on, and one of them, a little theatre company in Tacoma, has gone entirely virtual, so they're starting to sell tickets to these virtual performances. This was a performance of Robin Hood, which I think they did over Zoom, and it was uploaded to YouTube.
B
You can actually watch it on YouTube, and you can see that they're setting the scenery using custom backgrounds, so I guess a bunch of them are in some kind of castle. They did this live, so you would actually buy a ticket and come on and watch it, and then you could see it after the fact. So it was kind of a combination of YouTube streaming, I guess, and some RTC to create the performance. And there are a number of requirements, like people constantly entering and exiting the scenes.
B
What's interesting about this one is that it's actually people's homes; they're not using custom backgrounds. This is Saturday Night Live at home; you may have seen this, people get on and they do their skits through conferencing. And this is a show called Full Frontal.
B
You may have seen it; it's a husband-and-wife team, they're off in the woods, and the husband, I guess, has experience producing and editing television, and that's Samantha Bee. On this one they do bring in guests over RTC and kind of edit this all into a production they send up, and it's pretty well done. If you watch it, in some cases you might not know that it was done at home as opposed to in a studio, because they have all their equipment and lighting and all that stuff at home.
B
Another industry that's been impacted is the music industry. Most large concerts have been canceled, and who knows when we'll have those again, so a lot of them have gone online. Most of them, as far as I know, are kind of small and produced at home. I think Garth Brooks had a concert he produced that was seen by a large number of people at drive-in movie theaters, so he recorded it at home and people were able to watch it at a drive-in in their cars. But others...
B
Virtual travel: since we can't go anywhere, it might be interesting to be able to travel virtually. I'm not sure I necessarily want an experience where I sit on a plane and have that part of it, but there are all kinds of virtual reality experiences, digital exhibits, museums; some museums have gone online and you can go in and explore.
B
B
B
B
F
Bernard, with Millicast, what we're doing today, you know, meetings in Europe, that is also with my friends. Those are much bigger audiences than that, right, even for a small audience on Millicast, which is a one-to-many broadcast by design. The auction houses, for example, run 71 auctions in parallel over ten hours, with each around 50,000 viewers in parallel. So this is the order of magnitude I think you can expect.
C
F
B
F
E
I feel there are a couple of themes in there, one of which is asymmetry. All of the things that you're talking about are asymmetric media, essentially, potentially dynamically asymmetric, and I think maybe we haven't quite paid enough attention to that in the past. And I think the other thing is, it feels like a low...
F
To add to that theme, I feel like there is a demand to reduce the gap between traditional media streaming and WebRTC. People expect to have the HD quality that Netflix can play in Chrome over WebRTC as well, and that level of quality is not in the real-time encoder. There are a few things like that; having the same media stack for both, apart from the real-time, latency side of things, is really something that is being asked for, among other stuff, yeah.
A
B
A
There's an aspect that a lot of people try to mix together: if you have low latency, you probably have synchronized delivery. But synchronized delivery, I think, is a quality that could be aimed for in a different way than low latency. You may have a long buffer, but with a global clock, making sure you have synchronized delivery, for example for sports events or things like that. But I don't think that's something we can do right now.
F
You have some people, like Phenix RTS, that have synchronized delivery; there's also a broadcast service in Europe that does that. But the problem is that synchronized delivery is a weakest-link problem, right: since you cannot accelerate people, you need to slow everybody down to the slowest member to make sure that everybody gets the frame.
F
So if there is a legal regulation behind it, for example for gambling, to make sure that everybody gets the same chance, that's fine. But otherwise, for live events, for sports, people always want to get the fastest access they can. They don't want one viewer connecting from crappy Wi-Fi or a slow connection to slow everybody else down just so that there can be synchronized delivery.
A
You could say that you have a 10-second latency for a real-time event, and then make sure that everyone is receiving the frames and playing them at the same time, with whatever buffering strategy they want for their device, to make sure that your neighbor is not screaming before you, which is usually the use case.
B
F
But to put a question to the working group on the table before we move on: I think the most pressing demand is related to the business model, right. Most of the people using WebRTC originally have content that does not have value, so they're fine. Some people want to monetize with ads; that's doable, that's fine. And then some people have very, very high-value content, and the first question is: what do you have for content protection?
F
Do you have anything like DRM for the WebRTC stack? Today we don't, even though end-to-end encryption would be a possibility, because DRM is a kind of end-to-end encryption. I think this is a problem that is not solved and that would gain value in being standardized, but I do not know; I don't think it should be in WebRTC itself. I think we should try to unify it with what is done for media.
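The end-to-end encryption mechanism this group has discussed for WebRTC operates on encoded frames before packetization (Insertable Streams / encoded transforms). A minimal sketch of the frame-transform step, using a toy XOR in place of a real cipher; the function name is illustrative:

```javascript
// Transform an encoded frame's payload byte-by-byte. The XOR "cipher"
// here is a toy stand-in for a real AEAD such as AES-GCM via WebCrypto;
// it is NOT secure, but it shows the shape of a frame-level transform:
// applying the same transform twice with the same key round-trips.
function transformFrame(payload, key) {
  const out = new Uint8Array(payload.length);
  for (let i = 0; i < payload.length; i++) {
    out[i] = payload[i] ^ key[i % key.length]; // toy cipher, not secure
  }
  return out;
}
```

In a real pipeline this would run inside a transform attached to the sender's and receiver's encoded streams (e.g. Chrome's `createEncodedStreams()` at the time of this meeting), which is distinct from DRM: the keys are held by the application, not by a content decryption module.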
F
B
A bunch of the scenarios we just described would have content protection, like some of those sports things, and if you're mixing them with a WebRTC component, then the question is: how do you do that? I guess for some of the elements you would have separate video tags, and some of them would be DRM-protected and some of them not. But anyway, I think that will come up, probably.
F
B
I've heard customers ask for it, but it's kind of mixed, in some ways, into the security model. I've heard, for example, concerns about people recording WebRTC and then using it to create digital fakes. So they want some ability to apply DRM to WebRTC content.