From YouTube: IETF100-TUTORIAL-WEBRTC-20171112-1345
Description
WEBRTC TUTORIAL at IETF100
2017/11/12 1345
https://datatracker.ietf.org/meeting/100/proceedings/
A: As part of the education team and the education work, we have the tutorials at the beginning of the week. Just before this we had the TLS tutorial, and after the excitement of TLS, the excitement of WebRTC. We found the co-chairs of the WebRTC working group ready to guide us through the work and also to show us some of the implementations that are going on. Without further ado, because we already lost some time there: over to you, thanks.
B: Thank you. Alright, yes, so I'm actually not one of the chairs. At least one of the chairs is sitting in this room, and I don't want them to get any ideas about me trying to take their spot. My name is Dan Burnett. I am the first and longest-running editor of the WebRTC specification in W3C, and I'm also co-author of the WebRTC book with Alan Johnston. What you're seeing here is actually condensed down from an eight-hour live training class that we do.
B: The slides actually have a lot of information on them, and I'm going to do what I always hate when other people do it: I'm not going to tell you everything that's on the slides, but I am going to tell you the important points on each slide, and you can read the details later yourself if you're interested.
B: Okay, here we go: real-time communication. These are the old slides, sorry. So the main thing here is that WebRTC is designed to do peer-to-peer communication, and this diagram, the WebRTC triangle, is the canonical architecture diagram for WebRTC. We have two browsers, a mobile browser on the left and a laptop browser on the right, and a web server up at the top. Now, just like with any web application, your browser is going to fetch the application from the web server and then execute it.
B: What's different here with WebRTC is that the browsers are able to set up a media communication channel directly between the two browsers. This is client to client, so it is actually a breaking of the web model that WebRTC is letting us do. Now, in order to set this up, there needs to be some sort of communication between the two browsers before the media is set up, and for that we use any kind of signaling approach you want; I'll talk more about that later.
B: In this particular example, I'm showing this one web server being used not just to serve up the HTML pages, but also to act as a relay for information between the two web browsers until they get the media connection set up. Just as an example, many of you may be familiar with SIP, and may be familiar with the SIP trapezoid, which is kind of the canonical architecture there. You can do that with WebRTC as well; there's no requirement that you actually use the same application in both cases.
B
There's
no
requirement
that
that
you're
signaling
goes
through
the
same
servers,
and
this
is
just
showing
that
you
can.
You
can
use
multiple
servers
if
that's
something
that
that's
important
and
works
for
you.
So
what's
really
being
added.
This
is
very
complex
picture
here.
Okay,
but
the
main
piece
to
look
at
is
on
the
lower
left
corner
where
it
says
browser
our
TC
function.
It's
that
lighter
blue
there.
There
are
new
controls
that
have
been
built
into
web
browsers.
B: There are new audio and video codecs, the ability to negotiate these peer-to-peer connections, and also echo cancellation and packet loss concealment. That's pretty cool, actually, that this is now in web browsers. Support for this is in all the major browsers today, except for Internet Explorer, and we'll talk more about that later.
B
So,
whoever
you
see
is
cool
because
it
uses
the
the
web
platform,
so
you
can
write
in
HTML
with
JavaScript
and
the
standards
are
defined
in
terms
of
JavaScript
API
s.
Now
we're
going
to
talk
more
later
about
some
that
you
don't
have
to
use
those
api's,
particularly
weber.
Dc
also
provides
natural
versal,
and
I
will
be
talking
about
that
quite
a
bit
in
a
moment
and
of
course,
the
codecs
that
I
mentioned.
B
One
of
the
nicest
things
is
that
it
includes
ice
and
that's
one
of
the
first
things
I'm
going
to
be
talking
about
so
okay
for
many
of
you
who
know
networking,
you're
gonna
be
just
incredibly
bored
by
this
okay,
this
is
very
high-level.
Let's
say
we
have
our
mobile
browser
and
our
laptop
browsers,
and
they
want
to
communicate
peer-to-peer
over
the
Internet
right.
Media
can
go
directly
between
those
two.
B
It
doesn't
have
to
be
relayed
to
you
know
if,
if,
if
I'm
here
and
I'm
talking
with
Alex
who's
here
in
the
front
row,
the
media
doesn't
have
to
go
all
the
way
to
the
US
and
back,
for
example.
Okay,
you
can
just
go
directly
between
the
two
of
us,
but,
as
we
all
know,
this
is
not
really
what
the
internet
looks
like.
The
Internet
actually
looks
like
this
right,
so
the
mobile
browser
on
the
left
and
the
laptop
browser
down
in
the
center.
B
Each
of
these
is
behind
an
access
point,
a
Wi-Fi
access
point
of
some
sort,
a
home
coffee
shop,
whatever,
which
has
its
own
network,
address
translation.
So
the
problem
with
that
is
that
so
I'm
showing
here
the
home
case
and
with
the
mobile
browser
we've
got
two
devices
here
with
different
IP
addresses,
but
those
are
their
pride.
Internal
IP
addresses
as
far
as
anyone
else
on
the
internet
is
concerned.
B
Their
IP
address
is
the
one
you
see
at
the
top
there,
the
203
0.11
3.4
and
not
the
192
168
addresses
so
stun
is
a
protocol
defined
here
in
the
IETF
for
basically
being
able
to
find
out
what
your
public
IP
addresses.
You,
you
send
a
query
out
to
a
stun
server
and
the
response
that
comes
back
tells
you
what
IP
address
it
sees
for
you
again,
that's
useful,
because
these
brought
these
browsers
are
going
to
have
to
communicate
with
each
other,
so
they
need
an
address
that
they
can
route
to
now.
B
There's
another
protocol,
that's
related.
When
you're
talking
about
ice
and
that's
turn.
This
is
essentially
a
media
relay.
So
this
is
the
case
where
you've
decided
oops.
We
actually
do
need
to
relay
the
media
somewhere,
hopefully
not
all
the
way
to
the
US
from
here,
but
sometimes
things
just
don't
work
peer-to-peer
the
Nats
and
firewalls
are
just
too
restrictive
for
that
to
work.
So
the
thing
that's
important
here
is
that
WebRTC
includes
support
for
this
okay.
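As a rough sketch of how an application points WebRTC at STUN and TURN, here is a minimal configuration. The server URLs, username, and credential are placeholders for illustration, not real services:

```javascript
// Sketch: configuring ICE (STUN/TURN) for an RTCPeerConnection.
// The URLs and credentials below are placeholders, not real servers.
function createConfiguredPeerConnection() {
  const config = {
    iceServers: [
      // STUN: lets the browser discover its public address.
      { urls: 'stun:stun.example.org:3478' },
      // TURN: media relay of last resort when peer-to-peer fails.
      {
        urls: 'turn:turn.example.org:3478',
        username: 'user',
        credential: 'secret'
      }
    ]
  };
  return new RTCPeerConnection(config);
}
```

The browser gathers candidates from all configured servers and ICE picks the best working pair, falling back to the TURN relay only when nothing direct succeeds.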
B: WebRTC is a very highly successful joint standards effort between the Internet Engineering Task Force and the World Wide Web Consortium, or W3C. IETF is responsible for all the protocols, of course, and many of the protocols, actually virtually all of them, already exist. The intention was not to have to create a bunch of new stuff, but we have had to create extensions in the IETF, and I'll talk about a few of those later on.
B
The
RTC
web
working
group
in
the
IETF
is
the
one
that
has
the
its
main
focus
on
WebRTC
I've
listed
some
others
here
that
are
that
have
related
activities
as
well.
So
anyway,
it's
it's
spread
throughout
the
ietf,
or
at
least
has
been
for
the
past
few
years.
The
JavaScript
API
is
are
developed
in
w3c
and
there
are
two
groups
there:
the
WebRTC
working
group
and
the
media
capture
task
force.
So
if
you
hear
RTC
web,
that's
IETF.
B
If
you
hear
WebRTC,
that's
in
w3c
and
in
general,
when
you
hear
people
refer
to
WebRTC
in
general,
they're
referring
to
the
JavaScript
API
is
defined
in
w3c,
although
what
IETF
does
is
rather
important
here
as
well,
just
briefly
on
protocols,
I'm
not
going
to
go
through
these
in
detail.
These
are
all
the
protocols
that
end
up
getting
used
at
some
point
and
in
some
way
with
WebRTC
I
do
want
to
call
out
a
couple
of
them.
B
Let's
say
I
wonder
if
this
will
give
me
a
pointer
yes,
but
you
can't
agree,
you
can't
see
it
very
well.
The
main
thing
I
want
to
point
out
is
that
media
is
sent
using
SRTP
and
that
actually
sits
here
basically
uses
using
DTLS
for
UDP
packets,
the
data
channel,
which
I'll
talk
more
about
later
uses
SCTP
and
DTLS
and
UDP.
These
are
the
ice
related
protocols
like
I
mentioned
before
and
STP
we'll
talk
about
it
a
little
bit
more.
B
So
what
this
is,
it's
really
interesting
when
I
talk
about
WebRTC,
it
turns
out
that
in
every
crowd,
typically
people
either
are
communications,
people
or
they're
web
people,
and
it's
tricky
to
create
a
tutorial
for
both,
because
you
actually
need
to
understand
some
of
both
so
I'm
going
to
talk
now
about
what's
new
for
web
developers,
there
are
two
new
API
pieces
again.
This
is
in
w3c,
one
is
capturing
of
local
media.
So
without
a
plug-in
you
can
get
access
to
the
camera
and
microphone.
B
Now
there
are
some
extra
specs
that
define
how
to
get
access
to
what's
playing
in
a
video
element
and
in
fact,
your
screen
and
window
as
well.
The
another
specification
defines
the
peer
connection
and
that's
a
term
you're
going
to
hear
a
lot.
The
peer
connection
is
what
provides
the
ability
to
transport,
both
media
and
data
between
two
client
browsers.
You
will
often
hear
me,
say:
media
peer-to-peer,
it's
media
and
data.
So
if
you
hear
me
say
that
I'm
in
most
cases
referring
to
both
okay.
So
very
briefly,
there
are
several
steps
here.
B
This
is
not
everything
to
building
a
web
RTC
application.
These
are
the
pieces
approximately
in
order
that
a
JavaScript
application
would
have
to
do
in
order
to
add
in
WebRTC
functionality.
Okay
and
the
first
one
is
getting
access
to
local
media.
That's
done
using
the
getusermedia
call
and
I'll
show
a
little
bit
more
specifics
later,
but
one
of
the
important
things
here
is
that
this
is
where
the
user
is
asked
for
permission:
I'm
not
going
to
cover
the
the
details
on
that.
B
But
I
will
show
examples
later
on
when
you
call
getusermedia
you're,
given
a
media
stream,
which
consists
of
one
or
more
media
stream
tracks
and
I'll
give
more
details
on
on
these
later.
But
basically,
you
need
to
have
local
media
in
order
to
send
it.
You
need
to
set
up
a
peer
connection
and
the
rtcpeerconnection
API
is
the
one
that
does
that
I've
listed
here.
B
All
of
the
things
they
the
pure
connection,
actually
dowser
provides
an
API
surface
for
so
a
lot
of
what
we
think
of
as
WebRTC
really
lives
here
now,
once
you
have
a
peer
connection,
that
doesn't
mean
that
any
media
is
flowing
yet
okay,
the
simplest
way
to
have
media
flow
is
to
call
add
track.
This
is
a
peer
connection
call
and
what
it
does
is
it
basically
tells
the
browser
I
want
you
to
negotiate
how
to
send
media.
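The two steps just described, capture local media and hand each track to the peer connection, can be sketched like this (a minimal illustration, not a complete application):

```javascript
// Sketch of the first steps: capture local media, then hand each track
// to a peer connection so the browser can negotiate how to send it.
async function captureAndAddTracks(pc) {
  // The browser shows its permission prompt at this call.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true
  });
  // addTrack tells the browser: negotiate sending this track.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }
  return stream;
}
```

Calling addTrack does not itself start media flowing; it makes the track part of the next offer/answer negotiation.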
B
One
thing
that
I
didn't
show
in
this
diagram
is
the
signaling
channel
and
you'll.
Hear
me:
I
talked
about
this
at
the
very
beginning.
The
signaling
channel
is
just,
however,
you
send.
However,
you
communicate
your
low-level
session
description
information
before
you
have
a
peer
connection
going,
so
it's
not
standardized
okay.
This
is
not
officially
part
of
WebRTC,
but
you
cannot
build
an
application
without
setting
up
some
means
for
relaying
this
information
from
one
browser
to
the
other.
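Since the signaling channel is application-defined, here is one possible shape of it: a thin wrapper over a WebSocket to a relay server of your own. The URL and the JSON message format are entirely this sketch's assumptions, not anything WebRTC specifies:

```javascript
// One possible (non-standardized) signaling channel: a WebSocket to an
// app-provided relay server. URL and message format are assumptions.
function openSignalingChannel(url, onMessage) {
  const ws = new WebSocket(url);
  ws.onmessage = (event) => onMessage(JSON.parse(event.data));
  return {
    send: (msg) => ws.send(JSON.stringify(msg)),
    close: () => ws.close()
  };
}
```

Anything that delivers opaque text between the two pages works equally well: long polling, a chat service, even copy and paste.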
B: Okay. So, in a little bit more detail here: navigator.mediaDevices.getUserMedia is how you get access to cameras and microphones, and there is the permissions check. Now, once you have that media, there's a new attribute that was added to video and audio elements. We actually had to add this ourselves in the WebRTC specifications initially, but it then was moved into HTML itself.
B
So
the
point
here
is
that,
instead
of
just
setting
the
source
of
a
video
element
to
be
a
URL,
you
can
now
call
you
can
set
the
source
object
to
be
your
media
stream
directly.
Okay,
so
it's
not
going
to
fetch
a
resource,
that's
at
the
that's
referenced
by
URL.
Instead,
it's
going
to
use
a
local
local,
local
media
stream.
You
can
also
create
new
media
streams
as
well.
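Put together, capturing the camera and attaching the stream to a video element via srcObject looks roughly like this (a minimal sketch; the element is assumed to be an ordinary `<video>` in the page):

```javascript
// Sketch: capture camera/microphone and attach the resulting
// MediaStream directly to a <video> element via srcObject.
async function showLocalPreview(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true
  });
  // No URL fetch: the element plays the local stream directly.
  videoElement.srcObject = stream;
  await videoElement.play();
  return stream;
}
```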
B
So,
just
briefly
in
the
in
the
model
that
we
have,
of
course
you
can,
you
can
load
content
from
files
or
whatever,
but
what's
really
being
added
here?
Is
the
ability
locally
to
get
access
to
the
camera
and
microphone
now,
interestingly,
the
peer
connection
is
actually
both
a
source
and
a
sink
for
media,
because
you
can
send
media
over
a
peer
connection,
in
which
case
when
you're
sending
it
it's
obviously
a
sink
when
you're
receiving
it
it's
a
source.
B
So
here
are
a
couple
of
examples
of
what
what
the
permissions
prompt
looks
like
when
you
call
getusermedia
you're,
going
to
be
asked
whether
you're
giving
permission
to
use
the
camera
and
microphone
or
in
the
case
of
firefox,
they
tend
to
allow
you
to
select
the
you
know
which
device
you
want
to
use,
but
in
either
case
once
it's
set
up
both
browsers.
Allow
you
to
change
which
device
you're
using
if
you
have
more
than
one.
B: But it's something that's important to keep in mind when you're building an application: at least the first time someone visits your site, they're going to have to agree to this permission prompt. Okay, so a MediaStreamTrack, which we'll often just call a track, is a handle to one flow of one kind of media: audio or video. A MediaStream is a collection of tracks, and the tracks do not all have to be of the same type, so you can have, say, an audio and two videos.
B
For
example,
one
of
the
other
things
that's
interesting
about
a
media
stream
is
that
the
intent
one
of
the
intents
of
a
media
stream
is
that
the
collected
tracks
are
to
be
kept
synchronized
to
the
best
ability
of
the
browser
and,
if
you
think
about
it,
if
you
have
a
camera
on
a
person
and
a
microphone,
you
do
want
to
make
sure
that
the
audio
that's
coming
out
matches
the
movement
of
their
lips.
Okay,
so
the
browser
works
pretty
hard
to
actually
maintain
that
as
much
as
possible.
B
It's
not
possible
to
guarantee
this
in
all
cases,
guarantee
synchronization,
because
if
your
media
came
from,
let's
say
several
different
peer
connections
into
your
current
browser,
they
may
all
have
different
clocks
and
it's
not
clear
exactly
how
you
would
synchronize
them.
But
again
the
browser
will
do
what
it
can.
B: So I'm going to jump around just a little bit, because I had reordered these, so I'll come back to this in a minute. I want to talk about SDP offer/answer. Again: how do you get the information about which codecs to use, any bandwidth information, perhaps candidate addresses at which one browser can find the other browser? These things all live in something called SDP.
B
We
used
this
session
description
protocol
just
doesn't
sit
or
approximately,
as
in
sip,
to
describe
the
media
session
at
the
protocol
level.
Stp
is
very
widely
used
in
sip
systems.
Today,
the
use
of
STP
in
the
in
the
browser
for
WebRTC
is
defined
in
the
JavaScript
session.
Establishment
protocol
and
I
had
added
a
link
to
that
in
the
other
slides.
Basically,
if
you
search
for
that,
you'll
find
the
the
drafts
that
describe
this.
B
So
with
respect
to
s
with
respect
to
WebRTC
this
protocol
level,
configuration
information
is
just
treated
as
a
blob.
Okay,
the
it's
not
expected
that
the
application,
the
JavaScript
application
will
ever
have
to
look
at
it.
So
now,
let's
just
take
a
look
at
what
the
application
does
have
to
do,
though,.
B
And
yes,
I'll
come
back
to
that
other
slide.
So
here
we
have
our
two
browsers
in
the
in
the
center
down
here
and
what
we
want
to
do
is
actually
set
up
the
de
pere
connection
and
and
make
it
so
that
media
can
flow
between
the
two.
So
this
browser
here,
the
one
on
the
Left,
calls
create
offer
and
is
given
an
SDP
blob.
B
So
what
this
browser
then
does
is
it
calls
set
local
description
with
that
offer.
So
what
is
returned
here
from
create
offer
is
an
offer
it's
this
STP
blob,
so
the
application
needs
to
tell
the
browser.
Yes,
this
is
what
I'm
using
locally,
but
it
has
to
do
something
else
to.
It
is
the
job
of
the
application
to
send
this
offer
over
the
signaling
channel
to
the
other
browser,
and
then
the
other
browser
takes
the
same
thing.
B: Now, once that's happened, this side can call createAnswer to generate the answer for this offer, and once the right side has the answer, which is right here, it calls setLocalDescription. Just like the one over here did with the offer, this one's doing it with the answer. And then, just like before, the application is responsible for sending this answer over the signaling channel to the other browser.
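The offer/answer exchange just walked through can be sketched in JavaScript as follows. The `signaling` object stands for whatever app-defined channel you use (anything with a `send` method); its message shape here is an assumption of this sketch:

```javascript
// Sketch of the JSEP offer/answer dance. `signaling` is any
// app-defined channel with a send() method (not standardized).

// Caller side: create and apply the offer, then relay it.
async function startCall(pc, signaling) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer); // "this is what I'm using locally"
  signaling.send({ type: 'offer', sdp: offer.sdp });
}

// Callee side, on receiving the offer: answer it.
async function answerCall(pc, signaling, remoteOffer) {
  await pc.setRemoteDescription(remoteOffer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send({ type: 'answer', sdp: answer.sdp });
}

// Caller side, on receiving the answer: apply it.
async function completeCall(pc, remoteAnswer) {
  await pc.setRemoteDescription(remoteAnswer);
}
```

Once both sides have applied both descriptions (and ICE finds a working candidate pair), media can start to flow.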
B: I just want to mention here that there are a number of extensions that were made to SDP in order for WebRTC to work. One is BUNDLE, and this is particularly interesting because, as we worked on it, we found that basically everybody had been wanting to do this for a long time anyway.
B: Here's one example. If we're signaling using SIP, we've got some SIP library in JavaScript here, and we have the web server as usual that the HTML application comes from; but then, for the signaling, the library that we have is set up to contact a SIP proxy or registrar server, just as you would normally for SIP. The only thing that's really different here is that the SIP signaling information is sent using the WebSocket protocol, and this is not a new RFC.
B
That's
one
of
the
other
things
in
there
in
the
update
this
has
been
around
for
a
while
now,
but
this
RFC
defines
sip
Transport
over
WebSockets.
So
this
is
what
you
would
do
if
you
have
an
existing
sip
infrastructure
and
you
just
want
to
use
the
WebRTC
front-end.
Essentially,
there
are
a
variety
of
libraries
that
you
can
use
to
do
that.
B
Okay,
briefly,
for
audio
the
browsers
are
required
to
implement
both
opus
and
g.711
opus
was
developed
here
in
the
IETF.
It's
a
great
codec
sounds
wonderful.
G.711
works!
That's
what
I'll
say
about
it!
The
browser
is
also
expected
to
support
DTMF,
which
are
basically
the
you
know.
The
events
you
get
when
you
press
the
buttons
on
it
on
a
keypad
for
video.
The
requirements
are
h.264
and
vp8.
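The DTMF support mentioned above is exposed through the `dtmf` property of an audio track's RTCRtpSender. A minimal sketch of sending digits into an established call (the helper name and digit string are this example's own):

```javascript
// Sketch: sending DTMF digits on an established call. RTCDTMFSender
// is reached via the `dtmf` property of an audio RTCRtpSender.
function sendDigits(pc, digits) {
  const audioSender = pc.getSenders()
    .find((s) => s.track && s.track.kind === 'audio');
  if (audioSender && audioSender.dtmf) {
    audioSender.dtmf.insertDTMF(digits); // e.g. '123#'
  }
}
```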
B: Just briefly, I want to talk about the data channel. Again, it uses SCTP. Basically, with the data channel we wanted to make sure that we had congestion and flow control for an arbitrary data channel, and so that's why we use SCTP on top of DTLS and UDP. I understand that some of the design of QUIC, or at least some of the motivation for how QUIC was designed, came from experience with building the data channel in WebRTC.
B
So,
from
a
JavaScript
standpoint,
it's
one
of
the
peerconnection
API
is
you
call
create
data
channel
and
what
you
get
back
is
something
that
has
a
send
method
and
an
on
message.
You
can
set
an
on-message
handler
by
default.
Data
channels
are
bi-directional,
but
you
can
change
that
and
you
can
send
any
any
string
or
any
of
the
various
array
buffer
types.
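As a small sketch of that API, here is a data channel used for text chat (the channel label and handler names are this example's own choices):

```javascript
// Sketch: a bidirectional data channel. Strings and ArrayBuffer-like
// payloads can be sent; SCTP provides congestion and flow control.
function openChatChannel(pc, onText) {
  const channel = pc.createDataChannel('chat');
  channel.onopen = () => channel.send('hello');
  channel.onmessage = (event) => onText(event.data);
  return channel;
}
```

The remote side receives the channel via the peer connection's `ondatachannel` event rather than creating its own.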
B
So
very
briefly,
about
security.
There's
a
lot
that
could
be
said
about
this.
Your
media
is
always
encrypted
in
WebRTC,
we
actually
mandate
the
use
of
SRTP.
This
was
a
contentious
point,
but
the
world
is
used
to
it
now.
So
just
just
expect
that
and
this
I
actually
got
rid
of
these
slides
because
they're
just
very
confusing.
There
is
a
mechanism
now
in
the
specification,
an
identity,
proxy
mechanism
and
essentially
the
SRTP,
is
great,
because
you
know
that
your
media
is
protected
end
to
end
from
browser
to
browser.
B
But
how
do
you
know
that
you're
actually
talking
to
the
person
you
want
to
be
talking
with
on
the
other
end?
Okay,
the
way
WebRTC
deals
with
this
is
by
allowing
you
to
use
third-party
identity
services
like
Facebook
Connect,
for
example,
to
to
log
in
to
prove
who
you
are,
and
basically
interaction
with
that
proxy
server
is
used
to
tie
your
identity
in
with
the
DTLS
keys
used
for
the
communication,
and
so
I
don't
want
to
say
more
than
that
about
it
again.
B
New
low-level
controls
in
WebRTC
and
going
forward
we're
eventually
going
to
only
be
recommending
the
use
of
those
the
reason
I
stuck
it
at
the
end
here
is
because
they're
not
implemented
anywhere
yet,
even
though
the
specification
that
you
know
assumes
that
everyone
is
implementing
them
today,
so
I'm
gonna
walk
through
an
example.
Here
we
have
one
browser
one
and
browser
to
browser.
B: Well, the browser tries to do smart things, but a number of developers have decided they don't trust the browser to do what they want it to do; that's why we're getting some new controls. What's recently been added is RTCRtpSender and RTCRtpReceiver, which we just call senders and receivers, and they are handles to the outbound and inbound RTP streams. So for every track that's sent or received over a peer connection, there's an associated sender or receiver, and that gives you direct access to those particular streams.
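One concrete thing these handles enable is per-track control of the outgoing RTP. As a sketch (the helper name is this example's own), here senders are used to cap the send bitrate of every outgoing video track:

```javascript
// Sketch: senders are per-track handles on outbound RTP streams.
// Here we use sender parameters to cap the video send bitrate.
async function capVideoBitrate(pc, maxBitrateBps) {
  for (const sender of pc.getSenders()) {
    if (!sender.track || sender.track.kind !== 'video') continue;
    const params = sender.getParameters();
    params.encodings = params.encodings || [{}];
    params.encodings[0].maxBitrate = maxBitrateBps;
    await sender.setParameters(params);
  }
}
```

Receivers give the matching read-side access to inbound streams (for example, their contributing sources and stats).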
B: And I modified the diagram a little bit. Actually, each browser has a transceiver for each m-line, so there would be one, two, three, four transceivers that this browser has, and likewise for this browser; and this transceiver has pointers to this sender and, well, you know, this receiver, which doesn't have anything in it. Okay.
B
Just
briefly,
the
status
of
the
I
want
to
talk
about
the
status
of
the
API.
So
again
the
JavaScript
API
czar
being
standardized
in
w3c.
There
are
two
main
specifications:
the
WebRTC
spec
and
the
media
capture
and
streams.
Spec
both
of
them
are
now
a
candidate
recommendation
stage.
The
WebRTC
spec
just
went
to
candidate
recommendation
stage
a
couple
weeks
ago,
so
WebRTC,
unfortunately
I've
had
this
bullet
up
for
a
while.
B
When
I
give
this
presentation
core
is
stable,
just
cleaning
up
edge
cases
now,
so
we're
still
cleaning
up
edge
cases,
even
with
it
being
at
candidate
recommendation,
but
we're
getting
close.
What
we
really
need
is
implementations
for
the
media
capture
and
stream
spec
yeah.
It's
it's
been
closed.
For
a
long
time
again,
we
need
some
stuff
implemented
for
the
IETF
protocols,
I'm
not
going
to
try
to
list
all
of
them.
B: Cullen Jennings maintains a dependencies draft, which is always interesting to look at to see just how many specifications there are that WebRTC depends on, and then, of course, for the individual groups you can follow their progress as you normally would. Just briefly, there are tools that have popped up out in the world to help with WebRTC. There are a variety of libraries; I'm not going to list them here. I mentioned the SIP signaling libraries; there are several of those. There are hosted signaling services.
B: There are also some higher-level APIs. If you just don't want to code all of the stuff that you've seen, you can use some of these simpler APIs. Fun links: you can look those up on your own if you're interested. And I thought it'd be good to talk about this; I almost wanted to start with this one: Facebook chat, Google Hangouts and Duo, and Amazon Mayday all use WebRTC.
C: Thank you, Dan. I think that was shorter than usual, because I've seen this presentation done over eight hours before. I think it's still very interesting, and it's very difficult to actually condense six years of work and multiple specifications into 45 minutes, so you did a great job there. I'm going to try to do that in just one slide, so it's an even bigger challenge, and then I'll go through everything
C: you said in the next two slides, right, just to repeat everything. But first: my name is Alex Gouaillard. I'm actually living and residing here in Singapore, and I participate in both W3C and the IETF, mainly on WebRTC, but also on browser testing and tools, WebDriver, and things like that. So I hope you're going to enjoy having the IETF in town; we're very happy it's here.
C: So if you take a satellite picture of the specifications, even though there are more than 20, the most important ones are listed here, and the good news is that all of them went into the latest stage of recommendation in W3C or are already specifications in the IETF. I think this is the first IETF where we only need one session of RTCWEB and not two, as we used to, so that's great.
C
That
means
consensus
is,
is
is
reached
for
everything
that
is
the
specification,
but
that
doesn't
mean
that
the
implementation
actually
is
at
the
same
level.
So
you
went
through
the
specification.
Let's
look
a
little
bit
about
the
implementation
during
the
lifetime
of
web
RTC,
there
was
really
two
pivots
and
for
people
that
want
to
have
all
the
gory
details.
There
is
a
fantastic
block
by
the
the
WebRTC
Mozilla
team.
That's
going
to
give
you
in
detail
the
list
of
all
to
SBI,
so
please
go
back
to
the
slide
later.
Take
a
look
at
that.
C
That's
really
informative
and
I'll.
Explain
to
you
why
today's
some
website
use
some
kind
of
API
right,
a
set
of
API
and
other
user
different
subsets,
so
long
story
short.
You
can
see
free
main
stage
where
at
the
beginning,
you
were
speaking
about
streams,
so
you
could
get
a
stream
from
your
camera
with
audio
and
video
inside
anywhere
passing
a
string
right,
but
stream.
Sorry,
and
then
we
went
into
a
track
model
where,
instead
of
saying
you
know,
I
just
want
to
stream.
Actually
I
would
like
to
do
track.
C
So
one
use
case
that
make
us
think
like
that
was
the
webcam
plus
screen
sharing.
You
know:
I'm
presenting
a
PowerPoint
I
want
to
stream.
My
PowerPoint
and
I
want
to
stream
my
face
in
my
voice,
then
I
will
need
two
video
tracks
and
one
audio
track,
and
that
was
getting
complicated
with
the
stream
API,
among
other
things
that
were
complicated
with
the
stream
API
another
use
case
very
simple
I'm
using
a
mobile
and
I
want
to
switch
during
a
call
between
the
front
camera
and
the
back
camera
right.
This.
C
This
video
is
the
same
time
might
be
not
the
same
resolution,
but
there
is
no
reason
why
it
will
be
complicated,
and
so
we
came
with
a
replace
track.
Api
that
was
specifically
dedicated
to
to
that
use
case
when
you
want
to
switch
to
two
tracks
of
the
same
type
in
that
case
that
the
video
type,
while
keeping
the
rest
without
going
through
an
entire
cutting
the
call
renegotiating
the
call
and
restarting
the
call.
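That camera-switch use case can be sketched as follows: grab the other camera, then swap the outgoing track on the existing sender with replaceTrack, with no renegotiation (the helper name and constraint shape are this example's own):

```javascript
// Sketch of the camera-switch use case: swap the outgoing video track
// via replaceTrack, without renegotiating or restarting the call.
async function switchCamera(pc, deviceId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } }
  });
  const [newTrack] = stream.getVideoTracks();
  const sender = pc.getSenders()
    .find((s) => s.track && s.track.kind === 'video');
  if (sender) await sender.replaceTrack(newTrack);
}
```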
C
Eventually
we
brought
in
a
more
fine-grain
API
like
that,
and
we
wanted
to
make
the
the
glue
with
with
the
SDP
and
all
the
exchange
real
format.
We
were
using
the
signaling
format
at
that
time
and
that's
where
the
trends
ever
gain
and
transceiver
has
a
lot
of
very
good
application,
not
lose
that
you're
gonna
see
directly
in
in
an
application.
C
You're
gonna
write
yourself,
there's
not
a
lot
of
new
API,
but
he
will
alow,
for
example,
for
a
call
to
successfully
go
through
in
a
tenth
of
a
time
it
took
right,
so
we
call
that
click
to
earn
to
first
media,
when
you
click
accept
the
call
button.
How
long
does
it
take
before
you?
Actually,
in
the
call
and
start
talking
with
someone
with
the
original
version
that
will
be
one
to
two
seconds
with
full
eyes,
then
we
bring
trickle
eyes.
C
We
bring
that
down
to
100
something
milliseconds
with
the
things
that
add
transceiver
and
the
new
API
will
bring
in.
You
will
be
able
to
go
close
to
the
experience
you
get
with
a
real
phone
call
today,
which
is
ultimately
what
we
want.
There
is
no
difference
between
a
call
over
the
internet
and
a
call
with
a
real
telephone
line,
no
difference
between
voice
and
audio
right.
So
if
you
look
at
this
free
generation,
not
all
the
browser's
are
equally
advanced
and,
of
course
the
specification
was
still
in
flux.
C
So
if
we
go
to
the
next
slide,
we
can
see
the
what
was
presented
by
Chrome
last
week.
So
last
week
was
the
technical
plenary
meeting
of
w3c
dan
and
I
had
the
choice
to
participate
there
and
flew
in
this
morning
and
chrome
presented.
That
line,
which
is
as
far
as
we
know,
the
latest
state
of
the
art
status
of
way
about
his
implementation
in
chrome,
so
get
stuck
track,
constrain
receiver.
C
Lastly,
a
unified
plan,
which
is
the
standard
way
of
doing
simulcast
a
good
way
to
actually
adapt
the
resolution
in
case
of
bad
bandwidth
or
a
lot
of
people
joining
a
call,
is
still
using
Plan,
B
or
old
version
in
Chrome,
and
it's
blocking
a
little
bit
the
standardization
effort
where
Firefox
is
already,
for
example,
using
the
the
real
unified
plan
according
to
Chrome
that
should
be
completed
by
the
end
of
this
year
or
first
quarter
next
year
on
Firefox
is
little
bit
simpler.
Everything
is
there
except
transceiver
and
I.
C
Safari
is
interesting.
Safari
was
a
little
bit
late
to
the
party,
but
that
gave
us
an
advantage.
They
didn't
have
any
historical
implementation
of
WebRTC
based
on
the
older
spec,
so
they
directly
went
for
the
latest
version
of
the
spec.
So
the
directly
went
into
the
track
and
not
the
stream
generation
of
the
API.
Unfortunately,
they
based
on
the
Google
code,
so
it
was
too
complicated
to
switch
to
unify
plan
and
still
using
Plan
B.
C
There
are
a
few
other
things
that
are
still
improve,
above
because
there
are
the
latest
implementation
of
off
of
WebRTC,
but
one
year
ago
Safari
was
not
supporting
WebRTC.
So
today
we
have
all
the
major
browsers
that
support
web
RTC,
including
Safari
on
iOS,
and
that's
really
a
big
step
forward
right
edge.
C
Well,
it's
difficult
to
answer
for
Microsoft.
They
have
different
approach
depending
on
the
product
and
will
you
ask
there's
one
thing
we
know
for
sure:
Internet
Explorer
will
never
get
web
RTC.
They
want
people
to
move
away
from
with
all
the
old
windows
and
get
into
Windows
10,
so
they're,
putting
all
the
latest
API
on
edge
if
they
think
that
if
they
put
the
WebRTC
API
in
Internet
Explorer,
that
will
give
a
they
will
not
give
enough
incentive
from
people
to
move
away
from
windows
7
edge
as
to
implementation.
C
Today,
one
is
based
on
a
RTC
with
a
polyfill
on
top
and
another
one
that
is
the
web,
RTC
1.0,
but
first
generation,
which
is
the
ad
stream
and
and
removed
stream
implementation.
And
finally,
they
have
a
third
implementation
for
native
that
supports
web
RTC
and
ease.
As
far
as
we
know,
we're
not
told
me
last
week
the
the
most
advanced
we
support
for
vp9
and
things
like
that,
for
example.
So
originally
the
biggest
problem
with
the
edge
was
that
they
were
only
supporting
a
very
proprietary
codec
called
h.264.
C
You
see
that
was
the
SVC
flavor
of
264
used
by
Skype.
Now
it's
getting
much
better
h.264.
The
vanilla
h.264
is
in
vp8
as
well.
Vp9
with
SVC
is
coming
in
some
of
the
3
different
stacks.
They
have
at
Microsoft.
So
here
again
slowly
by
the
end
of
the
year,
things
will
be
interoperable
enough,
so
you
can
initiate
a
call
with
edge
and
receive
a
call
from
any
other
browser
with
at
least
one
codec.
That
will
be
understood
by
each
side.
C
So
w3c,
unlike
IETF,
require
that
test
exists
before
you
can
become
a
standard.
So
we
had
compliance
test
which
is
test
of
the
JavaScript
API.
That's
very
interesting
for
web
developer
because
you
can
check
if
not
only
the
API
exists,
but
she
behaved
according
to
the
spec
or
not,
and
so
we
were
not
close
to
consensus
last
year,
so
not
a
lot
of
tests
with
written
and
they
were
written
mainly
by
people
that
were
volunteering
their
time
Herald
over
there
myself
and
someone
called
Dominic
from
the
w3c,
where
basically
the
free
weekend
contributors.
C: What's interesting is not the number of tests themselves, but the coverage: only 10% of the spec was really tested one year ago, and nowadays, when we did another check last week, we were up to 70 percent. So we went from 10 to 70 percent, all thanks to the effort from people here in Singapore working on the specs. Next slide. Again, what does that mean if you're web people?
C
Well,
if
I'm
a
web
developer
I
have
two
choices:
I
can
look
at
this
back
there,
beautiful
I
really
want
what
the
spec
is
selling,
but
the
implementation
in
a
browser
is
not
there.
So
I
need
to
make
a
practical
choice
and
that's
where
things
like
jQuery
came
in
and
that's
where
some
polyfill
and
some
adaptors
are
also
coming
for
WebRTC.
So
if
you
had
to
choose
between
the
browser
you
look
for,
you
could
use
those
test
suite
to
see
which
one
is
the
most
compliant.
So
in
June,
2017
Safari
was
the
most
compliant.
C
Oh,
my
god
that
was
a
shock
for
Firefox.
Well,
they
were
not
the
most
compliant
by
by
a
lot,
and
still
50%
of
the
tests
were
fairly
right.
So
those
tales
are
very
fine
granularity
and
you
have
to
take
them
into
account.
I
think
the
presentation
we
did
before
in
terms
of
further
or
second
generation
and
where
they
are
is,
is
giving
you
a
better
idea.
So
Firefox
is
great
at
getting
the
latest
thing
in.
They
have
almost
everything
there.
C
We
want
to
test
the
latest
thing
test
test
it
against
Firefox,
Chrome
and
Safari
are
not
far
away
behind
and
you
can
expect
everything
to
stabilize
by
quarter.
1,
2018
and
I
know
you
heard
that
before,
but
before
the
specs
were
still
in
flux,
now
they're
in
candidate
recommendation,
which
means
there
will
no
be
any
change
in
the
API.
So
in
the
object
that
you
can
use
in
the
browser,
so
the
only
difference
now
is
whether
they
implemented
or
not
right.
C
So
that's
that's
the
situation
as
of
today
to
be
able
to
test
and
to
make
the
things
better.
The
browser
vendors
beyond
the
usual
JavaScript
testing,
also
implementing
the
new
new
new
test
tools,
especially
one
kite.
That
was
developed
here
in
Singapore
for
all
the
browser
vendors
to
actually
test
specifically
web
RTC.
In
the
cases
where
having
only
one
browser
at
a
time
is
not
enough.
Nowaday,
the
JavaScript
API
is
pretty
ok,
everybody
know
what
they
need
to
do
to
make
it
work.
C
But
what
doesn't
work
is
when
many
people
try
to
connect
to
each
other
edge,
try
to
connect
to
Safari
in
the
same
cola
with
someone
on
Chrome
on
different
operating
system.
That's
not
something
we
have
as
a
standard
committee
to
test
before,
and
we
didn't
have
a
lot
of
visibility
in
that
and
that's
where
we
were
getting
most
tickets
from
from
the
users.
So
now
we
developed
a
specific
tool
for
that
that
is
going
to
be
run
every
day
to
test
into
probability
between
a
matrixes
of
different
browser,
revision,
operating
system
and
so
on.