From YouTube: WebRTC Working Group Meeting at TPAC 2023
Description
See https://www.w3.org/2011/04/webrtc/wiki/September_12_2023 and https://www.w3.org/2023/09/12-webrtc-minutes.html
A
It's also been used for a lot of things that are not video calls; contrasting, for instance, the way for recording, there are other explorations available on the web platform and so on. But apart from the usage that was mentioned of web products, these have not achieved significant performance, so, same again.
A
That's a pain to manage, but it works; it's possible to say that this is the true standard, and this is.
A
We have the Weber well, which I've been thinking in the key direction; I've met where the government didn't quite fit, so I guess I'm going into the library's extension system. You might want to kill off the legacy APIs at some point. The document at some points: the need is there, but the direction is different.
A
Sorry about that. And, of course, screen capture, which is now in progress in the newly established SCCG, so we haven't been updating it here. Now we have a bunch of things that haven't changed very much.

Did you get a chance for me? They got a record from Elemental image. Priority competence, as they say, necessity has been implemented; that's a good thing, so they haven't changed it much. And lastly,
A
And I was wondering, initially, for them, and moving to media capture extensions, or tag them as broken for contact recognition, do something to actually, okay, let's put some info to find the right next.
D
Yeah, so let me first maybe reply to your question, and then I'll bring up the documents I wanted to bring. So yes, I agree with you: we need more triaging efforts on media capture main. Without putting him on the spot, I think Henrik has that in his to-do list, and I agree, some of them might be moved to V2. Some of them might be closed as no longer relevant, but that's work that needs to happen before we can move to the next stage.
D
Yeah, so in several of these repos, we received comments from the community asking about progress on feature requests that haven't received attention, which to me seems uncomfortable. I think part of it is that indeed we don't have anyone that feels responsible, or otherwise accountable, for those, but yeah, I would prefer if we didn't fall into complacency for these repos and made sure that we had clear drivers for each of them.
E
So hello, Jan-Ivar here; just wanted to chime in that, as far as I know, for media capture main there are no significant outstanding issues, with the recent removal of the device IDs for permissions. So if anyone feels that there's a significant issue, please state so in the issues; otherwise, well, I'm late in doing a triage run of those issues.
F
Okay, thank you, everybody. This is Bernard, and various slides will be handled by Harald and other folks. So, just picking up from where we left it in May and July: there were a few open questions that I'd like to get answers from the group on. We did talk about these in July, but one was the relationship between this doc and explainers. Next.
F
And the question was whether explainers should link to the use cases, which potentially could clarify whether an API proposal is solving a use case. My recommendation, that I'd like to get some feedback on, is that explainers and API proposals can link to the use cases, but that this not be required.
F
So the question is: should a use case link to related issues? The advantage is that, with links, you could link the requirements to specific issues which require resolution. We do do that in one or two cases, basically linking to issues raised in the CfC or to the CfC summary, but it doesn't occur in every case; only when we have requirements that are still in progress, to make clear what they are. Any objection to this?
F
And I think we'll see in a minute why that might be helpful, but it just helps to get the state of the proposal. Okay, the last one is the relationship between this doc and API proposals.
F
Next. So here we have an API proposal, and should a use case link to those proposals? That could clarify whether a use case has proposals, and links from a use case to a proposal let you track progress. The recommendation here is: API proposals can link to the use cases, but we shouldn't require the use cases to link to the API proposals.
A
Thanks. So, the issues that we had on paper stuff; but if we want the use cases to be very dynamic, that is this number.
F
Okay, all right, now on to the next slide and the actual extended use cases. So we're going to talk about two main sections of the document today. One is 3.6, funny hats, with a little bit of implementation feedback, and then we're going to talk about section 3.2, which is the low latency streaming section.
F
Next. So I guess, Harald, this is your slide.
A
Skin tone attachments most.
A
Outline face offline for background replacements or, in my post, the optional background work, either in this social device or even at endpoints, in order to make sure that none of these have only one of these supply.
E
Hi, Jan-Ivar here. Sorry, did I mess up? I can't seem to find the raise-hand symbol in Zoom; maybe I'm missing it, but, reactions.
E
Oh, thank you. Yes, just a quick question: for funny hats, I mean, the SDP negotiation seems to assume that we need to modify metadata in frames, for the purposes of adding metadata.
E
Another
approach
might
be
to
add
some
kind
of
synchronization
info
to
data
channels,
for
example.
So
that's
the
two
ways
to
add
metadata,
so
I'm
wondering
if
that
presupposes
a
solution,
but
some
detail
under
the
actual
use
cases
that
would
require
metadata
on
frames.
I
think
would
be
helpful.
F
Oh yeah, so the next step, I think, is to get a PR to articulate the requirements for this. For the minutes, I think.
F
So, just to bring everyone up to date: we have section 3.2, which has two major use cases, one on game streaming and the other on low latency broadcast with fan-out. We had a CfC on this, originally starting in January, and it was summarized, and we did have consensus for these use cases, but we had some issues, most of which have been closed. There are two open, so we're going to try to get closure on those two today.
F
So, as we mentioned, the first of the use cases in section 3.2 is game streaming, and this is a summary of the requirements we have so far: basically, N15, 36, 37 and 38.
F
Next slide. All right, so, as I mentioned, we have two major open issues. One is issue 80, the other is issue 103, and then we have a PR to talk about, which is to clarify some of the things in, I think, the game streaming requirements, which we will have some further discussion on. Okay, next.
F
So, a little bit about issue 80. What is issue 80 about? Well, one of the things in gaming is that things like spatial audio are becoming interesting, and to do these new codecs, people need access to the raw audio data. They are doing this today in WASM.
F
There's
a
couple
of
examples
to
that,
but
there's
they're
doing
this
using
a
hidden
support
for
l16
and
browsers
rather
than
official
mechanisms,
and
so
pippo
suggested
that
an
issue
be
opened
and
one
was
was
opened
in
issue
80
to
figure
out
if,
if
there's
something
that
should
be
in
the
standard,
rather
than
this
kind
of
hidden,
hidden
support,
l16,
which
is
actually
quite
popular,
and
so
this
is
the
discussion
we've
had
so
far.
F
As I mentioned, this kind of hidden functionality, the L16, is quite popular, and it is important for gaming, for things like spatial audio, and Harald just talked about some other aspects of this. So I would like to get feedback from the group about where we should go to address this raw-audio-data requirement, which I think is probably real for game streaming. Harald?
F
Yeah, I think, if we go back to the original issue, it's more of a sender-side thing than a receiver-side thing, but there are some receiver-side requirements, because it's going to be a different codec, like you've mentioned.
A
Yeah, so the.

A
Right, the title is not right: access to the data, that's not the target. The target is pluggable codecs.
F
I couldn't really hear that very well. What are you saying? I guess one big question is: does this issue need to be addressed for this use case, this low latency one, or does it go somewhere else, or?
J
This is Peter; I raised my hand, I don't know if it's seen.
A
To create a new use case that is specifically for running your own audio codecs, which needs to specify what the required behavior is: okay, what we did and why we need it.
A
So, whenever you can, think of sketching out the use case for pluggable codecs, yeah.
F
Okay, next slide.
J
Sorry, sorry, before we move on: is that just an audio thing, or should that also encompass video?
F
It is just audio, as it related to the comment on the streaming use cases. Although, yeah, assuming, for example (HEVC is quite popular in streaming) that that eventually gets added to WebRTC, I think that would have been the major thing that people would have tried to do a custom codec for, but that was about it.
F
A lot of the main things for audio are this spatial audio, and then also AAC; for the low latency streaming use case that's pretty popular anyway. But in the other use case we might do, it might also be video; we can talk more about that.
A
The difference with video is that, with the audio, we need to debate on whether the application uses the breakout box.
A
The complexity is different, in terms of feasibility of statistics.
F
So this is issue 103. This was some feedback from Youenn, so we want to talk a little bit about that as well. One of the questions here was that the term "low latency" appeared somewhat vague; we're going to discuss a little PR to hopefully clarify this. So, next slide. Okay, so here, basically, we have N37, which relates to performance, and it's not very specific.
F
It's just talking, basically, mostly about video performance, so potential ways to address this would be either to leave it alone, or to replace it with more specific performance requirements. We have PR 118 along those lines, where there's more discussion, and I think we'll get into it when we talk about that PR. The comment on N38 was that the requirement relates to jitterBufferTarget, and it is correct that our requirement is partially satisfied by that. As I mentioned earlier, we're not necessarily going to link to the APIs.
F
If that helps, I guess we could do it, to say, hey, we think jitterBufferTarget is the solution to N38; but I'm recommending no action to do that at the moment. Any comments on this?
F
Let me put it this way: are there objections to the recommendation of no action on linking N38 to jitterBufferTarget?
F
Okay, so we can put that down as a resolution: to, I guess it would be WebRTC Extensions, to add a note there rather than here. And then we can go on to discuss the specific performance requirements in PR 118, which we'll get to in a minute. Okay, next.
F
All right, so do we have a presenter for PR 118?
I
This is about clarifying the low latency game streaming requirement. As I presented last meeting, the basic rationale is that cloud gaming requires continuous visual feedback to the user input, which means this application requires lower latency visual feedback.
I
Also, cloud gaming requires defining a low latency KPI from click-to-pixel latency, which is a little bit different from glass-to-glass latency as in other applications. Also, cloud gaming requires a latency lower than 150 ms, which is preferable by our observation from a lot of gamers out there. Also, cloud gaming requires:
I
A rather faster recovery than video freezing; this is also related to the continuous visual feedback. Also, from the latency perspective, it's much more important to have consistent latency than fluctuations, which is also based on our observation and expectation from the gaming perspective. So, to combine everything: we want ultra low latency with faster recovery, to provide a better cloud gaming experience. So, next slide, please.
I
So we defined four requirements, from N48 to N51.
I
We have added these four requirements, and there is a little bit more detail in the description part. So, first, N48, for recovery without needing keyframes, which is based on an issue we already filed on the WebRTC project.

I
So, by having this issue resolved, we can support video decoding recovery using frames containing intra macroblocks or coding units; that is the basic issue we filed, for having clearer, more accurate frame information from sender to receiver.
I
By having this implemented, you can provide faster recovery by using the non-key frames.
I
I think we can discuss the issues after explaining all the items. So, next, N49 is about a loss-of-decoder-encoder-synchronization notification; based on this information, we may need to have RPSI, reference picture selection indication. I think this can be discussed further on the next slide, but: it had been implemented inside libwebrtc, but it got deleted around March 2017.
I
So we may need to have this implemented to have this feature, but we can discuss further on the next slide. And regarding N50, a configurable RTCP transmission interval: right now we have the WebRTC send-NACK-delay-milliseconds field trial, but we also want to be able to set the transmission interval for transport-wide RTCP.
I
That is our expectation in proposing this feature. Lastly, N51 is to improve the control. I think it might be related to the previous requirement, N38, but it is a little bit more detailed, about controlling the jitter buffer.

I
We may need to consider the render pipeline; we found one Chromium issue about low latency render mode, which is referenced in this slide. So I think those four new requirements might help improve the current experience of game streaming.
F
Basically, RFC 8834 already recommends RPSI. So, essentially, the IETF already has this requirement for the stack. So the issue is not whether RPSI should be implemented; RFC 8834 already says that. But there are some practical issues, which are on the next slide, so, next slide, yeah. We have this document, which is in call for adoption in IETF AVTCORE, relating to HEVC in WebRTC, and issue 13 was filed on it relating to RPSI support, because HEVC does support it.
F
RFC 7798 does include support for RPSI, and so there's been a bit of a debate on that issue, and, as you said, it was originally in libwebrtc but was removed in 2017.
F
It was implemented for VP8 and VP9 and was taken out there, and currently libwebrtc doesn't support RPSI, because that code was removed. Also, there was some effort previously to try out LTR, and they did, instead of RPSI, a custom RTCP message. So the bottom line is that the IETF recommends RPSI, but there have been some practical difficulties in libwebrtc; but there's an open issue.
F
If we can figure out a way of addressing this within the code base, in a way that can be implemented, there's no, let me put it this way: the W3C requirements are not an obstacle, and the IETF standards are not an obstacle here. It's just figuring out how to do this in a practical way within the code base. So I'm not sure that we need a requirement for it, given that the IETF already recommends it.
I
Okay, the question I understood was: should the application control the RPSI, or should the user agent control the RPSI?
I
Okay, yeah, I think we can let the application do it; based on our proposal, we want the application to be able to control the RPSI.
F
Well, at least with SDP, right, you can always negotiate it or not, like any other feedback. So that's, you know, that would be the way you would do it, assuming it was even in libwebrtc.
D
I think what I heard was: if indeed there is a need for app control, then that should be an explicit requirement in the use case, so that we can then see how to expose it in the API.
F
Yeah, usually, I don't recall ever seeing a stack that had RPSI under app control. I mean, typically the RTP stack just does it, and I think that's how it was originally implemented for VP8 and VP9, but.
F
Anyway, the suggestion here is to deal with this as an IETF problem rather than a W3C one, and see if we can figure out how to support RPSI in libwebrtc.
A
Yeah, so apologies that my memories on this topic are about four and a half years old, but I deleted it. We would at some point want to have certain APIs that would allow us to select the.
A
Controller, right, to inject the controller, or to basically do the controlling on a frame basis. So apologies, my memory is not sharp enough about all of the details, but basically I just wanted to bring this up.
A
Potentially, or at least switch between different modes of selecting the next frame; I don't know, the next frame, sorry, the frame from which we predict, that's all. So, some of the links they led to: the Google implementation could switch from using RPSI to using loss notifications.
A
I would have to look into it first, but we had heard that the loss notification allowed us to know which frame was lost; the system knows which one was the last one decoded, in case there was a frame where one of the packets was lost.
F
Well, the discussion of RPSI itself will probably occur in AVTCORE, and maybe that includes discussion of this custom RTCP message, since that seemed to work better than RPSI.
F
I think that will come out of the, this is part of the HEVC implementation in libwebrtc, so I think that will bubble up from the discussion of the implementation; there are PRs for both Safari and Chromium.
I
Okay, and I have two more requirements; can you go back to slide 35?
I
Yes, there are N50 and N51. Regarding N50, I think we already have NACK delay, but we still need transport-wide RTCP.
D
So, for the transport-wide one, if you negotiate.
I
So, does it? Does that mean we have an implementation already?
F
I think we're approaching over time, so maybe we need to finish this in the overflow session.
A
So we've got them, we'll take care, and slides 38 to 43, we'll go to those sections, and by that.
K
So, we already had this setup discussed in the last meeting in July, but let me go through it again. We have a receiving peer, which has two receiving peer connections, all delivering encoded frames from the same original capture stream, but with different metadata, and what we do here is use encoded transforms to read incoming frames from these two peer connections.
K
And to support this use case, we are proposing three requirements. We would like to change the WebRTC encoded transform spec with the following three changes. First is the ability to move frames between peer connections; the second would be to allow calling structured clone on encoded frames; and the third would be to allow modifying the metadata of the RTCEncodedVideoFrame.
K
Regarding the first one: that would allow a node to read encoded frames from one peer connection and route them to another. To achieve this change, we would need to remove the restriction clause that limits streams to one peer connection, and we already have a PR for that, and also the two issues that I mentioned here, yeah.
K
Next, and the last one, is to modify the metadata of the RTCEncoded frames, so that frames from different peers can be used, to support redundancy in case of peer failure. We want to limit our changes only to particular fields: the timestamp for audio and video; and, for video, in addition to this, we also want to allow frame IDs and dependencies changes.
A
The formal requirements on marking something as serializable, for a platform object, are that they have a serialization and a deserialization step defined; but serialization and deserialization do not require that the outputs be observable by script, so the steps can just say "serialize to an internal format" and the inverse. So this doesn't need to be a heavyweight specification.
J
I have a few questions. First, one clarification: you're talking about encoded frames, right? Yes. So the second question would be: if we had a constructor for encoded frames, would that be sufficient for some of these things? Would that obviate the need for some of these, or is that just unrelated?
A
We had a discussion on a constructor, but in this particular use case, we found that defining a constructor required defining it in full, with the metadata, while just modifying the metadata allowed us to avoid that. And, by the way, there are more members on this thing, but we don't look at them and we don't change them. So, looking at the internal implementation, which is messier than what's exposed, and given that this was a significant specification effort, we're suggesting that instead of a constructor, we do the metadata modification, for all the use cases.
J
My final question was about the forwarding of the encoded frames to another peer connection. When you do that, how do you know what bandwidth is available for forwarding to the other peer connection, to know what you're capable of forwarding and what not?
E
Yes, so I just have a little bit of a concern about how we got here. Because WebRTC encoded transform is basically taking a pipe out of the wall, breaking it in half, and inserting JavaScript, to give JavaScript a means to transform existing frames, which led to some of the limitations we're fighting here. But it seems that has also become a very powerful opportunity for JavaScript to say: hey, what happens if I put all this other stuff in? And I'm a little concerned with moving frames between PCs.
E
What does that do to the user agent's ability to know what's going on, and to be able to make optimal decisions? So I'm questioning whether this is the right, so, I'm not necessarily opposed to solving these problems, but I'm questioning a bit if this is the right API surface, or if we need a better surface, if the goal is going to be to give broader controls to JavaScript here. As far as structured clone, I don't see a problem.
E
Well, I should ask: this is encoded frames, right? So there's no GPU backing, is that right?
A
Concerns, in terms of the use case with the standard transform; but if you're adding too much metadata, then things are going up, and basically it's the same case. Because it might be that the connections actually can't take that, so what we discussed is much lower, and you will still try to push a lot of data to send.
A
Responsible to say any frame will be treated as usual, and now you should achieve that and so on; that's the model we've learned with the encoded transform, and with this we are moving away from that. But if you're still using the encoded transform, the idea is to have a different API shape; where we're staying there's no way, of course, and then it's very clear from the user agent's point of view that it has to be something else in terms of bandwidth estimation and so on. Like.
A
The parameters API, the keyframe request; and I believe whether or not you've received keyframe requests is visible and gets fed through, so you can just forward that. And then, yeah, if you have a lot of players, you basically won't get a few packets, and we want to receive it as soon as possible in a worker thread.
A
Shaped this way, from the right place to the right place, it would be fairly another API; and my understanding is that, given that there's no encoder anymore.
A
Wait for the perfect submission to arrive; well, I understand, and respect people, and this.
A
Sending stuff; and if you want to argue that we should break up the WebRTC, saying all the transformation.
A
Okay, is trying to spread.
A
Thank you. Next, I think we have the next slide, so that's obvious, and next slide. I'll start with the track stats API shape. The track stats API: we've added it to the extensions spec previously, to count these frame counters, how many frames are produced, delivered, discarded or dropped, and we came up with the definitions for these stats. But there's been an ongoing discussion whether the metrics should be exposed with an asynchronous API or a synchronous API. And the issue is that the frame counters are incremented off the main thread, because media is flowing off the main thread, and we have the run-to-completion-semantics design principle, which says that you should not change values exposed to JavaScript in the same task execution cycle, to avoid races.
A
So, for example, you could post a task to change the value, and I previously argued against a synchronous API, because I was afraid of excessive post-tasking, or having to use mutexes or caching; but, as has been proven in the thread, and now that I also have implementation experience, we can implement it either way.
A
So that's up for discussion, but what I want to get consensus on today is whether it should be asynchronous or synchronous. So I have a PR that changes this, and my proposal is that we merge it, modulo comments.
A
I don't want to react to things; it's better with the synchronous API here, giving the impression that you can do real things based on the stats, so I think we have either of that; and if you have the name video stats, you might think, you know, it starts with time. So, let's, that's a good thing.
A
Remember somehow; so you can do it, you just need to keep track of whether you've already updated them: if the getter has already been called in the current task execution cycle, you don't need to do anything in between. So the naive implementation would be to have a Boolean that starts out false; then the getter is supposed to update the values, set the Boolean to true, and then queue a microtask to reset the cache. Once again, you can do it without a mutex.
A
Or, instead of the microtask: you just keep an ID of which task execution cycle you are in, one, two, three, four, five, and just remember the last time you updated the counter versus the current execution cycle: it was 10, it's 11 now, so I'll update it. That avoids always doing the post-task, I mean, one post-task per getter call. Also.
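The run-to-completion caching being described can be sketched as follows; `readCounterFromMediaThread` stands in for the cross-thread read, and the class name is ours:

```javascript
// Recompute the counter at most once per run of script, so JavaScript never
// observes the value changing in the middle of a task execution cycle.
class TrackStatsCache {
  constructor(readCounterFromMediaThread) {
    this._read = readCounterFromMediaThread;
    this._fresh = false;
    this._value = 0;
  }
  get deliveredFrames() {
    if (!this._fresh) {
      this._value = this._read(); // one cross-thread read per cycle
      this._fresh = true;
      // Invalidate once the current microtask queue has drained.
      queueMicrotask(() => { this._fresh = false; });
    }
    return this._value;
  }
}
```

As the next comment in the discussion points out, a microtask is not exactly a stable state (microtasks queued later in the same cycle could observe a refreshed value), which is why the cycle-ID variant, or phrasing the spec in terms of HTML's "await a stable state", comes up below.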
A
Because then, if you queue a microtask, after that there could be another microtask, so you would need to indicate the caching ID in the last microtask, and that's not, well, you need.
A
A stable state, essentially a stable-state concept, right? Can we not use that? Okay, what you're describing is called "awaiting a stable state" in the HTML spec; it's a well-defined concept. All right, so you can say these attributes are updated when there is a stable state, meaning there is no JavaScript on the stack anymore. So.
A
So, yeah, run-to-completion is preserved, that's the main purpose, but it's also efficient. So, yes.
A
Yeah, so we use that, for example, in media; we use it in a bunch of locations, but the exception would be timer-like APIs, where, if you query, you get a different value. It's saying that, yeah, if I remember the concept of stable state correctly, which I may not: that's what is happening between task execution cycles. So you're saying that you could conceptually update the counters whenever you hit a stable state, so that the implementation, what I see you would have to do with the cache, because.
A
The difference is not observable, so it doesn't matter from that perspective, and the way the language is written right now, it says: this is the first time the getter is called in this task execution cycle. If that needs to be rephrased, then we can rephrase it, but the concept is the same: you can only get there, but, I mean, I just return it. So in the spec we need to define the behavior.
A
As long as it's the same, so, yeah, I mean, we will implement it; we'll have stable states, or, I don't know, a double buffer or whatever.
A
You can also describe it exactly the same way, saying that the value is only updated during a stable state, right? This.
A
Conditions; I can pull out the examples, maybe it's going to be clearer, like bits of other specs that do exactly the same thing, and see how they write it. But yeah, I can modify this pull request, or do a follow-up pull request on the existing language, to make it more clear. Yeah, I can; but are there any objections to me now going in this direction?
E
Yes, so, yes, I'm supportive of a synchronous API and of merging this PR. I just had an unrelated comment that, on the slide, you juxtapose getStats versus videoStats, so there's a separate issue of naming there. I mean, a track is always either a video track or an audio track, so we could probably simplify and just call it stats, and have it be a union of two interfaces, one or the other. But yes, for this particular issue.
E
Anybody? Oh yes, this is me. So, hi, I'm Jan-Ivar from Mozilla, and I just wanted to take this opportunity to present something Firefox has implemented recently, which is selectAudioOutput, part of the media capture output spec. Just a quick refresher on this API: the ability to change the default output path (say you're in a library, for example) is behind a permission.
E
You get it automatically if you get permission to the microphone, but there are use cases outside of video conferencing where you might want to not necessarily ask the user for permission to their microphone just to be able to redirect output to a speaker. So that's what the selectAudioOutput API does. It has its own prompt, in order to reduce fingerprinting risk, because the old enumerateDevices approach turned out to be a big fingerprinting vector. Anyway, we have that implemented, and here's a demo page you can try.
E
Yes, so you can just hit play, or you can first click "select speakers" and pick a different device, like AirPods, in the browser prompt, and then, when you hit the play button, it'll play in your AirPods. And Firefox will persist these device IDs, so that you can refresh the page, or come there later, and when you hit play it will automatically go to your AirPods if they're available. But there's a problem the next time you refresh and enter the page.
E
If the AirPods are in their case and you hit play, you get a prompt, and this is confusing users, because users expected to see a prompt when they hit "select speakers", not when they hit the play button. In that case, perhaps the better option would be to just play out through the normal audio path.
E
So the proposal is: if the user agent recognizes a device ID as having been removed, meaning it was a device that it used to satisfy a request earlier, then it is allowed to reject selectAudioOutput, where you pass in a device ID, with a NotFoundError, as a one-time courtesy, instead of prompting. Subsequent calls, it says, would continue to prompt, or could continue to prompt; exactly how that would work might be a bit up to user agents, since this is a fingerprinting mitigation.
E
So that's the proposal. Any comments or feedback?
A
You're still pretending to know that it's there, so in terms of design I wonder whether that makes a difference. Alternatively, I was thinking: if the issue is you have AirPods, you refresh, and then we want to not prompt, maybe you could say that enumerateDevices can expose the device ID a little bit longer, refresh after refresh, maybe for 5-10 minutes. And if so, when the web page is refreshed, it would call enumerateDevices and check whether the device is there, and if it's not there?
A
It knows that, and that would be a solution to your issue.
E
Right, so the fact that you have a device ID stored doesn't, you know... enumerateDevices is already quite limited initially. So there's a chicken-and-egg here, where output devices only appear in enumerateDevices after you've called selectAudioOutput. So I think the use case is: you store the device ID that you used last time to local storage, and you pass that in to selectAudioOutput.
E
So that's the first thing that happens, so there's no information in enumerateDevices yet, and then there's either a prompt or this new NotFoundError, and then the application can respond and decide either to say never mind, let me remove my device ID and play out through the default audio, or to call selectAudioOutput again without a device ID, which will cause a prompt. Did I miss something there?
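The fallback flow described here (stored deviceId, possible NotFoundError, app decides what to do next) can be sketched as pure decision logic. The function name, return shape, and error handling below are assumptions for illustration, not part of the spec:

```javascript
// Hypothetical app-side handler for the proposed one-time NotFoundError
// behavior. Given the error name from a rejected
// selectAudioOutput({ deviceId }) call, decide what the app does next.
// Pure logic so it can run outside a browser; in a real page the returned
// action would drive the actual API calls.
function handleSelectAudioOutputError(errorName) {
  if (errorName === "NotFoundError") {
    // One-time courtesy: the stored device is gone. Forget it and fall
    // back to the default output path instead of surprising the user
    // with a prompt at play time.
    return { action: "useDefaultOutput", clearStoredDeviceId: true };
  }
  if (errorName === "NotAllowedError") {
    // User dismissed the prompt; keep the stored ID for next time.
    return { action: "useDefaultOutput", clearStoredDeviceId: false };
  }
  // Anything else: re-prompt without a deviceId.
  return { action: "promptWithoutDeviceId", clearStoredDeviceId: false };
}
```

In a browser this would wrap `navigator.mediaDevices.selectAudioOutput({ deviceId })` in a try/catch and dispatch on `err.name`.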
B
H
A
Page,
so
that
it
could
not
always
select
a
new
output.
A
To
do
like
note
from
it
can,
and
if
thank
you,
it
seems
that
when
you
have
a
refresh,
what
the
web
page
needs
to
know
is
whether
it
can
still
have
the
airports
and.
E
Right. As I recall, though, the original design here was expressly to avoid trackers being able to track this information. That's similar to a fix we did for camera and microphone: even though you have camera and microphone permission, that doesn't mean... you know, since COVID happened, most people have probably given permission to all kinds of sites. So the decision there was that it was still a fingerprinting surface that was unnecessary, so we deferred exposure until after the camera is active.
A
E
Yeah, I'm not sure I quite follow the proposal there. But if we agree that this is something we should solve, then we could always iterate on the issue, perhaps. But I think our preference would still be to find a solution here that is as simple as possible.
E
Yes, I mean, this demo uses reload as the example, but in practice, you know, you could be coming back the next day to use the site again. But the question there is: how do we give the application a way to recognize whether the device is there? And I guess we could...
E
We could, like, force some language to always fire an artificial devicechange event if you had the AirPods in all the time, sort of to give the application early warning: hey, you still have this device. But all these approaches seem more complicated than just failing.
A
I,
have
a
little
worry
about
the
one-time,
courtesy
I
would
actually,
because,
if
you,
if
you
don't
call
Select
I'll,
pull
your
output
twice
by
excellent
View,
reduce
the
information
or
or
if
you
will
look
twice
and
you
you
haven't
been
able
to
rotate
your
your
idb.
For
instance,
you
know
so
so
I
I
wonder
if
we
could
simply
to
select
all
the
outputs,
the
device
ID,
and
that
is
a
known
device,
but
it's
no
longer
there.
It
turned
off
now
there.
E
I
mean,
instead
of
what
it
could
say,
should
we
could
leave
this
up
to
user
agents,
since
this
is
a
mitigation
issue
about
how
many
times
to
reject,
for
example,
but
at
some
point
you
know
there
It's
tricky
to
discern
trackers
from
legitimate
use
cases
right
so
I'd
be
happy
with
some
vague
language
here.
As
long
as
we
have
this
allowance
that
I
don't
know
if
we
need
to
go
all
the
way
and
say
it
must
okay,
but
not
not
found
error
forever.
E
Happy
with
this
feedback,
so
thank
you.
This
is
you
all
right,
so
this
is
over
in
webrtc
PC
we
found
out
so
Firefox
is
implementing
the
the
ice
transport.
Well,
the
final
parts
of
the
ass
transport.
E
Now, transceivers are exposed after setLocalDescription, and that's early enough to catch all the ICE agent events. SCTP, however, only becomes non-null in stable state, when you have the answer, when negotiation has completed. Which means that, since you don't have the SCTP object yet, there's no way to find the DTLS transport or the ICE transport, and you can't register event listeners early enough to catch state changes for the ICE transport. For DTLS you can probably make do, but for ICE you want gatheringstatechange and selectedcandidatepairchange.
E
Now, the SCTP object itself has a couple of attributes, so exposing it early might cause some properties to be available sooner. It turns out there's only one, and it's maxMessageSize, which currently is just a number and would have to become nullable, just like maxChannels is today. Hopefully, in most cases, applications are written so that they know what state they're in, reacting to existing events.
E
That means you don't necessarily have to think about this, provided you're in a known state, like connected, for example. But if you're writing code that instead doesn't rely on state but is sort of inferring state based on the presence or absence of the SCTP object, for example, you might run into trouble, because earlier the sentence in red here would have worked: like, if you have an SCTP object...
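A small sketch of the fragile pattern being warned about, with hypothetical helper names: code keying off the mere presence of `pc.sctp` would break once the object is surfaced earlier, while code keying off `signalingState` keeps working.

```javascript
// Fragile: treats a non-null sctp object as "negotiation finished".
// Breaks if the SCTP transport is surfaced at setLocalDescription time.
function negotiationDoneFragile(pc) {
  return pc.sctp !== null;
}

// Robust: key off the actual signaling state instead of object presence.
function negotiationDoneRobust(pc) {
  return pc.signalingState === "stable" && pc.sctp !== null;
}
```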
E
There are some proposals on the issue to make more significant changes, but I think this late in the game we need to minimize changes, based on compatibility concerns. So the first of the two proposals I'm presenting here is basically to surface the SCTP transport at setLocalDescription a little bit earlier, just like the other transports.
E
The
other
proposal
is
to
do
nothing
and
then
basically
say
that
for
data
Channel
only
uses
direct
people
to
use
the
existing
aggregate
event.
Listeners
like
on
Ice
candidate
connection,
State
change
and
online
scheduling,
State
change,
arguing
that
in
the
data
Channel
only
transport
case,
you
only
have
one
transport
but
for
consistency.
I
would
prefer
that
we
do
a
so
any
objections,
any
any
thoughts.
Any.
J
I
think
you're
right
that
we
want
to
expose
the
ice
transfer
earlier
on,
especially
if
we
add
controls
as
we're
proposing
as
we
have
been
and
have
later
slides
about
so
I.
Don't
think,
B
really
works
well
and
I
think
there
might
be
a
proposal
C,
which
is
to
add
a
method
like
peerconnection
dot,
get
ice
transports,
and
that
would
give
you
direct
access
to
ice
transports
and
in
the
data
Channel
case
that
you'd
expect
there
would
only
be
one,
then
that
would
be
backwards
compatible.
J
B
E
B
M
I'm not sure about having maxMessageSize nullable, since this is mostly an implementation detail on the user agent side, and it's most likely going to be a fixed value. That value can be updated later on negotiation, but that's something you would need to handle anyway in your code. So if the user agent returns a value that it currently supports, or intends to negotiate, that would probably be fine, and then we don't need to have code that can handle maxMessageSize being null.
M
The
max
message
size
is
something
that
is
a
property
of
the
browser
and
the
implementation
in
the
browser.
So
it's
something
in
the
browser
is
going
to
use
a
specific
value
automatically
and
negotiate
that
something
you
could
possibly
alter
by
merging
sdp,
but
probably
something
that
most
people
will
do.
So
if
you
return
that
value,
that
would
be
fine
and
then
later
on
after
negotiation,.
H
M
M
E
So are you proposing a B: change maxMessageSize to not be null early, and then the value might change after negotiation?
M
H
E
A
I'd like to support this in substance.
C
I mean, when we sort of tried to use the transport to write tests for DTLS transports, we found that all the tests used this code to add a data channel, or added an audio track. It's quite random what kind of object we had to add, and quite independent of what we're trying to look at. So I would...
L
A
E
Okay, the problem, though, is that during negotiation there can temporarily be multiple transports, and then, when the answer comes back, if it's bundled, a lot of those might go away again. So we have a lot of complexity built into the model here, so I'm a little nervous about exposing more methods, because you'd have to return an array of transports, and it's not clear which transport is which. Yes.
A
E
A
A
Because if we have this, then the use case for surfacing, as I said in the picture I posted, is a DTLS transport that might disappear before you can use it, which is scary too. If we're only concerned about this one specific ICE transport, what about proposal B, to have an attribute for the SCTP transport? Though that may get messy.
E
Well,
I
I
think
yeah
I'm
worried
about
it
becoming
messy
here,
because
our
transport
situation
is
a
bit
complicated.
We
do
seem
to
have
an
inconsistency
about
when
we
expose
transports
it'd
be
nice
to
fix
that.
So
you
know,
I
I
would
still
support
proposal.
A
here,
I
think
given
the
options,
but
for.
H
E
All right, so Harald, how do you feel about proposal A in isolation?
A
E
A
A
E
Okay, so I think in most cases the max message size would not be an issue, because hopefully applications will already await the correct state before they go ahead.
E
C
And today Peter and I will continue our conversation about the ICE controller APIs. So this is the roadmap that we started off with for incremental improvements to ICE control on the web. The first item has a PR that has seen quite a lot of discussion, debate and iterations, so it feels quite well done at this stage. The next two items have open PRs that I will talk about a bit more today, and then Peter will talk later about some of the following items.
C
So the goal of this item is to allow applications to choose any of the valid candidate pairs to send data. There are a few rules from RFC 8445 that are relevant in this particular situation. To summarize them: ICE agents start off by collecting candidate pairs and performing checks, and then at some point the controlling agent decides to do nomination on one of those valid pairs.
C
Once the nomination succeeds, data can only be sent and received on the selected pair, the agent isn't allowed to nominate again, and an ICE restart is required if we want to send data on a different candidate pair. And that's what we would like to avoid. We want to avoid an ICE restart, because that needs several more round trips of SDP exchange, candidate exchange, more connectivity checks, nomination, and all of that. But we are allowed to use any valid candidate pair prior to nomination, and so that's what we are proposing.
C
We are proposing a new event that is fired on the controlling ICE agent when it wants to do nomination. If the application chooses, it can prevent the nomination from happening by canceling this event, by calling preventDefault, and instead the application can call a new method, such as selectCandidatePair, to choose how to send data to the peer. Once the application calls it, the ICE agent will immediately start sending data on that candidate pair.
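The app-side choice that would feed such a selectCandidatePair call can be sketched as a pure selector. The candidate-pair shape (an `rttMs` field) and all names here are assumptions for illustration, not the proposed API:

```javascript
// From the valid candidate pairs the app has seen so far, pick the one it
// prefers: lowest measured round-trip time in this sketch. Returns null
// when no pair is known yet.
function pickPreferredPair(pairs) {
  return pairs.reduce(
    (best, pair) => (best === null || pair.rttMs < best.rttMs ? pair : best),
    null
  );
}
```

With the proposed API, a nomination-event handler could call `event.preventDefault()` and then pass the result of a selector like this to the new method (event and method names are not final).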
C
So
this
is
something
that
can
be
done
both
on
the
controlling
and
control
side,
because
nomination
is
the
only
thing
that
only
the
controlling
site
gets
to
do
so.
On
the
controlling
side,
when
the
application
calls
selected,
we
can
have
another
nomination
event
fire
which
now
the
application
can
either
let
that
carry
on
and
conclude
eyes
or
it
can
prevent
that
nomination
as
well.
C
If
it
wants
to
change
the
candidate
pair
at
some
point
further
in
the
future,
so
does
the
API
for
that
and
just
to
clarify
this
proposal
does
not
alter
in
any
the
fundamental
I
States
or
state
transitions
in
any
way.
So
the
only
thing
this
will
do
is
allow
us
to
be
in
this
intermediate
pre-conclusion
state,
where
we
can
use
any
candidate
pair
that
we
want,
and
then
it
still
lets
us
get
to
the
conclusion
as
well.
So
the
I
State
machine
can
still
function
in
compliance
with
RFC
8445.
C
So
this
item
deals
with
removing
candidate
pairs
that
are
no
longer
necessary.
So
for
this
we
propose
a
new
method:
remove
candidate
pairs
fairly
straightforward.
It
can
also
do
bulk
removals
for
efficiency
and
when
a
candidate
pair
is
removed
using
this
new
method,
it
triggers
a
non-cancelable
candidate
pair
remove
event,
and
that
makes
it
symmetric
with
the
hard
event
that
was
Proposition
in
earlier
PR
as
well,
so
on
the
next
slide.
C
So
it
stops
off
by
setting
up
the
peer
connection
in
the
bottom
left.
It
makes
the
node
of
every
candidate
pair
that
it
sees
through
the
r
event
and
then
in
the
top
right.
If
the
ice
agent
is
trying
to
remove
any
candidate
pair,
and
that
happens
to
be
the
UDP
3478
pair,
the
application
prevents
that
from
happening
by
canceling
the
event.
C
Similarly,
if
the
isagent
is
trying
to
nominate
candidate
pair
that
isn't
the
UDP
3478
pair,
it
can
prevent
that
by
according
to
your
default
and
then
finally,
once
it
does
see
the
UDP
3478
pair,
it
switches
the
transport
using
that
as
a
selected
candidate
pair
and
then
finally,
it
it
can
dribble
for
all
of
the
unused
credit
cards.
So
that's
just
an
example
to
show
what
the
EPA
can
do
thus
far
and
then
over
to
Peter
for
the
new
purpose
of
this.
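The walkthrough above boils down to a predicate on candidate pairs. This sketch assumes a hypothetical pair shape and shows the cancel decisions the app would make in the proposed event handlers:

```javascript
// The example keeps only the relay pair on UDP port 3478.
// The `pair.remote` shape is an assumption for this sketch.
function isPreferredPair(pair) {
  return pair.remote.protocol === "udp" && pair.remote.port === 3478;
}

// In the proposed handlers, the app cancels removals of the preferred
// pair, and cancels nominations of any other pair, until the preferred
// pair shows up.
function shouldCancelRemoval(pair) { return isPreferredPair(pair); }
function shouldCancelNomination(pair) { return !isPreferredPair(pair); }
```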
J
All right, next slide. So these are ones that do not yet have a PR. The goal of presenting here today is to see if the overall shape is one where it's okay to go make PRs for these things. So I've got three slides with examples, and then one with what the Web IDL would look like for all the examples.
J
An event lets the web app know that an ICE check is being sent, and because it's cancelable, which I'll get to in the next slide, it's called onchecksend, not onchecksent, because it's about to be sent. The key is that you can get the time that the check is sent, you can get the time the response is received, and you can get whether there was an error in the response.
J
So
you
know
if
something
went
wrong
and
you
know
if
it
worked,
and
you
know
how
long
the
delay
was
between
sending
and
receiving
so
this
allows
an
application
to
do
its
own
calculation,
of
which
candidate
pair
it
likes
based
on
rtt
and
error
and
so
forth.
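That per-check data is enough for an app-side scoring function like this sketch; the check-result field names (`sentTime`, `responseTime`, `error`) are assumptions, not the proposed Web IDL:

```javascript
// Score one connectivity check: failed or unanswered checks get an
// infinite RTT so they sort last when comparing candidate pairs.
function scoreCheck(check) {
  if (check.error || check.responseTime === undefined) {
    return { ok: false, rttMs: Infinity };
  }
  return { ok: true, rttMs: check.responseTime - check.sentTime };
}
```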
J
Then,
if
an
application
doesn't
want
to
check
to
be
sent
at
all
for
a
particular
candidate
pair,
then
it
can
prevent
that
from
being
sent
just
like
the
others.
The
other
events
that
are
cancelable
and
next
slide.
J
And
then,
finally,
if
an
application
wants
to
send
checks,
then
there's
a
new
method
on
the
ice
transport
called,
send,
check
or
I
mean
we
can
debate
the
names,
but
there's
a
method
that
allows
it
to
send
a
check,
and
there
were
the
return.
Value
is
similar
to
the
event
where
it
can
get
the
time
that
it
actually
gets
sent
and
the
response
that
gets
received
and
whether
there's
an
error
in
response
and
if
the,
if
a
response,
never
comes
back,
if
there's
just
a
loss
next
slide.
J
So
all
three
of
these
can
be
done
with
this
set
of
web
ideo,
like
I,
said,
there's
an
event
on
the
ice
transport
and
a
method
for
sending
a
check.
The
event
is
cancelable,
and
if
you
don't
cancel
it,
then
you
get.
This
check
object
that
lets.
You
know
when
the
check
is
sent
and
also
when
the
response
is
received
and
if
there's
an
error,
I
also
threw
in
the
transaction
ID
onto
the
check,
because
in
my
experience,
writing
server
software
that
or
I
should
say
remote
endpoint.
That
speaks
with
a
web
client.
J
It's
really
nice
to
be
able
to
debug
ice
connectivity
issues
if
you
know
the
transaction
ID,
but
that's
just
kind
of
a
little
bonus.
So
the
discussion
or
the
question
is:
is
this
the
right
shape
enough
that
we
can
now
proceed
to
write
some
PRS
for
it
or
is
there
something
fundamentally
objectionable
that
people
want
to
talk
about.
J
Is
that
not
the
best
way
to
just
get
some
bites?
It's
20
bytes.
J
B
J
No, the way I did it here was that, if there's a timeout, the ICE agent will determine when a check times out, and then the response resolves to an undefined or null. But if we don't like it that way, we could, you know, quibble about the exact API shape in the PR; I thought it'd be easier if the application didn't have to deal with it.
H
J
A
With this one, can't you face an issue with, like, main-thread blocking and so on? Or, okay, it's not that important if you have some latency between the call and the time you get the information. But do you think there will be an issue with that window or not?
A
B
A
Yeah, basically, yes. Do you think, like, having a window is good enough? If you look at where these APIs are going to be called, it's in the signaling thread, which can be starved by the network thread and so on. That's what I'd like to understand.
J
I think you bring up a good point: that it would be nice if the RTP transport is something that you could hand control over to a worker.
C
C
I guess the only thing I would want to bring up is: should there be a way to associate a check response with the source check? And that could be something as straightforward as adding the transaction ID into the check response.
J
But
you
know
if
we
wanted
to
change
the
shape
so
that
there's
like
an
event
that
says,
got
response
in
an
event
that
says
got
sent
check.
Then
you
would
need
to
tie
them
together
by
transaction
ID,
but
here
at
least
how
I
have
it.
You
only
get
the
response
from
the
check
dot
response.
Basically,
so
you
would
know
that
that
response
is
for
that
check.
J
Okay,
so
I'm
not
hearing
any
objections,
I'm
just
hearing
like
slight,
you
know
ideas
for
how
to
improve
the
shape,
so
I
think
the
next
step
should
be
to
make
a
PR
and
then
continue
discussion
in
the
pr
is
that,
okay
with
everybody.
B
J
All
right
so
RTP
transport
Stefan,
if
you
ever
wanna,
do
the
presenting
just
jump
in
and
let
me
know
that's.
H
J
So previously I presented on RTP transport, and some of the feedback I got was that what I was presenting had too big of a gap between where we are, and there wasn't an incremental approach. So Stefan reached out to me, and he had some ideas for how to make it a more incremental or progressive version.
J
Building
a
you
know.
Is
it
easier
way
to
get
there
without
such
a
big
gap?
J
But
we
both
agreed
on
the
reasons
for
the
RTP
transport
and
so
I
wanted
to
have
the
first
slide
be
a
reminder
of
some
of
those
some
of
them.
We
talked
about
today,
things
like
having
your
own
audio
codec
or
things
like
forwarding
of
RTP
from
one
endpoint
to
another
being
able
to
control
a
Jitter
buffer.
J
Sorry
can
you
go
back
and
there
are
others
that
we
talked
about
previously
things
like
being
able
to
customize
the
packet
station
or
customize
your
FEC
or
your
RTX,
or
even
write
your
own
Jitter
buffer
to
be
able
to
do
your
own
bitrate
allocation.
We
did
talk
today
about
doing
custom
metadata
that
you
can
send,
along
with
your
media
and
also
custom
rtcp
messages.
So
some
of
these
we've
talked
about
and
some
of
today
and
some
of
them
we've
talked
about
previously,
but
there's
a
lot
of
customization.
That
would
one
would
want
to
do
that.
J
Sorry
next
slide,
so
Progressive
version
means
that
it
works
with
the
current
peer
connection
and
it
works
with
the
encoded
streams
and
it
works
with
web
codecs,
and
you
can
pick
which
parts
you
want
to
replace
or
keep,
and
it's
not
like
such
a
big
gap
between
where
we
are
and
all
I'll
show
you
what
I
mean
by
that
next
slide.
J
So
here's
the
the
big
idea
is
that
there's
a
new
method
on
peer
connection
called
create
RTP
transport.
It's
similar
to
create
data
channel
in
the
sense
that
when
you
call
it
it
causes
ice
and
dtls
to
be
set
up
in
the
local
description.
Remote
description
and
the
difference
is
that
it
also
sets
up
srtp.
J
But
the
srtp
is
different
than
the
existing
audio
and
video
senders
and
receivers
in
that
you
can
receive
all
of
the
RTP
and
rtcp
packets
for
the
entire
bundle
group,
and
you
can
send
any
RTP
or
rtcp
package.
You
want
as
long
as
you
do
not
break
srtp
or
congestion
control.
J
A
J
I think we should be able to mix them in the same bundle group, and I'll show an example of that later on. But basically, if you were to have an RTP transport object in the same bundle group as some senders and receivers, then you would still be able to receive the packets in the RTP transport as well as through the receiver, and you'd be able to send, with the caveat that you can't break SRTP. So you can't reuse an SSRC, sequence number, rollover-counter combination.
G
J
J
Let him ask his question. So, some examples of things you can do with this: you could do your own encoding and packetizing with your own custom codec, say the plug-in audio codec that we were talking about earlier; or you could use the existing encoders from encoded streams, but then do your own packetization and sending; or you could do the same but apply your own custom FEC and then send; or you could observe the NACKs coming in over RTCP and then choose to do your own custom RTX behavior, because maybe you want some more aggressive RTX or something. I'm good, okay, I'll pause.
E
J
E
J
Okay, so back to the examples. You could receive packets and use your own custom jitter buffer implementation, like that game-streaming one: if they really wanted control over the jitter buffer, they could write their own jitter buffer implementation and maybe use WebCodecs for decode. You could receive packets and depacketize them yourself and then inject those into encoded streams, so that the encoded stream still does the frame-level jitter buffer, but you're doing the depacketization.
J
You
could
observe
incoming
feedback
and
do
your
own
custom
bandwidth
estimate,
if
you
think
you
can
do
a
better
job
at
bandwidth
estimation
than
what's
built
in
the
browser.
As
long
as
you
go
lower
than
the
built-in
congestion
control,
because
you
wouldn't
be
allowed
to
say,
oh
my
bandwidth
estimator
can
do
100
megabits
per
second
and
then
overcome
the
built-in
one.
J
You
could
get
the
bandwidth
estimate
from
the
RTP
transport
and
do
your
own
bitrate
allocation
for,
say
your
simulcast
layers
and
then
control
the
bit
rates
on
the
RTP
senders
through
the
max
bit
rate,
RTP
parameter
or
you
could
forward.
Rtp,
packets
or
rtcp
packets,
like
keyframe
requests
from
one
peer
connection
to
another,
with
full
control
over
the
entire
packet,
rather
than
having
to
rely
on
encoded
streams
hackery
at
the
frame
level.
J
So
those
are
some
examples
of
things
we
could
do.
I'll
get
I'll
show
some
code
in
a
minute
next
slide.
J
Here's
an
example
where
we
do
the
opposite:
where
we
receive
RTP
packets
from
the
RTP
transport,
and
then
we
inject
the
encoded
video
frames
and
encoded
audio
frames
into
the
encoded
streams,
and
that
requires
a
Constructor
for
those
next
slide
and
here's
an
example
where,
if
we
allow
the
Jitter
buffer
to
be
disabled
on
encoded
streams,
which
I
think
is
something
we
would
need
to
add,
then
you
could
not
only
do
the
deep
packetization,
but
also
do
your
own
Jitter
buffer
and
decide
when
you
want
to
have
something
come
out
of
the
digital
buffer.
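A toy version of the constant-delay jitter buffer idea mentioned here: hold each frame for a fixed delay after its arrival and release frames in timestamp order. This is an illustration of the concept, not the browser's implementation:

```javascript
// Minimal constant-delay jitter buffer: frames become eligible a fixed
// number of milliseconds after arrival, and are released in RTP-timestamp
// order (so a late earlier frame still comes out first).
class ConstantDelayJitterBuffer {
  constructor(delayMs) {
    this.delayMs = delayMs;
    this.frames = []; // { rtpTimestamp, arrivalMs, payload }
  }
  push(frame) {
    this.frames.push(frame);
    this.frames.sort((a, b) => a.rtpTimestamp - b.rtpTimestamp);
  }
  // Pop, in order, every frame whose hold time has elapsed by `nowMs`.
  popReady(nowMs) {
    const ready = [];
    while (this.frames.length > 0 &&
           nowMs - this.frames[0].arrivalMs >= this.delayMs) {
      ready.push(this.frames.shift());
    }
    return ready;
  }
}
```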
J
So
if
you
wanted
to,
for
example,
in
that
game
streaming
example
have
a
very
constant
Jitter
buffer
delay,
you
could
do
it
in
with
something
like
this
next
slide,
and
this
is
kind
of
I
was
a
little
lazy
when
I
wrote
this,
but
you
could
apply
this
idea
to
multiple
simulcast
layers
here,
it's
just
one,
but
basically
you
can
get
a
bandwidth
estimate
called
Target,
send
right
here
from
the
rtb
transport
and
then
decide
how
am
I
going
to
allocate
this
in
this
trivial
example.
J
We
just
take
the
entire
thing
and
give
it
to
one
simulcast
layer,
but
you
could
divide
them
up
yourself
and
say:
okay,
the
lowest
layer
gets
I,
don't
know
an
eighth
and
this
next
layer
up
gets
a
quarter
and
so
on,
but
it'd
be
under
your
control,
as
opposed
to
right
now
in
the
browser
where
you
know
it
kind
of
decides,
for
you
next
slide.
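The eighth/quarter split described here can be sketched as a pure allocation function. The fractions and shapes are illustrative; in a real app each share would then be applied through the sender's `maxBitrate` encoding parameter via `setParameters`:

```javascript
// Split a target send rate (from the transport's bandwidth estimate)
// across simulcast layers using fixed fractions. Rounding leftovers go
// to the top layer so the shares always sum to the target.
function allocateBitrate(targetBps, fractions) {
  let remaining = targetBps;
  const allocations = fractions.map((f) => {
    const share = Math.floor(targetBps * f);
    remaining -= share;
    return share;
  });
  allocations[allocations.length - 1] += remaining;
  return allocations;
}
```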
J
So
what
does
the
RTP
packet
look
like
that
you
send
and
receive
it's?
Basically
a
parsed
RT
packet,
you
don't
have
you
don't
get
a
byte
buffer
and
then
parse
it
yourself.
Instead,
you
have
the
values
that
you
would
expect
from
an
RTP
packet
like
an
ssrc
and
a
sequence
number
and
a
timestamp
in
a
marker
bit
Etc
and
then,
when
you
receive
it,
you
can
read
all
those
things
and
when
you
go
to
send
one
you
can
you
can
set
them
with
an
rtcp
packet.
It's
a
little
different.
J
Those
are
compound
packets.
So
you
get
like
an
array
of
things
and
those
each
have
a
payload
type
and
a
payload
subtype,
sometimes
called
a
reception
report
count,
and
then
I
was
thinking
that
we
would
just
leave
the
payload
unparsed
so
that
you
can
decide
what
to
do,
and
you
can
have
your
own
custom
messages.
But
if
that's
not
convenient,
we
could
add
the
ability
to
have
a
parsed
sender
report.
Receiver
report.
Etc
I-
did
have
that
in
a
previous
version
of
this.
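The parsed fields described for the RTP packet map directly onto the 12-byte fixed RTP header from RFC 3550. This toy serializer/parser handles just those fields (no CSRCs, header extensions, or payload), purely to make the mapping concrete:

```javascript
// Serialize the fixed RTP header fields into 12 bytes (RFC 3550 §5.1).
function serializeRtpHeader({ payloadType, marker, sequenceNumber, timestamp, ssrc }) {
  const buf = new ArrayBuffer(12);
  const view = new DataView(buf);
  view.setUint8(0, 0x80); // version 2, no padding, no extension, 0 CSRCs
  view.setUint8(1, ((marker ? 1 : 0) << 7) | (payloadType & 0x7f));
  view.setUint16(2, sequenceNumber);
  view.setUint32(4, timestamp);
  view.setUint32(8, ssrc);
  return new Uint8Array(buf);
}

// Parse the same fields back out of the wire bytes.
function parseRtpHeader(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return {
    version: view.getUint8(0) >> 6,
    marker: (view.getUint8(1) & 0x80) !== 0,
    payloadType: view.getUint8(1) & 0x7f,
    sequenceNumber: view.getUint16(2),
    timestamp: view.getUint32(4),
    ssrc: view.getUint32(8),
  };
}
```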
J
Now
getting
to
what
it
looks
like
when
you
call
RTP
transport
and
coming
to
that
question
of
what
happens,
if
you
call
it
twice,
this
is
a
part
where
I
think
we
would
need
to
Define
it
in
the
ITF,
but
the
magic
line
is
the
m
equals
application
and
then
the
RTP
protocol
and
then
a
star.
So
basically,
what
this
means
is
I
want
to
do.
Rtp,
but
I'm,
not
saying
whether
it's
audio
or
video,
it
could
be
anything
and
I'm,
not
saying
what
the
payload
types
are
going
to
be.
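Such a wildcard m-line might look roughly like this in the SDP. The exact tokens are not defined yet (that is the IETF work mentioned), so every line here is illustrative:

```
m=application 9 UDP/TLS/RTP/SAVPF *
c=IN IP4 0.0.0.0
a=mid:2
a=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
```

The extmap line reflects the constraint that congestion-control header extensions (transport-cc here) still have to be declared, since the application is not allowed to break congestion control.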
J
It
wants,
except
that
the
header
extensions
related
to
congestion
control
need
to
be
specified
here,
because
the
application
is
not
in
control
of
that
they,
like
I,
said
you
can't
break
congestion
control
here,
and
you
could
bundle
this
with
existing
mlines.
So
you
could
have
a
an
satp
M
line
that
would,
you
know,
be
shared
and
you
could
have
other
rtpm
lines
that
would
be
shared
and
if
you
called
create
RTP
transport
more
than
once,
then
it
would
create
multiple
lines
of
this.
J
But
I
think
we
would
need
to
say
that
those
would
not
be
in
the
same
bundle
group
I.
Suppose
we
could
leave
it.
Allow
them
to
be
in
the
same
bundle
group,
but
then
you
would
have
two
RTP
transports
that
are
effectively
the
same
thing,
because
both
of
them
can
send
whatever
RGB
you
want.
Both
of
them
receive
all
the
RTP
you
want.
J
J
You would just be able to choose what to do with it. You know, you get like a second copy, basically, and then, when you send, you can send whatever you want.
J
J
So that's it for my slides. My big question is whether we should do something like this here, or whether, like the screen-share folks did, we create a community group that's separate and do the work out there. I've heard a lot of people say they want to do it here, and that it wouldn't be as good out in a separate community group, but is this incremental approach to an RTP transport something that people would be happy with doing here?
A
G
Yeah, so I think probably here, in terms of where the discussion should happen, and on the API in general. I think the incremental approach is better; it's basically RTP in JS, with safety bumpers for congestion control and so forth.
G
G
E
Yeah, I would second what Randell says. I think the RT in RTP here is real-time, and so... we can already do this on data channels, which are main thread, but we have gotten concerns with that, so in the spec we made data channels transferable to workers. Also, some of these examples here mention createEncodedStreams, which I believe is non-standard; I think that's Chrome's main-thread version of the insertable streams.
E
Sorry,
the
webrtc
encoder,
transform,
API
and
I
think
Safari
and
Firefox
instead
implement
the
RTC
RTP
script.
Transform
so
I
would
love
to
know
how
it
interfaces
with
that,
especially
since
some
of
the
examples
seem
to
involve
communicating
both
with
the
transform
API
and
the
more
main
thread,
the
rest
of
the
peer
connection
API.
So
that
seems
like
a
challenge,
but
aside
from
that,
I
think
It
looks
interesting
to
be
able
to
to
do
this
sort
of
stuff.
A
So yes, this is interesting. One thing I started wondering about is why we don't have a getRtpTransport operation on the peer connection that allows you to get the RTP transport and do all these wonderful things to it.
J
J
So, I was trying to avoid a situation where we gave you an API that let you send any RTP packet, but then the SDP said only certain RTP packets, and then it didn't match. I mean, I would actually be fine with that, but I think other people would not be fine with that, so I was trying to avoid that.
A
Good
answer,
of
course,
yes,
that's
to
be
worked
out.
I'm,
like
yeah,
my
my
friends
at
Mozilla
I,
do
think
that
the
experimentation
was
on.
The
main
thread
is
beneficial,
but.
F
Yeah
so
I
wanted
a
comment
about
workers.
I
do
think
it
would
be
useful
to
allow
it
because
there
might
be.
You
might
want
to
put
the
whole
pipeline
in
the
worker
and
I
think
what
we're
talking
about
is
what
one
question
which
would
arise
there
is:
is
an
RTP
transport
indivisible,
like
would
would
you
be
able
to
have
the
receive
and
send
Pipeline
on
different
workers?
F
If
it's
a
single
rtb
transfer
percent
and
received,
then
you
couldn't
do
that
you'd
have
to
have
both
both
to
send
and
receive
on
the
same
worker.
So
that's
that's
kind
of
a
question.
We
encountered
that
with
web
transport,
where
it
was.
It
was
difficult
to
use
multiple
workers
because
the
web,
the
web
transfer,
was
this
kind
of
Monolithic
thing.
That's.
J
Yeah,
that's
a
really
good
point.
Bernard
I
do
think
we
could
separate
them
into
the
sender
part
in
the
receiver
part
because
they
really
don't
have
much
to
do
with
each
other.
F
A
Yeah, so the same thing about workers: I think that's where it belongs, clearly, so we need to investigate that. The interaction with the WebRTC encoded transform, I don't know; that's not the primary thing I would try to look at, but maybe in the future.
A
We
could
look
at
that
and
I
would
concentrate
for
some
like
ltp
transport,
for
a
nice
custom
like
sending
site
doesn't
collects
and
that's
fishing
inside
you're
using
that
codex
as
well,
which
is
the
kind
of
thing
I
will
try
to
concentrate
on
and
yeah
I
had
a
first
number
that
I
forgot
yeah.
The
third
command
is
so
we
have
a
discussion
for
you,
let's
say
hey.
We
want
to
do.
A
H
J
Okay, is there anyone else? I don't see the queue.
B
A
The
in
the
participant
list
and
and
zoom.
L
Yeah, Tim Panton. I do like this, I think it's interesting, I've kind of got uses for it. I wonder if you've looked at anybody else's APIs in this space; I'm thinking Pion has a thing called interceptors, which I think does something quite similar, so it might be worth looking at that. And I think what would be good to steal from that, if I remember it rightly, is the ability to just get packets for a particular SSRC, so that you don't have to do the demultiplexing in JavaScript. I think it would be nice to demultiplex in, like, the C++, and if you're only interested in the audio...
L
Then
you
only
get
the
audio
or
and
I
I.
Don't
know
how
practical
that
is
in
your
in
your
scheme,
but
it
kind
of
feels
attractive
to
me.
J
Yeah
I
think
you're
right
that
I
am
familiar
with
Pion
and
interceptors
and
I'm
also
familiar
with
the
libweber
TC
API,
for
attaching
to
the
rgbtmxer,
so
I
I
think
you're
right
that
it
would
be
a
a
good
optimization
to
to
be
able
to
have
the
the
browser
basically
do
the
filtering
on
the
packets
instead
of
in
the
JavaScript
as
a
performance
optimization
for
situations
where
there
are
a
lot
of
packets-
and
you
only
want
some
of
them
and
I
think
we
could
build
that
into
the
UK.
So
that's
good
feedback.
J
Sorry, a quick response to the previous comment from Harald: I do think we could do something where you call getRtpTransport with the existing SDP, and you could receive on any of those transports. It's really on the send side where you're constrained, and so that's where we'd have to decide what rules there are, what bumpers exist, if you're using the existing SDP instead of the wildcard SDP.
J
No, no — the idea is that you provide the unserialized values: your SSRC, sequence number, timestamp, payload, the pairs of ID and header-extension values you want. And then the browser engine adds whatever header extensions it needs for congestion control — which in many cases will be transport-cc — and then it serializes it and enqueues it.
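A minimal model of the send path just described — all names and object shapes here are illustrative assumptions, not the actual proposal: the application supplies unserialized packet fields, and the engine merges in the header extensions it owns (such as transport-cc) before serializing.

```javascript
// Hypothetical model: the app provides unserialized fields; the "browser"
// adds the congestion-control header extension it needs before serializing.
const TRANSPORT_CC_ID = 3; // example negotiated extension ID, assumed

function prepareForSend(appPacket, nextTwccSeq) {
  // Start from the app-provided header extensions (pairs of id/value)...
  const extensions = [...appPacket.headerExtensions];
  // ...and add the transport-cc sequence number if the app did not set it,
  // since congestion control is owned by the engine, not the app.
  if (!extensions.some((ext) => ext.id === TRANSPORT_CC_ID)) {
    extensions.push({ id: TRANSPORT_CC_ID, value: nextTwccSeq });
  }
  return { ...appPacket, headerExtensions: extensions };
}

const outgoing = prepareForSend(
  {
    ssrc: 4242,
    sequenceNumber: 100,
    timestamp: 160000,
    payload: new Uint8Array(10),
    headerExtensions: [{ id: 1, value: "mid-0" }],
  },
  77 // next transport-wide sequence number, chosen by the engine
);
console.log(outgoing.headerExtensions); // app's extension plus transport-cc
```

The point of the sketch is the division of responsibility: the app never computes congestion-control state; it only hands over the fields it controls.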
E
Sorry, I raised my hand again. This might be a detailed comment at this stage, but I do notice on the slide...
E
It says onRtpPacket, and I want to make sure we don't repeat the mistake of WebSockets, which had onmessage with no backpressure: if the JavaScript receiver cannot keep up, you're just building up JavaScript buffers. So it might be better to have an API that — even if there's no backpressure in RTP — at least lets the browser receive buffer know that JavaScript isn't keeping up.
J
Yeah, we're going to have to discuss whether we use WHATWG streams or events, and that kind of thing. In some of the examples I used a .write — I think I had a .writable — so yeah, we're going to have to have that discussion, but that is a little bit down the road.
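To make the backpressure concern concrete — a generic sketch, not any proposed API: with a plain event callback there is nowhere for "the reader is slow" to show up, whereas a bounded queue (the idea behind WHATWG streams' queuing strategies) lets the producer observe that the consumer is not keeping up.

```javascript
// Generic illustration of backpressure: a bounded queue refuses work once
// its high-water mark is reached, instead of growing an unbounded buffer
// the way a fire-and-forget onpacket event handler would.
class BoundedPacketQueue {
  constructor(highWaterMark) {
    this.highWaterMark = highWaterMark;
    this.items = [];
    this.dropped = 0;
  }
  push(packet) {
    if (this.items.length >= this.highWaterMark) {
      // Surface backpressure to the producer rather than buffering forever.
      this.dropped += 1;
      return false;
    }
    this.items.push(packet);
    return true;
  }
  pop() {
    return this.items.shift();
  }
}

const queue = new BoundedPacketQueue(2);
const accepted = [1, 2, 3, 4].map((seq) => queue.push({ seq }));
console.log(accepted); // [true, true, false, false] — producer sees the stall
```

A streams-based API would carry this signal implicitly via `desiredSize`; an event-based API would need some equivalent mechanism bolted on.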
A
Just special applications. But my sense is that, yes, we should continue on this work, and that the people in this room are kind of in favor of doing it here.
J
That sounds about right according to the notes I took, and sounds great to me. I'm happy to continue driving the discussion, making more specific proposals and integrating the feedback so far into those.
A
The model of SDP negotiation is that we exchange an SDP offer and answer, and that configures the available codecs in the encoder and the packetizer. And we have setParameters — that is, with the codec preferences, setParameters allows you to pick the codec: I want to use this codec from that list. That's a very new API. But in the case where we have an encoded transform, it sits between the encoder and the packetizer, still strictly on the model of a single transform.
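The selection step described above can be sketched as a pure function — the codec objects below are simplified stand-ins for RTCRtpCodecParameters-like entries, and the function name is an assumption for illustration, not a browser API:

```javascript
// Sketch: from the list of codecs that SDP negotiation made available,
// pick the one the application wants. Selection can only choose from the
// negotiated list; it cannot introduce a new codec.
function pickCodec(negotiated, preferredMimeType) {
  const match = negotiated.find(
    (c) => c.mimeType.toLowerCase() === preferredMimeType.toLowerCase()
  );
  if (!match) {
    throw new Error(`${preferredMimeType} was not negotiated`);
  }
  return match;
}

const available = [
  { mimeType: "video/VP8", payloadType: 96 },
  { mimeType: "video/H264", payloadType: 102 },
];
console.log(pickCodec(available, "video/h264").payloadType); // 102
```

The constraint that the pick must come from the negotiated list is the same one discussed later for payload types: only what offer/answer established is usable.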
A
We have Henrik and Bernard and...
F
Yeah, one of the things we found that's kind of interesting is that the behavior of the packetizers isn't always that well defined. I'll give you a weird example that fippo discovered: the H.264 packetizer does not drop NAL units it doesn't understand. It turns out that could be a useful property, but it's also not a property that's defined anywhere. So I'm a little bit concerned about potential non-interoperability that could arise here, unless we specify what some of these things do a little bit more.
A
Yeah — the idea seems to be that they have useful properties like this. This is a structure that's...
A
So there's a non-standard packetizer mode in the current code, which has approximately the same property, and it is signaled in a non-standard way, which is...
E
Yeah, so my comment was also about questions around the packetizer, and — agreed — it's not well defined. It seems that when we say "the H.264 packetizer," we mean the packetizer that this codec uses, which I guess in the context of end-to-end encryption is quite reasonably understandable, because it's the underlying codec that we're encoding on top of, right? And so it makes sense, I guess, to keep the packetizer the same, but it's still a little...
E
The API still feels overly flexible for the use case of merely signaling what we're doing with end-to-end encryption. So I'm wondering if it also solves some of the issues we have with encoded transform — it's not super clear how you would use it to avoid them. If you're using H.264, for example, you quickly find out that there are certain bytes you can't touch; otherwise things don't work and nothing happens. So I'm...
A
Interference with the packetizer, for instance: we found that the VP8 depacketizer tries to parse the packet in order to extract the QP value from the frame, because it wants to know that before it starts decoding. It was a weird design. And of course, the H.264 packetizer, as mentioned, doesn't behave terribly well when you give it non-H.264 data. So we would eventually have to have a raw packetizer, so that I can tell...
E
What you're saying about that seems like an improvement to me; the payload types always seemed like a low-level thing.
G
Yeah, I was just going to make a comment on the payload-type-to-MIME-type thing. I don't know what you're buying by doing that. I understand that MIME types are sort of a higher-level concept, but you can't use arbitrary things — you have to use something that's been negotiated — so it's basically just a lookup. I don't know the advantages, and I suppose it's a small perf thing, but I don't think it matters a great deal either way.
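The "it's basically just a lookup" point can be made concrete with a small sketch — the table contents below are illustrative, not from any actual negotiation:

```javascript
// Sketch: mapping an RTP payload type to a MIME type is a table lookup
// over what was negotiated in the SDP offer/answer exchange.
const negotiatedCodecs = new Map([
  [111, "audio/opus"],
  [96, "video/VP8"],
  [102, "video/H264"],
]);

function mimeTypeForPayloadType(pt) {
  const mime = negotiatedCodecs.get(pt);
  if (mime === undefined) {
    // Only negotiated payload types are usable; arbitrary values are not.
    throw new Error(`payload type ${pt} was not negotiated`);
  }
  return mime;
}

console.log(mimeTypeForPayloadType(96)); // "video/VP8"
```

Either direction of the mapping is equally cheap, which is why the choice between exposing payload types or MIME types is mostly an ergonomics question, as the speaker says.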
A
I don't think it's only for specific cases that you'd actually need to change it. The user agent can handle it, because it knows the encoder and it knows the packetizer, and...
A
Like
laser
anyway,
it
will
know
how
to
probably
developing
itself
I
think
it's
good
to
be
able
to
switch
from
Target.
We
need
that
for
a
train,
for
instance,
when
you
are
actually
starting
at
Facebook.
A
So
one
question
would
be
how
much
Flex
week
did
we
want
to
to
we
only
per
frame,
or
you
know
you
want
it
to
do
like
just
once.
That's
a
question
that
would
be
writing
how
the
API
would
be
shaped,
because.
A
Design
decision
there
that
we
should
make
I,
don't
think
we
should
go
there
at
least
initially
I
think
the
main
goal
here
is
you
selected
just
before
you're
doing
your
transform
and
instead
of
hitting
the
HTC
for
bucketizer,
you
want
to
use
vegetable
at
some
point:
yeah.
It
gets
based
on
device.
So.
A
But so, yeah.
A
Yeah, and basically we need a generic packetizer, and maybe the API would be: say, use the generic packetizer. So one reason for the current design is that we don't have it yet. The other one is that...
A
We have kind of survived with using standard packetizers for a while now, and I'd like to be interoperable with that sort of...
A
Could be. I would look at it as actually an API question instead of architecture: you could use the packetizer inside the binary, and there would be just one enum value, which would be a generic packetization.
A
But I'm more interested in the list of standard packetizers.
L
Yeah, it's maybe a little early to be saying this, but I wonder how this relates to the proposal that Peter just made. It feels like there's a big piece of overlap, in that it would be nice to see some of the same concepts appearing in one be usable in the other — like, you know, packetizers, for example.
A
Yeah, I think of them as somewhat orthogonal, because the main thing with this API is to control the SDP negotiation part — and the SDP negotiation controls the encoder and the packetization process. While what is proposed there really goes to what happens with the packets after they are packetized and before they are depacketized. So we should be careful to make sure there's some alignment.
A
Yeah, I would also like to keep them independent, in the sense that if one takes a year and the other doesn't, the other one can still go forward.
F
I think you need to reload the slides. Yeah — there, they showed up, yep.
F
Right, so the next use case, in section 3.2, is the low-latency broadcast with fan-out. This is what it looks like currently, with N15 and N39.
F
Now, one thing we had discussed in July was that this lacks some clarity, because it used the word "low latency," and in the issue there was a question of what that meant exactly. It was also confusing because it was mixing potentially quite different use cases.
A
This goes to the thing that was discussed earlier: moving frames between peer connections, modifying metadata — and we'll discuss this.
F
Yeah, so this slide came from July. Is this still the proposed requirement addition, or is there going to be some other PR?
F
All right, so can we have an action item to actually submit that as a PR? Yes — okay. So I did want to talk about PR 123, which attempts to clarify this low-latency broadcast case.
F
All right. So this use case was originally submitted by Tim Panton, and at the time it was submitted, it was focused on auctions and betting, and that isn't just low latency — that's ultra-low latency. What I mean by that is glass-to-glass latency of less than 500 milliseconds. And that's actually an important clarification, because we see WebRTC is actually popular for that.
F
That's what WHIP and WHEP are used for. And it also helps distinguish it from some other cases, because if you're trying to get to that latency, data channel fan-out may not be what you want. That's for a couple of reasons. One is that you may not be able to handle the ordered reliable mode because of all the retransmissions; but even with the unreliable, unordered transport...
F
...it might add too much latency, because then you have to do your own retransmission and/or something like a custom FEC. And the other aspect is, if you're doing that, most likely you're not using a raw format — you're using something like CMAF — and that adds the overhead of CMAF containerization and decontainerization, which is kind of wasteful if DRM is not needed. But also, you may not be rendering with WebRTC.
F
So the point we're trying to make here is that calling this "ultra-low latency" clarifies that it's really not about that kind of data channel fan-out. And the data channel fan-out requirements are covered elsewhere: the file-sharing use case in section 3.1 has a requirement for data channel in workers, and the IoT use case has a requirement for control over the max retransmissions timeout, which would be something you would use for a partially reliable, unordered transport.
F
So basically, what we want to do is clarify that we're really talking about ultra-low latency here, and also motivate some of the requirements that Harald just mentioned, to make it clear that this isn't data channel fan-out. Next slide.
F
So this is the proposed PR 123. It basically changes the name of the use case to "ultra-low latency broadcast with fan-out," clarifies that that means glass-to-glass latency of less than 500 milliseconds, and focuses on auctions, betting and financial news, which are the ultra-low latency cases — rather than throwing in other stuff like webinars and company town halls, which actually can take considerably higher latency and can be done with data channel fan-out.
F
A lot of the other requirements remain the same, like the limited interactivity and the NAT traversal, stuff like that. And I also removed Peer5, because that's a data channel fan-out implementation. Dolby is WebRTC-based streaming, so it's more in the ultra-low latency category. I don't know about the others.
F
Well, I wanted to clarify that this use case wasn't about data channel fan-out, because that wouldn't meet the latency requirements. That's why I removed Peer5 — because that's a data channel fan-out.
F
Oh yeah, actually that's a good idea, Harald. Yes, that should be removed. Yes, it would be transported via RTP, not data channel.
E
So yes, removing the data channel, I guess, begs the question of whether these are new use cases. If something's already solved, are we not tracking those use cases anymore? I guess I'm just looking for clarification of whether we should remove data channel there. Also, when we say "ultra-low latency," it's still a bit confusing to me, because WebRTC is usually less than 100 milliseconds.
F
I assumed it was potentially a bit more latency than WebRTC because of the fan-out. Usually when I think of WebRTC, I think of the packets traveling from the conferencing unit to the browser without being peer-to-peered along the way. So that's why I made it 500 milliseconds — it was also to try to clarify that.
A
So, a question...
F
Yeah, I mean, as an example, you could de-packetize the RTP and shuffle it around as a frame. I don't know if that's a good thing or a bad thing — it probably adds latency — but then that wouldn't be RTP; it would be more like Peter's WebCodecs thing. It wouldn't have the same kind of latency that CMAF or MSE would, though, so that might work.
L
I can answer that, because I've been playing with both. They're both RTP — I don't know that there's a point at which they aren't, I mean, at some point...
L
Pretty hard to understand, I'm afraid, but...
L
Well, yeah, I mean, it was a while ago when I raised it, but I think there's still a genuine interest in this, and in low enough latency for these environments. And I think the only way to achieve it is RTP, with as little manipulation as possible on the way through. We've just built a thing for pit crews to watch over the shoulder of drivers in race cars, and they really know...
L
I mean, they know if you're a quarter of a second behind, but they can live with that, because they've got real-time radios and less-than-real-time video, so it's really perceptible. And then the fan-out is that other people in the crew also want to watch it...
L
...maybe up in the stands or whatever. So it's a bit of a niche that we're dealing with — I mean, Dolby is obviously doing something on a much bigger scale — but in that environment we don't want to have a server there. We don't want massive infrastructure to have to deliver to a pit, or backstage, or wherever it ends up getting used — backstage at the Nordstrom house or something like that.
L
So maybe phrase the question somewhere on a PR, and I can answer it in detail. But that's my current feeling — I think there's still a thing here that's good to have in the document.
F
Okay, yeah — if you can review the PR, that would be great.
F
Well, I think this section was on next steps for the use cases.
F
Okay, so this is the last section before we wrap up. I guess it was intended just to keep track of things that we need to do going forward — next steps, actions, CfCs, stuff like that — as a result of what we've talked about today.
E
I think individual presenters probably know their own action items to some extent. There was some discussion of new documents, though, and hopefully that's captured in the minutes.