From YouTube: WEBRTC WG meeting 2023-07-18
B: So, a little bit about this meeting: we have a link to the slides on the wiki, we do need to get a volunteer for note-taking, and the meeting is being recorded. The recording will be public. We have a volunteer note taker.

B: Okay, thank you, Harald, all right. So, a little bit about the code of conduct: we operate under the W3C Code of Ethics and Professional Conduct, so let's try to keep the conversations cordial and professional. A few things about the meetings, you probably already know this: it's recorded; you can raise your hand to get into the speaker queue and lower your hand to get out of it; we'll call on you, and please wait for microphone access to be granted.
B: If you jump the queue we'll mute you, and if you can, use a headset or an echo-cancelling speakerbox. All right, just a little bit about document status: just because something is in the WebRTC repo doesn't mean it's been adopted; we use the call-for-adoption process to determine that. Editors' drafts don't represent consensus, but working group drafts do, and sometimes we merge PRs that lack consensus; we attach a note when we do, and we'll have some examples of that later today with the use cases.
B: Here's the agenda for the meeting. We're going to do extended use cases, then we'll have Palak on setMetadata, Youenn will talk about encoded transform, we have Sameer and Peter with the ICE controller, and then Jan-Ivar will do device ID and permissions query. That's today. All right, so we'll start off with the extended use cases.
B: So, as we talked about in May, there are a number of things we're trying to do to upgrade what we now call the WebRTC extended use cases document, and one of them is to work through requirements that we got CfCs on, that got comments, but where we haven't yet resolved the comments. So today we're going to focus in particular on the low latency streaming use cases in section 3.2. We had a CfC that concluded on January 16th, and we got six responses: five in support, one no opinion.
B: So there was consensus in general for the use cases, but there were questions about them that we haven't resolved yet. That includes open issue 103, which Youenn filed, and we'll be talking a lot about that, and also open issue 94, relating to gamepad input. And then I think Palak will be talking about some issues we've found in the low latency broadcast as well, and some potential solutions. So, to remind you all what we're talking about: section 3.2 has two parts, one of which is about game streaming.
B
So
this
is
the
the
game
streaming
use
case,
and
then
there
is
a
section
three
to
two
which
is
about
low
latency
broadcast
with
fan
out.
The
idea
here
is
we
have
a
low
latency
broadcast,
we'll
talk
about
what
the
meaning
of
that,
what
the
word
low
latency
means,
and
then
we
have
some
kind
of
caching
or
fan
out
system,
that's
operating
as
well.
B: So these are the two use cases, and we're trying to figure out exactly what the requirements are, and we have a bunch of issues and potentially PRs to address the issues, so we'll go over these. The issues are 94 and 103, and then we have a whole bunch of PRs.
B: We'll talk about PRs that try to address all these issues; that's what's on the agenda for this portion. So issue 94 was about improvements for gamepad input. It was mentioned, I forget at what meeting, that there were issues with gamepads, and so one of the things to do to resolve this is to figure out, hey, what are these issues, and also what can the WebRTC working group in particular do about them. So I went over the Gamepad spec, and there actually is an issue and a PR relating to events.
B: So the idea is that Gamepad would fire events instead of using a polling model. That was filed as issue 4 on the Gamepad API, and there's a PR 152 to add gamepad input events. This is all in the Gamepad spec, I guess the Gamepad working group; it has nothing to do with us directly. But looking at this PR 152, it does have a page on Chrome Status, called "Gamepad button and axis events"; there's some documentation which was published in August of 2021, and there's a Chromium tracking issue.
B: But it doesn't look like it shipped, because the last entry was August 20th, 2021. So I had some questions; I'm hoping some of you understand or know about this. One question is: has the implementation of this gamepad event stalled, and is there any chance of revival? And also I'm trying to figure out, what does this have to do with the WebRTC working group? Is there anything the WebRTC working group can do to improve gamepad input, or is it just this PR, which has nothing to do with us? And is there any reason that we would add a requirement to section 3.2.1 relating to gamepad, or some other editorial change, or should we just close the issue? So I'd like to have a little bit of discussion on that. Does anybody have an opinion? Anybody know anything about this?
D: Just to say, I don't see any relationship with what we are actually working on in terms of use cases, right? Gamepad is something that can be used with WebRTC, but just like it can probably be used with, say, WebSocket as well, right? So I think it's good if we can convey, as a working group, maybe a position saying hey, it's important that this is fixed, but then it's up to the Gamepad input working group to actually do it. So that's, I guess, all we can do there.
D: Not really; I would say we keep it to this group, the WebRTC working group, but since we are aware that it's important, we just convey this message to the relevant working group, and that's it.
B: Okay, thank you, Youenn, we'll take that as advice. All right. So then the other issue is issue 103, which Youenn filed, and we'll be talking about this at two levels. One is the specific questions: Youenn asked about N37, N38 and N39, and we'll talk about those specifically. But Youenn also asked meta questions about the way the document handles things, and we'll talk about those as well, because one of the things in this email is that some use cases are deployed, and whether they qualify as NV.
B: So one big question that came up, in Tim Panton's comments as well, is: how do I distinguish the use cases that are doable, and if they are doable, with what APIs, versus use cases that aren't doable yet, which Tim calls aspirational? It's not clear from the document as it is. And then Youenn also asks: it seems some of the requirements are met, but the great question is, how do you know, reading this, which ones are met, and if so, by what?
B
And
then
you
want
to
ask
a
question
about
low
latency,
what
it
means
so
we're
going
to
try
to
talk
about
all
of
this
feedback
we'll
get
to
the
meta
issues
first,
because
they
relate
to
the
whole
structure
of
the
document
and
what
it's
trying
to
convey
all
right.
So
one
question
is
the
problem
with
some
of
these
requirements
is
that
you
can't
really
tell
if
in
a
requirement
what
would
need
it
and
and
what
is
related
to
it.
B: So as a general principle, it seems to make sense that requirements should be specific and actionable. That is, you can somehow determine whether the requirement is met and, if so, by what; you should be able to say, hey, I have an API proposal, and determine whether that actually addresses the requirement. So the requirement has to be specific and actionable enough. So one question which was raised is, you know:
B
As
an
example,
does
a
certain
requirement
require
new
apis
or
not,
and
if
so,
what
apis
have
been
proposed
that
relate
to
it?
And
one
thing
about
this
is
and
we're
going
to
have
a
discussion
about
this.
In
a
bit
is
absolute
performance.
Requirements
may
not
be
actionable,
so
you
say:
I
need
to
be
able
to
handle
4K
resolution.
For
example.
Well,
the
question
comes
up,
is
hey,
just
have
better
Hardware
or
just
improve
the
software
implementation
efficiency,
and
what
has
this
got
to
do
with
API
changes?
B: So an example is, Youenn asked a question about N38, and in my opinion that's met by jitterBufferTarget, but there's nothing in the document that really says that explicitly, so that's interesting.
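As an aside, jitterBufferTarget is the webrtc-extensions attribute being referred to here. A minimal sketch of how an application would use it, assuming a browser that implements it and an already-negotiated peer connection `pc`:

```js
// Find the video receiver on an existing RTCPeerConnection.
const receiver = pc.getReceivers().find((r) => r.track.kind === "video");
// Ask the receiver's jitter buffer to hold roughly 500 ms of media
// before playout, trading extra latency for smoothness; the user agent
// is allowed to clamp this value.
receiver.jitterBufferTarget = 500;
```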
B: And then, if it's not met by anything, is there at least an issue that you can look at? Then you could say: oh, this issue was resolved because we proposed an API; so you'd at least be able to track whether the issue was resolved or not. And the end result of all this is, you know, people are looking at the use cases and they're not sure: is this something we're trying to do? Is it something we are able to do? They don't understand it.
B: So an example, and we'll talk about this in a bit, is that Henrik filed an issue on exposing decode errors and software fallback, and that's an example of an issue which is potentially related to a use case, that we could track and say: hey, do we have a proposal for this? Eventually there might be a PR, and the PR might be merged, and you could say, hey, this requirement is actually fulfilled, and here's why. And then the last question is:
B
Maybe
maybe
we
get
the
pr,
maybe
it
gets
implemented,
but
is
there
a
test,
so
you
know
people
will
ask
a
question:
is
this
actually
supported
in
any
browser?
Does
it
really
work,
and
so
you'd
like
to
be
able
to
trace
this
eventually
to
a
test
and
look
at
a
test
result
and
say
it
works
on
this
and
that
browser?
B: So another question that comes up is: what is the relationship of a use case to issues? As I mentioned, in some cases the issues will track the solution to a requirement. Right now there aren't any links to issues, so we can't really do that, and so we can't track the resolution.
B
Another
thing
is
the
document
doesn't
link
to
API
proposals,
so
you
can't
really
tell
whether
the
requirement's
satisfied
or
not
or
what
proposal
might
satisfy
it
and
so
there's
a
question
about.
Should
the
use
case
link
to
the
proposals
that
are
related
to
it
and
then
the
question
is
the
relationship
to
the
explainers
should
explain,
is
link
back
to
the
use
cases
that
they're
trying
to
satisfy,
but
I'd
like
to
open
up
some
discussion.
C: You can certainly verify that no configuration has been demonstrated to satisfy the requirements, so you can demonstrate failure, right? And we can demonstrate that some configuration satisfies a performance requirement. So, for instance, if we set a delay target of half a second and the best we can do is five seconds, then obviously we have not satisfied it; but if a high-end PC is able to do 0.5 seconds and an Android phone is able to do one second, then I would say that we still satisfy the half-second target.

C: So you shouldn't have to be able to do everything on anything, but you should be able to at least demonstrate that it's possible to implement the use case. When it comes to linking, I think that having API proposals linked to use cases is a good thing. Linking in the other direction seems questionable; I think we might keep a database of links, but rewriting the use case every time we have a proposal to solve it seems wrong.
E: Sorry, I was muted. Yeah, so I agree with Harald; use cases, in my mind, are more of an input to the process than an output. So I like the idea of links going to the use cases and not going out of the use cases. You also had some other good general questions about how to deal with use cases, which I don't have a good answer for, but I wanted to pick on the terminology of streaming, because that's a little confusing, so I might as well mention it now. In my mind anyway, and people correct me if you have a different opinion, I interpret low latency streaming to actually mean high latency WebRTC. WebRTC is a place for real-time conversation back and forth, whereas with streaming as a general term I traditionally think of, you know, a content creator broadcasting a stream that has a much higher delay, and "low latency" is relative to that; to a streamer, WebRTC would be ultra low latency. So that's one part.
E: The other part was the term streaming itself, which I think we'd also like to clarify, because these use cases are often using data channels, but jitterBufferTarget is a feature of the audio and video receiver, right?
B: We're going to get to that in a bit, because that's also confusing, and it makes the requirements not actionable, because you don't know what they mean. All right, so thank you for the feedback on those meta questions; I think that will be helpful. So then Tim asked about this slide: hey, what do we do with the aspirational stuff, the stuff that has requirements but maybe no proposals?
B
So
it's
not
something
you
can
do,
and
that
would
be
like
a
hope
and
a
dream
and
he
asked
whether
they're
out
of
scope
and,
in
my
opinion
at
least
they're,
not
out
of
scope,
there's
some
there
can
be
use
cases
that
you
actually
can't
do
at
the
moment
that
we
aspire
to
do
I,
don't
know
if
anyone
has
an
opinion
about
that.
E: Yeah, I think use cases for this working group are use cases the working group will commit to solving, so I think it's very important to not include things unless we plan to actually solve them. Does that make sense?
B: Youenn, your opinion?

D: Yeah, I'm about there, I think. First, for dreams to be in NV you need to have requirements that are actionable, and if they're actionable and we have a rough idea of APIs, then we are not far from a proposal, so that's good. But if it's not actionable, if it's really hard to think of any good solution even in the midterm, then I'm not sure it's worth having it in the document. So the dividing line there is between dreams that can become reality and dreams that cannot become reality, and I'm only interested in the first kind.
C: Yeah, I'm not sure whether I'm arguing the opposite or not, but when we try to do things that we haven't done before, failure is always an option. I would kind of agree that we shouldn't put things in the document if we seriously think that there's no hope of reaching them, but I don't want use cases to be something that we can only touch if we have a complete proposal to solve them now.
B
Okay,
thank
you
all
right.
So
now
we're
going
to
get
into
this
specific,
specific
proposals
that
we
have
to
try
to
address
issue
103,
which
you
unfiled
so
and
other
other
things
that
we
talked
about.
So
this
is
PR
116.
Last
meeting
we
talked
about
that.
We
had
this
requirement
in
22.
That
talked
specifically
about
media
manipulation
using
a
GPU,
and
we
talk
about
how
there's
lots
of
other
ways
to
do
efficient
medium
manipulation.
Maybe-
and
you
could
have
an
mpu
you
could
do-
was
from
Cindy.
B: So why was this focused on just the GPU? Our proposal is simply to eliminate the specific reference to the GPU and say it must be possible to do efficient media manipulation in worker threads.
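For context, one concrete way to do media manipulation off the main thread today is the Chromium "breakout box" API mentioned later in this discussion. A minimal sketch, assuming a browser with MediaStreamTrackProcessor / MediaStreamTrackGenerator support; "process-worker.js" is a hypothetical worker script name:

```js
// Capture a camera track and set up breakout-box endpoints.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getVideoTracks();
const processor = new MediaStreamTrackProcessor({ track });
const generator = new MediaStreamTrackGenerator({ kind: "video" });

// The hypothetical worker pipes processor.readable through a
// frame-manipulating TransformStream into generator.writable,
// entirely off the main thread (streams are transferable).
const worker = new Worker("process-worker.js");
worker.postMessage(
  { readable: processor.readable, writable: generator.writable },
  [processor.readable, generator.writable]
);
// `generator` now behaves like a normal video MediaStreamTrack.
```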
B: Okay, I see Harald's thumbs up, and Youenn's thumbs up. Okay, it seems like people think this is a useful change. All right, so that's PR 116. The next one is PR 117, and this is about requirement N37, which says it must be possible for the user agent's receive pipeline to process video at high resolution and frame rate. Youenn pointed out it's not really particularly actionable: okay, great, but what do we need? How do you know what APIs actually relate to this?
B: So we've modified it to really focus on one thing, at least, that could actually help this happen, which is to have better support for hardware acceleration, particularly hardware decode, because this is in the streaming area. So we ask for the application to be able to determine whether hardware decode is supported; in my opinion that would be solved by Media Capabilities, which is already there. But you also want to know: will you be able to receive events to determine whether hardware decode failed for some reason and you had a failover to software? That relates to Henrik's issue 146, which is about exposing decode errors and software fallback as an event. So our proposal is to replace N37 with new text, and there was a question about whether it made sense to link to Henrik's issue.
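As a sketch of the first half of that: the existing Media Capabilities API can already answer "is this decode likely to be hardware accelerated?" via its powerEfficient flag. The configuration values here are purely illustrative:

```js
// Ask Media Capabilities about a WebRTC-style 4K decode.
const info = await navigator.mediaCapabilities.decodingInfo({
  type: "webrtc",
  video: {
    contentType: "video/VP8",
    width: 3840,
    height: 2160,
    bitrate: 10_000_000, // bits per second
    framerate: 60,
  },
});
// powerEfficient is the closest available proxy for hardware decode;
// there is currently no event for a later fallback to software.
console.log(info.supported, info.smooth, info.powerEfficient);
```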
B: Okay, in the queue we have Jan-Ivar.

E: Yeah, so I don't know if I like this change, because whether you have hardware decode or not seems like a means to an end. It sounds like the earlier requirement was actually more aimed at making sure that we had receive pipelines that didn't have excessive copies and stuff like that, and the new requirement seems quite different. So I'm not sure.
B: Yeah, the problem with the copy requirement is that it really didn't relate to this working group. For example, there has been lots of work on copying in WebCodecs, for example the color conversion work, being able to go from a VideoFrame to a WebGPU external texture, but none of that has anything to do with us. In WebRTC, I guess you could talk about the efficiency of breakout-box streams versus transferable MediaStreamTracks or something, but it didn't seem actionable in this working group. And then there was the question of what you mean by high resolution, and how you would determine whether it was satisfied. So those are some of the problems with the existing requirement.
E: Right, but maybe there are two issues here: one is the removal of one requirement, and the other is the addition of another requirement. Because it's not clear to me that, in order for a user agent to receive and process video at high resolution and frame rate, the application needs to be able to determine whether hardware decode was on or off.

B: We probably need more reviews of this PR to figure out what exactly it should say, so I think we'll move on to the next one; but please, if you want to look at this, review it and give us your ideas of what you think it should say. Okay, so the next one is PR 120.
B: So this relates to the streaming-with-fan-out use case, and this one was confusing; we'll talk more about all the confusion that's in it, and it's quite a bit of confusion, because there are really two distinct things in there. As I think Jan-Ivar just talked about, there's a low latency case and then there's the ultra low latency case, and they're actually slightly different. So a low latency example, and we'll talk about this more, would be something like a webinar, where really, unless someone's asking a question, you don't need to have ultra low latency; it's not interactive. And this use case has been implemented using the data channel on the main thread. The problem with that is that it encountered a lot of latency jank on the main thread, so the solution to that was support for data channel in workers and support for partial reliability.
B: So don't just send all the messages ordered and reliable; do partial reliability, or potentially even unreliable and unordered, and do your own forward error correction or something like that. So we're proposing to add requirements N13 and N16 to this use case for the low latency scenario using data channel. These requirements are already there, so it's not anything new, but hopefully that will clarify one of Youenn's questions about what exactly we're talking about and whether the support is there.
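A minimal sketch of the partial-reliability options being referred to, using the standard RTCDataChannel init dictionary on an assumed peer connection `pc`; note that maxRetransmits and maxPacketLifeTime are mutually exclusive, hence two separate channels:

```js
// Unreliable and unordered: never retransmit, let the application
// handle loss itself (e.g. with its own forward error correction).
const lossy = pc.createDataChannel("media", {
  ordered: false,
  maxRetransmits: 0,
});

// Partially reliable: retransmit, but give up after ~200 ms so a lost
// packet cannot stall the stream indefinitely.
const timed = pc.createDataChannel("media-timed", {
  ordered: false,
  maxPacketLifeTime: 200,
});
```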
B: So an example of this would be the combination of data channel in workers and, for example, the new Managed MediaSource APIs, which also support workers. That would be a solution for this particular portion of it.

E: Yeah, I was a little surprised to find that it mentions service workers, because I don't think we have a proposal for that at the moment. There's workers, there's shared workers, and there's service workers, right?
B: We do not have a proposal for that. For N13 I'm using the same requirement as in another use case that did require service workers; I'm not sure they're required for this, but I just copied the same requirement, so it's existing language. Yeah, Harald?

C: And I'm a little confused about this one, because it presupposes that the only possible solution is data channels.
B: Exactly; the next slide tries to address that particular problem as well. The problem here, Harald, is that it's mixing two things; Tim pointed this out in review of the PR. It's mixing low latency use cases with ultra low latency use cases, which actually probably can't be solved with the data channel solution, because the latency is too high with its congestion control. But we'll talk about that.
D: Yeah, to come back to the point about service workers: in WebRTC Extensions, data channels are exposed with a Web IDL "Worker" annotation, which means dedicated worker, shared worker and service worker, so N13 is already supported by WebRTC Extensions.
B: Yeah. So let me get to Harald's question, which is on the next slide. All right. So there's a bigger confusion here in section 3.2.2: Youenn asked to clarify the meaning of low latency, and Tim pointed out that there are actually two fairly distinct uses. There are uses like a webinar or a company meeting, which would be more low latency, meaning less than a second; if you want to do that, that could use data channel for fan-out and would require N13 and N16 to get rid of extreme jank and handle it. But if you're trying to do something like auctions or betting, your latency requirement is going to be a lot tighter.
B: It's going to be less than 500 milliseconds, and that's very popular for WebRTC; like Harald pointed out, you'd use RTP for that scenario, and if that's what you're doing, the data channel fan-out probably is not going to work even with N13 and N16. So we have requirement N39, which covers RTP fan-out, and then Palak is going to talk about N43 for that. So I think the problem here is this mix we've got.
D: So you think that, since it's mixing two different use cases, the solution should be to split?

B: Yeah, that would probably be it, because the problem is you'd have this mishmash of requirements and you wouldn't be able to tell what was for what. So that's a good suggestion: we want to split 3.2.2 into the low and the ultra low case, so the ultra low one would get requirements N39 and N43, potentially, and the low one would get N13 and N16. Does that make sense, Harald?
C: Okay; if you have a solution that works well below 500 milliseconds, mandating that we need to work on a separate solution that has lower performance seems a bit bizarre.

B: Okay, so we'll come back next time and try to sort this all out. Okay, we still have a couple more slides here. All right, okay, so we'll put off discussion of the PR and work through it again.
B: So now we have PR 119, and this relates to user prompts. There are two rationales for this. In gaming, if the game doesn't involve chat between the users, there's no audio and video chatting, and it's a little bit astonishing to be asked to give permission for your microphone and camera; like, why, what do you need this for? And then there's conventional streaming; you know, if you're watching a movie, for example.
B
Today
you
know
on
your
favorite
streaming
service,
you
don't
get
a
permission,
prompt
for
a
microphone
and
Camera
right
and
and
if
you
did
you'd,
probably
go
looking
through
your
mobile
device
settings
and
try
to
figure
out
why
this?
What
what
it
with
this
app
required
and
why
it
needed
your
microphone,
a
camera
and
maybe
take
that
remove
the
permissions.
So
why
is
this
different
with
low
latency
streaming?
B
No,
it
might
make
sense,
for
example,
in
a
webinar
if
you're
asking
a
question
or
maybe
a
compass
or
something
I
want
to
go
to
the
mic,
and
then
you
could
be
asked
for
the
permission.
But
if
you're
just
sitting
there
watching
your
CEO
or
something
in
a
company
meeting
to
get
that
permission,
prompt
is
is
fairly
astonishing.
D: If it's that that we're targeting, then it would be good to make it much clearer, because as it is, N36 is already really broad: I can send media data, or I can receive media, without any prompt. And that's working today in all browsers, so it's already actionable. But yeah.
B: Yeah, so on whether it's actionable: for conventional streaming we already can do it. The question is, for these low latency things using WebRTC, do we suddenly have these requirements which make no sense within the context of streaming? It's still a streaming app, it's just lower latency, so how do you get into this weird world where you need a microphone and a camera?
D
Is
the
point
is
the
point
that,
because
camera
and
microphone
like
maybe
a
process,
will
be
a
higher
priority
and
it's
not
or
is
it
so
it's
or
is
it
the
case
that
in
faces
networking
is
not
that
good
when
there's
no
camera
in
America
these
other
questions
I'd
like
to
understand
because
there
in
the
requirement
it's
not
care
at
all
to
me
what
we
are
trying
to
do,
okay
to
get
to.
E
Think
go
ahead:
okay,
I'm!
Sorry,
yes,
so
I
don't
think
it's
appropriate
to
have
a
requirement
that
user
agents
are
forbidden
from
adding
prompts
I.
Don't
think
that
is
much.
For
instance,
if
you
were
to
view
a
movie
in
Firefox,
you
might
see
a
prompt.
Do
you
wish
to
download
view
DRM
content,
so
we
would
already
be
in
violation
of
that
so
I
think
user
agents
would
have
should
be
allowed
to
do
whatever
they
feel
is
best
for
their
user.
So
that's
one
but
I
also
sympathize
with
the
idea.
E
If
this
is
about
RFC,
8828
and
Ice,
I
totally
agree
that
microphone
tying,
that
to
microphone
permission
is
or
a
camera
permission
is,
is
kind
of
a
poor
solution.
So
I
welcome
explorations
of
maybe
adding
different
types
of
prompts
or
different
ways
to
solve
that
problem
particular
problem.
If
that's
what
it
is,
but
I
don't
know
if
that's
as
a
requirement,
I
don't
think
we
can
say
that
no
browser
prompts
is
the
requirement.
I: So basically, cloud gaming is very similar to the general WebRTC application, but we would like to highlight that there is more continuous visual feedback to the user input. With respect to latency, we just discussed glass-to-glass latency, but for cloud gaming you'd prefer a click-to-pixel latency, and the latency level should be smaller than for the general application. Also, it is much more important to have a consistent latency than in any other use case. So the combination of high complexity, ultra low latency and faster recovery might be a good description of the cloud gaming use case. Next slide.
I: Yeah, the first item is recovery using non-keyframes, and this might be useful in case of very bad network conditions. If we don't have a keyframe, then the application has to wait for one, but that is not very workable in a bad network condition, so we would like to have an option to recover the stream using non-keyframes.
I
The
second
item
is
close-up
encoder
and
decoder.
Synchronously
notification.
So
sometimes
the
current
decoding
is
depends
on
the
previous
frame.
I
But
when
the
previous
frame
got
lost,
there
should
be
a
notification
for
the
decoder
to
re
map,
the
previous
frame,
so
that
we
can
recover
the
stream
vastly
and
the
third
one
is
for
having
a
more
configuration
on
the
transmission
interval
of
rtcp
Transport
feed,
Bank
message
so
that
we
can
have
more
adaptive
latency
and
the
last
one
is
to
cover
the
GTO
control
to
consider
the
client
usual
agent
capability,
for
example,
the
rendering
Pipeline
and
CPU
consumption.
So
we
can
estimate
the
are
best
decoding
or
decoding
performance
of
the
client.
E: Yeah, so I like gaming, so I don't see a particular problem with this. I had some question about N49: which peer connections should generate signals indicating something about the encoder? Do you imagine that being receiver side or sender side? You also mentioned RTCP.

I: Oh yes, it can be both.
B
Yeah
I
I
do
have
a
question
about
this
because
rpsi,
you
know,
is
defined,
it's
not
implemented
in
webrtc,
but
there's
nothing
that
prevents
a
browser
from
implementing
rpsi.
B
B
A
B
Correct,
okay
but
I
guess
we
also
have
un
U.N.
D: Yeah, in general, if we knew that some of these things work in native applications and they do not work in the browser, then adding requirements like this is good, and they seem actionable, so that's good. I wonder whether, for instance, N51 is something where we are already good, or whether it's not; but we can discuss that at review time.
G: There would probably be two levels of this that the group would be interested in. One, what we are saying, is just recovery using non-keyframes, which means it could result in corruption; some applications may be able to tolerate that and some may not. The one that Bernard mentioned, using alternate reference frames, wouldn't result in corruption, and that would be an example of recovery without corruption; whereas I think N48 is meant to cover mechanisms beyond that, which recover without corruption.
B: So I think we're out of time for this segment, but what I would like to recommend for PR 118 is to continue discussion in GitHub, and I think we might want to give this more time at a future meeting, to really get into not just the API issues but all of the lower level RTP functionality that would be needed to do this. Because I think there are a lot of questions, at least in my mind, about whether libwebrtc has all the underlying functionality that would be needed for this stuff; I don't think it does at the moment.
B
Yeah,
that's
I,
think
the
question
we
need
to
answer.
I
mean
some
of
the
stuff
is
in
ITF
like
we
have
slice
loss.
We
have
rpsi
they're,
not
implemented
typically
for
new
codecs
anymore
and
that's
an
issue
like
we
don't
like
in
ab1.
We
didn't
bother
to
specify
how
that
even
worked.
B
So
that's
kind
of
a
question
about
whether
we
made
a
mistake
doing
that
not
putting
it
into
the
new
stuff
but
yeah.
There's
a
good.
That's
a
good
question
Harold.
Is
there
a
need
for
new
feedback
or
is
he
is?
Are
we
just
asking
to
implement
the
existing
feedback
in.
J: Yes, hi everyone, I'm Palak, and today I want to talk about adding a setMetadata function for encoded frames, to support redundant receiving peer connections. Next slide, please. So I want to start by discussing the existing use case, low latency broadcast with fan-out, which we have already discussed earlier in the meeting today, and I want to specifically talk about using P2P relays for large scale streaming applications. So let's take a look at an example scenario on the next slide.
slide.
J
So
in
this
example
meeting
you
have
multiple
participants
who
are
sending
their
encoder
video
streams
to
multiple
sfus
of
servers
which
stem
cells
are
setting
these
videos
to
a
P2P
Network.
Now
this
P2P
Network
that
we
have
here
is
more
like
a
is
a
tree
such
that
each
receiving
pure
is
only
getting
is
only
getting
frames
from
another
pure
now.
We
know
that
peers
are
unreliable
and
can
lead
at
any
time
which
makes
all
the
other
peers,
depending
on
these
peers,
a
bit
Yeah
so
they've.
J: So how can we solve this? I think we can just add redundant communication channels for peers so that they rely less on any single peer: instead of just depending on one peer, they can depend on multiple. Next slide. And this is how adding redundant peer connections to your peer network would look. Let's take the previous peers A, B and C; in this case peer C is receiving video frames from both A and B.
J: And let's say C is only using the frames from A at the moment, for decoding and also for relaying. Now let's say A dies, or fails, or just leaves. In that case we want to seamlessly switch to B to do the same things. However, we need to understand that, though we might be receiving the same encoded frame payload, it might not have the same metadata, because of the different network paths that these encoded frames have taken; which means that you cannot easily switch from A to B, and that will again cause a disruption for C, causing delays. Next slide.
J: So here you have two receiving peer connections, both of which are getting the same encoded frame payload, originating from the same original capture, but with different metadata because of the different network paths they've taken. What we do here is use encoded transforms to read encoded frames from these two receiving peer connections, and then, with the new setMetadata function, the JavaScript can rewrite the metadata, take care of disposing of the duplicate, and then pass the processed stream on, either for forwarding to another peer or for local rendering. Next slide.
J: And this is how the JavaScript code would look. You have two receiving peers, from which we extract the readers and pass them to the function transferFrames. We also have a relay PC, from which we get the writer to write the processed frames into; we call this the relay PC writer. The function transferFrames reads frames from the two receiving PC readers and sets the metadata such that two encoded frames with the same payload have the same metadata; after this, the processed frame can be written to the relay PC writer, if that has not already been done for an encoded frame with the same payload.
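A rough sketch of the transferFrames function as described. The reader/writer plumbing follows the slide, but setMetadata is the proposal under discussion, not a shipped API, and normalizeMetadata and the timestamp-based de-duplication key are hypothetical stand-ins. Note also, as raised just below, that the current encoded transform spec does not allow writing a frame vended by one peer connection into another:

```js
// Sketch: merge two redundant encoded-frame streams into one relay writer.
async function transferFrames(readerA, readerB, relayWriter) {
  const forwarded = new Set(); // identities of payloads already relayed

  async function pump(reader) {
    for (;;) {
      const { value: frame, done } = await reader.read();
      if (done) return;
      const key = frame.timestamp; // hypothetical stand-in for payload identity
      if (forwarded.has(key)) continue; // dispose of the duplicate copy
      forwarded.add(key);
      // Rewrite frame ID / dependencies so frames from either path look
      // identical downstream. normalizeMetadata is hypothetical, and
      // setMetadata is the API change proposed on this slide.
      frame.setMetadata(normalizeMetadata(frame.getMetadata()));
      await relayWriter.write(frame);
    }
  }
  await Promise.all([pump(readerA), pump(readerB)]);
}
```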
J: To support this use case we propose adding requirement N43 to section 3.2.2; N43 is already in the list of requirements, but it's not a requirement for this section yet, and it says to allow modification of metadata for encoded frames. Next slide. And we also propose two API changes: we propose making the timestamp of RTCEncodedVideoFrame modifiable, and also adding the function setMetadata.
J: We don't, I don't think we want that, so we can just disallow changing the SSRC in the spec. We mostly want to change just the frame ID and dependencies. Okay.
D: setMetadata would apply to both senders and receivers, so I wonder whether you thought about this. And the second thing I'd like to say is that in the current model of WebRTC encoded transform, it's a transform, so you can only write a frame that was given from the reader that is related to the writer. In your example, you're taking a frame from a reader and sending it to another writer which is not connected to it; it's not the same receiver, so it's not allowed in the spec currently.
D
So
that's
that
will
be
a
change
that
we
should
discuss.
First,
that's
my
understanding
there
and
when,
when
we
discussed
that
in
in
the
past,
one
point
that
was
made
was
that
in
general,
what
you're
training
to
do
there
can
be
done
with
a
web
codec.
So
you
you
get
the
frames
from
and
credit
transform
you
get
the
data
and
then
you
use
a
new
user
decoder
and-
and
you
do
your
rendering
on
your
own,
so
it's
in
what
you're
trying
to
get
there
is
basically
a
digital
buffer
for
free.
D
Somehow
so
we
use
a
teacher,
but
first
and
other
proposals
have
been
through
it,
but
not
yet
concretized
where
they
would
be
like
registered
with
the
API.
And
then
you
could
realize
what
your
but
what
you
want
to
do
with
apis
plus
web
codec
plus
candles.
D: I'm not exactly sure how you would actually do relaying there, for instance; whether what can be done with that is good enough for this kind of thing. The initial constraint, that you cannot enqueue a frame in the writer that was not vended by the corresponding reader, is a restriction that currently allows us not to think about these issues: we know that it's working if a transform is not, say, increasing the size too much. So that's why I think the preliminary question would be: do we really want to enter that world? And you have a use case here that is a useful one to begin with. So that's what I would try to focus on first.
J
Okay,
I
think
around
the
first
question
of
sending
and
receiving
I
think
the
most
you
consumer
receiving,
but
we
can
do
the
changes
for
sending
cell
as
well,
but
I
guess
right
now
in
this
pack
it
doesn't
differentiate
between
artist,
single
trains,
do
not
differentiate
between
senders
or
receivers
like
they
have
the
same
interface.
So
it
is
mind,
reading
and
yeah
I
think
if
it
is
not
allowed
in
respect
so
I
guess
that
is
the
first
thing.
Yeah.
F
But
I
mean
I,
don't
we
have
the
Restriction
but
I
don't
see
that
there's
any
fundamental
problem
with
allowing
passing
a
frame
from
from
I
mean
rather
than
calling
transform
from
one
connection
to
the
other
I
mean
we
might
have
a
restriction,
but
we
can.
We
can
just
remove
that
restriction
in
respect
if
there's
no
fundamental
reason
to
have.
D
Reason
was
that
there
was
no
use
case
for
it
and
it
allowed
to
have
a
very
simple
model
that
we
that
we
know
was
working.
We
do
not.
We
did
not
need
to
spend
the
time
on
investigating
whether
starting
to
expose
that
would
cause
some
issues,
and
we
can
certainly
think
of
Reviving
the
discussion.
E: Yeah, so I second Youenn's concern. I want to say I appreciate the use case, and I think that's something we should look at solving; I'm just not sure that the existing APIs are the right fit. The way it was accomplished here seems problematic, in that the intended use for the API was to encode and decode a single stream. It seems a little weird that when you do this fallback, a receiver from one peer connection is actually giving me data from another; it seems a little hacky, right? And what happens then in all the corner cases? Also, if we want the behavior, we need to standardize it and specify it, and there seem to be a lot of corner cases here, so it might be better to have a different API shape that more directly addresses this. And the other part was: if we're going to touch a video frame, we have some comments, and this is a more pedantic comment, that we want to make sure we align with WebCodecs at some point, which also has encoded video frames and metadata. So that's the detail; that's my comment. Thanks.
C: Yeah, about the restriction to being within one PC: all the use cases we've been arguing about for the last year or more, the one-way media use cases, require being able to take frames out of a PC or insert them into a PC.

E: There was a call for consensus on one-way media use cases; this was back on February 7th. It was a summary of consensus on what the NV one-way media use cases are.
D: So we talked with Bernard and others about back pressure. Currently in WebRTC encoded transform, back pressure is actually disabled, and we got feedback that this was surprising, and the WebTransport working group also had feedback about back pressure, whether it should be used or not. So that's the issue here: in the current spec, and I believe in all implementations, back pressure is disabled. So let's start with the pipeline.
D
Encoder
it's
owned
by
the
user
agent
and
then
that's
where
you
had
readable
stream.
You
got
the
transport
rateable
stream,
you
go
to
the
network
and
then
then
you're
again
user
agent.
You
have
the
same
on
the
receiving
side.
So
in
typical
case,
where
everything
would
be
done
in
JavaScript-
and
let's
say
you
are
doing
like
a
video
encoding
in
canvas.
D
Can
that
generate
the
frame?
Then
you
encode
it.
Then
you
send
it
to
websockets.
For
instance,
you
actually
want
to
not
refer
too
much
data.
You
do
not
want
to
generate
a
lot
of
video
frames
because
otherwise
memory
will
blow
up
and
that's
where
back
pressure
is
shining
because
the
network
will
tell
the
transform,
but
it
might
be
two
flow.
D
So
the
transform
should
say:
hey,
okay,
I
need
to
say
my
readable
screen
and
then
my
video
frame
generator
that
hey
you
should
slow
down,
and
then
we
have
a
pi
that
is
optimized
in
terms
of
memory
and
and
in
terms
of
throughput,
so
that
that's
good
for
this
kind
of
use
cases.
In
our
case
the
network
is
lossy
and
we
do
not
want
to
to
wait
too
much.
So,
but
why
we
have
a
different
model,
but
in
any
case
next
slide
so
there.
D
The
first
thing
to
understand
is
whether
it's
important
to
say
anything
about
back
pressure,
and
we
can
note
that
back
pressure
is
observable
to
the
GF
platform.
For
instance,
if
a
writer
is
not
ready,
usually
people
doing
transform,
they
will
say:
okay,
I'm,
writing
and
I
will
await
the
right
to
do
the
next
right
or
I
will
await
your
writer
to
be
ready
to
do
the
next
pride,
and
that's
where
back
pressure
is
have
is
a
computation
is
happening.
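A minimal sketch of the pattern being described. Whether these awaits ever actually block is exactly what depends on the queuing strategy the user agent attaches to the streams, which is why back pressure is observable to script; the { readable, writable } pair is assumed to come from an encoded transform:

```js
const reader = transform.readable.getReader();
const writer = transform.writable.getWriter();

for (;;) {
  const { value: frame, done } = await reader.read();
  if (done) break;
  await writer.ready;        // resolves when the sink signals capacity
  await writer.write(frame); // under back pressure this can also wait
}
```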
D: The resolution of the write promise, or the resolution of the ready promise, will depend on the back pressure computation. So it's observable to JavaScript whether the user agent applies back pressure or not, and that's why we think it's important, for consistency between browsers, to be as specific as possible. Since currently browsers are aligned, and we might not see a good use case for back pressure, we prefer to say: okay, back pressure is disabled. It's consistent across browsers and there's no surprise for JavaScript developers. Okay, next slide. So let's say now that a transform is writing too much data, so there's too much data to go to the network.
D
That's
where,
in
the
case,
where
we're
using
websockets,
it's
actually
good
for
what
to
get
to
say,
hey
if
you
scroll
down
and
then
the
transform
can
can
do
that.
But
in
our
case,
if
a
transform
is
writing
too
much
data,
some
things
like
packets
will
be
dropped
either
directly
at
the
user
agent
level,
maybe
or
in
the
pipeline,
and
then
the
user
agent
will
be
notified
for
a
feedback
mechanism
that
it's
I'm,
putting
too
much
data.
D
So
at
some
point,
user
agent
will
know
that
too
much
data
is
sent
and
it
should
slow
down.
D: It should reduce what is being sent; for instance, one possibility for the user agent is to instruct the encoder to reduce throughput. In any case, as you see, the transform itself does not benefit from knowing that packets are dropped, because the only thing it can do is slow down writing data, and that's adding latency without helping the user agent much, which is the one controlling the encoder towards a really useful throughput. So in that case, since the user agent knows both the encoder and the network, the user agent is in a good position to apply some kind of adaptation without using the back pressure mechanism that goes through the transform. Next slide. Okay, so another case is if the transform is too slow, so it's not writing enough data, and then the readable stream keeps enqueuing and the size of the queue gets bigger. So there you could say: hey, back pressure.
D
It
would
be
good
because
the
encoder
can
can
maybe
do
something
there,
but
yeah
there.
It's
not
really
visible
to
the
JavaScript
transform
and
the
encoder
can
exactly
know
the
size
of
the
queue
of
your
double
string.
So
there's
no
need
for
streams
back
pressure
and
user
agent
can
do
what
they
want.
We
can
reduce
frame
rate
by
dropping
frame
supplier
encoder,
for
instance,
and
that's
good
so
again
in.
D: The user agent basically knows everything, and allowing the transform to know that something is not going well is not very helpful, because the transform has very few ways to do anything about it, especially since it's not the codec. The WebTransport spec put this knowledge in by using plus infinity as a valid high water mark, and that's the way we see new things going there. We have done it a different way, which is to say that frames have a size of zero, which is equivalent, but that is not very helpful in terms of understanding and in terms of messaging. So the proposal here would be editorial, in the sense that we would not change what implementations and specs are doing, but we would update the specification to use plus infinity, to align with how we should interact with the Streams spec, and mention in a note why we are using this design, noting the fact that it's a transform.
B: So I want to ask about the general principle here. I think what you're saying is that WebRTC uses RTP, which is unreliable, so you don't want to build up a queue. I'm just trying to think of the implication for something like WebTransport, which has both unreliable modes, like QUIC datagrams, and reliable streams, which can also be used in a partially reliable way. I guess what you're saying is that this would apply to the datagrams, because they're lossy, and maybe to partially reliable streams, but it might make sense to have back pressure on the reliable streams in WebTransport. Does that make sense?
D: Yeah, that seems like a good summary for WebTransport. Maybe there's a use case for WebTransport; since it's lower level, you do not control both ends. So maybe there's a use case for back pressure on datagrams; I'm not quite sure, frankly, because even if you have difficulties sending the packets to the network process that will actually send them over UDP, you can drop them at any point, even within the user agent. But if it's helpful to allow the web application to know that that's what's happening, say networking is good but, for some reason like CPU or whatever, some packets are dropped internally to the user agent, then maybe it could be useful, but I don't know. The difference here is that with WebTransport the JavaScript application is responsible for the generation: for instance, what bitrate am I targeting, what QP am I using for WebCodecs, and so on. In WebRTC there's currently no way for the JavaScript application to manage that. So maybe if the application at some point is able to do this, then we could revisit, but until then I don't think there's a need for back pressure. Peter?
C: With the current specification, when will the write promise resolve?

D: In the next microtask. There's no change of behavior there; it's just, somehow, editorial. That's how I framed it in the PR, but we wanted to mention the rationale and have a discussion about back pressure.
C
Also
so
I
think
that
by
leaving
it
in
this
state
with,
we
make
a
lot
of
applications,
including
all
the
one
100
ones
I've
been
talking
about
for
a
year
impossible
and
so
I
would
I
could
accept
that
this
isn't
a
total
change
and
it's
probably
easy
to
read
than
our
current
specification.
So
this
seems
okay,
but
we
should
really
solve
the
basic
problem
and
do
explicit
explicit
feedback
that
the
application
can
see.
So
I
did
a
proposal
at
idf-115,
but
haven't
been
able
to
follow
up
on
that.
D: Absolutely. We can expose things in the transformer, for instance, and that would be more beneficial; and if we have a good enough mechanism like that, then whether we use back pressure at the end of the day, I don't know which use cases would need it.
B: Okay, well, thank you, Youenn. So our next item, actually not the last item: we're on to the ICE controller API. Sameer?
L: Hi, next slide please, thank you. So today I'd like to talk about how we can allow the application to change the selected candidate pair that's being used for transport. From the previous discussions, the main questions around this revolve around: how can we do this in a manner that's compatible with RFC 8445, and how would selecting a different candidate pair fit in with the entire ICE state machine?
L: So let me try and address that. Traditionally, the way ICE nomination works is: both peers gather candidates and exchange them, they start STUN connectivity checks, and then the controlling agent at some point picks a candidate pair to use as the selected pair. It will redo a STUN binding request with the USE-CANDIDATE attribute set, and if that STUN transaction succeeds, the pair becomes a nominated candidate pair and becomes the selected pair.
L: The ICE RFC, 8445, is pretty strict with respect to nominations: once a candidate pair has been nominated, the agent has to use that to send data, and changing it is not allowed; there has to be an ICE restart to change the nominated candidate pair. It's also pretty strict in terms of what happens with the non-selected candidate pairs: for backward compatibility reasons the agents can continue to respond to checks on them for a few minutes, but eventually the goal is to free those up, and the controlling side is allowed to reject; there's really no reason why it couldn't do that, and in that case the candidate pair would become failed, and ICE, for that data stream at least, would go to the failed state. So that's sort of what exists today. Next slide.
L
So,
given
the
constraints
and
limitations
of
RFC
8445,
how
do
we
actually
allow
the
application
to
change
the
selected
pair
so
the?
If
we
follow
the
strictest
boundaries,
then
we
can
only
do
this
with
an
I3
start
once
negotiation
has
actually
happened
and
that's
an
expensive
process.
So
in
that
case
we
would
have
to
do
a
nice
restart,
which
means
negotiation
has
to
happen
again.
L
We
can
try
and
optimize
this
a
bit
by
saying
that
we
retain
the
candidates
already
gathered,
maybe
even
just
the
the
candidates
from
the
pair
that
the
application
has
indicated
and
then
that
the
connectivity
checks
have
to
repeat
again
and
then
dot
pair
can
be
nominated.
So
that's
overall,
a
pretty
expensive
process.
L: But I would like to talk about one option which actually stays within the bounds of RFC 8445, and shout out to Peter for the suggestion. What's allowed by RFC 8445 is that we could actually separate selection of a candidate pair from the nomination process itself, and this is actually allowed: when the controlling agent picks the pair to nominate is left entirely up to the ICE agent, so that can be deferred as long as necessary, and until a pair has been nominated, data can actually be sent on any of the valid pairs, that is, any pair that has succeeded connectivity checks, without nomination. So that's how I propose we allow the application to set the selected candidate pair: once checks have completed and there are some valid pairs, the controlling side can just start sending data on any of the valid pairs, and the controlled side will follow.
L
Suite,
follow
suit
and
just
respond
with
data
on
the
same
candidate
pair
and
stunt
checks
are
also
allowed
to
be
performed
indefinitely.
To
make
sure
that
the
agents
are
aware
of
what
pairs
are
still
valid.
There
is
an
upper
limit,
that's
suggested
in
this
spec,
but
it's
not
mandated
so
there's
an
upper
limit
of
100,
it's
again
not
mandated,
and
there
is
a
lower
bound
on
how
frequently
checks
can
be
sent.
L
So
there's
some
sort
of
trade
limiting
already
been
built
into
it
and
then
we've
already
in
the
previous
meetings
talked
about
a
cancelable
event:
that's
fired
if
the
agent
wants
to
remove
a
candidate
pair,
and
so
with
that
application
scan,
keep
candidate
pairs
alive
and
available
by
preventing
default.
On
this
event,
and
then
the
final
step
that
we
need
is
to
actually
prevent
nomination
from
happening
itself,
because
once
nomination
happens,
then
we
can't
change
to
a
different
candidate
pair
and
so
for
that
I
propose
that
there's
a
cancelable
event.
L: There's already a selectedcandidatepairchange event on RTCIceTransport, which could be made cancelable to prevent nomination. It maybe doesn't quite work, because today this event fires after the pair has already changed, whereas we would want it to fire before the selected candidate pair changes, to allow the application to cancel that.
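A hypothetical sketch of what this could look like for an application; none of this is standardized, and it assumes the proposal's pre-change, cancelable selectedcandidatepairchange event and a setSelectedCandidatePair() method on RTCIceTransport:

```js
// Keep the choice of candidate pair under application control by
// canceling the ICE agent's own switch/nomination (proposed behavior).
iceTransport.addEventListener("selectedcandidatepairchange", (event) => {
  event.preventDefault();
});

// Later, steer the transport onto a specific pair that has already
// passed connectivity checks (hypothetical method from this proposal).
await iceTransport.setSelectedCandidatePair(preferredPair);
```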
L: So that's the proposal, and I think it stays pretty compatible with any future ICE extensions. For instance, today the way this could be implemented is: if the application allows the nomination to go through, then calling setSelectedCandidatePair after that would result in an error; whereas in the future, if an extension is implemented and renomination is allowed, let's say, then calling setSelectedCandidatePair after a nomination just performs a renomination instead. Now, the obvious side effect of this is that, since nomination doesn't happen, the ICE state never goes to completed: it goes to connected, and then it could go to failed or closed or any of the other states. That's the one change with this approach.
L
8445
also
does
say
that
a
pair
has
to
be
nominated
at
some
point,
but
I
think
the
main
reason
for
that
is
to
allow
candidates
to
be
freed
up,
and
we've
also
proposed
an
API
to
do
that
by
essentially
removing
candidate
bills
that
we
no
longer
indeed,
which
will
allow
candidate
pairs
to
be
freed
up,
so
that
countdown
should
also
be
mitigated.
L
So
that's
the
proposal.
I
can
go
over
the
details
on
GitHub,
but
I
would
really
like
to
get
some
thoughts
on
the
approach
for
this
so
yeah
questions.
Thank
you
guys.
First
in
here.
K
I
I,
obviously
like
the
idea
that
I
came
up
with
with
selecting
without
re-nominating,
but
I
I.
Don't
think
we
need
to
suppress
the
nomination
necessarily
because
I
think
our
8445
allows
changing
the
selected
candidate
pair
even
after
nomination.
K
The
only
downside
is
that
the
remote
side,
when
it
sees
a
nomination,
might
choose
to
remove
it's
local
candidates
that
aren't
the
nominated
candidate
pair,
but
I,
don't
think
that's
a
major
downside,
so
I
think
we
could
simplify
this
proposal
by
not
require
not
having
the
suppression
of
the
nomination
and
just
focusing
on
the
selecting
changing
the
slightly
candidate
pair.
Even
if
nomination
has
happened.
L
So
two
parts
to
that,
so
one
additional
reason
for
suppressing
nomination
is
if
the
ice
agent
on
its
own
decides
to
use
a
certain
candidate
pair.
L
Then,
instead
of
switching
back
and
forth
between
different
candidates,
the
application
can
simply
prevent
that
from
happening
and
continue
to
use
whatever
candidate
repair
was
in
use
or
calls
that
selected
on
its
own
and
then,
with
respect
to
this
spec
I
was
looking
specifically
at
one
section,
12.1
sending
data
which
said
that
the
Asian
must
send
data
on
the
selected
pair
itself,
but
we
we
can
discuss
offline
if
exactly
what
the
integration
of
that
is,
if
selecting
a
different
pair
is
still
somehow
allowed.
K: Okay, just one other comment I wanted to make: the current behavior of Chrome, with libwebrtc, Edge, and I'm guessing Safari, already nominates more than once. So this particular part of 8445 which we're trying to avoid violating is already widely violated. That's all.

L: Right, so I guess my only response to that would be that this is sort of the broadest possible proposal, the one that would avoid the most violations, let's say. Harald?
C: Yeah, so this sounds good, but I'm sort of wondering if the question of whether this is okay or not should be addressed to the IETF rather than this working group. I think some of the people I'm thinking of in our company are on vacation, so I could try to call them out and have them comment on the issue, to see if they think this is okay or not. But I can't tell.

L: Right, okay, yeah. I don't think I've updated the issue with this proposal yet; it was a pretty recent discussion with Peter, but I'll go and post an update on that, and then yeah, more eyes definitely appreciated. Thank you.
E: All right, yes, so I'm going to talk about permissions query, which is an API for applications to detect what permissions the user has given in a particular browser. And the context, our motive here, is that in Firefox we're just finishing up tightening enumerateDevices to not leak so much information ahead of getUserMedia, which is what the spec wants. In that process we're running into some web compat issues, where some video conferencing sites were looking at whether Firefox would expose device information or not, to determine whether they had permission or not; and that's because Firefox for a long time has not supported permissions query for camera and microphone. So we're implementing that.
E
Finally,
with
some
caveats
that
I
won't
go
into
here,
you're
happy
to
follow
some
of
the
links
to
discover
those
but
part
of
the
spec,
that
of
the
permissions
integration.
Spec
of
the
permissions
integration
section
of
our
media
capture,
main
spec
and
output.
Is
that
we're
supposed
to
implement
a
device
ID
modifier
to
the
permission
descriptor
and
the
idea
there
was
for
JavaScript
to
be
able
to
query
permissions
for
individual
media
devices.
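A minimal sketch of the two query forms being discussed: the plain query is the standard Permissions API, while the deviceId-qualified form is the mediacapture-main extension being proposed for removal here (deviceId is assumed to come from enumerateDevices()):

```js
// Standard query: overall camera permission state for this origin.
const status = await navigator.permissions.query({ name: "camera" });
console.log(status.state); // "granted", "denied" or "prompt"

// Per-device variant (specced in mediacapture-main, implemented nowhere):
const oneCamera = await navigator.permissions.query({
  name: "camera",
  deviceId, // restrict the query to a single device
});
```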
E: Now, this is only useful on browsers that implement per-device permissions, which is currently only Firefox, here at Mozilla. However, we're not planning to implement the deviceId part, because we have some fingerprinting concerns, and those would extend to all the browsers. Also, there are currently no implementations of this part of the API, and there's only one web platform test, a manual one for setSinkId, and as far as I know there are no implementations planned. So I guess my first question is: temperature of the room. Is anyone planning to implement this? And if not, I propose we remove this API, that is, the deviceId member of the permissions query dictionary, specifically for camera and microphone, and then maybe later also for speaker selection; we had some internal discussion where, for speaker selection, the story is a little different.
E: The existing spec says: if a granted permission is present on some but not all devices of a kind, a query without a deviceId will return granted; and conversely, if a denied permission is present on all devices of a kind, a query without a deviceId will return denied. So with this existing functionality, we believe applications have all they need to obtain initial consent for a camera or microphone. And Firefox, which is the only browser to maintain per-device permissions, only does so for temporary permissions. So, for instance, if you ask for the default, not specifying constraints, and you already have access to camera B, we will return camera B instead of camera A. So those are the arguments, and there are no implementations. I guess we can aim for removal, and we'll open an issue if we want to keep it for speaker selection and try to argue that. Okay, thank you.
B: All right, I think we've gotten to the end of our slides, and we're actually ahead of time, wow. Do we have any wrap-up and next steps, Harald, from your notes?

B: All right, I think that's it for this meeting. We will not have an August meeting, because people are typically on vacation, so the next meetings will be at TPAC, and I'll send an email to the list summarizing everything. We have a whole bunch of meetings during TPAC, including a WebRTC meeting and a joint meeting with the Media working group, so at least, I think, four hours' worth of meetings during TPAC.
B: So we'll see you then, but it's not too early to start thinking about TPAC and what you want to present there, because if you go on vacation, as soon as you come back it will be TPAC.