From YouTube: IETF-AVTCORE-20221004-1500
Description: AVTCORE meeting session at IETF, 2022/10/04 15:00
https://datatracker.ietf.org/meeting//proceedings/
A
All right, so we are six minutes past the hour, and we probably have more or less a quorum, such as it is, so I guess we can get started. Welcome to the AVTCORE virtual interim — see the information on the slide. I guess you're driving the slides, Bernard?
A
Yeah, why don't you do it, because there's a single unified slide deck — it's probably easier that way. Here are the virtual meeting tips. Hopefully this hasn't changed throughout the year, but I think it's all accurate so far; hopefully you're all familiar with this by now. Next slide. The Note Well applies here.
A
Sorry — yeah, so the Note Well applies here: you're agreeing to follow IETF processes and policies with regard to IPR and the various anti-harassment and general good-behavior policies. Please, next slide. Also remember to, you know, behave professionally and treat everyone with dignity, decency, and respect. In general I think we haven't had a problem with this, but if you disagree, please contact the chairs or the IETF ombudsteam about this meeting.
A
I'll keep an eye on Zulip. Actually, that slide is wrong — the part about the other chat room isn't true anymore; we need to remove that. It's only Zulip now. Chairs are me and Bernard; I am watching Zulip. Spencer and Harald are note taking.
A
Agenda: we've got the SCIP payload format; Bernard will talk about his RTP-over-QUIC sandbox; who's presenting for RTP over QUIC — is that Jörg? I guess we'll find out when we get there. And then we'll have some more talk about RTP over QUIC — the spec — and the green metadata proposal. So it should probably be a relatively quick meeting, but if anybody has any other business they can also raise it if they like.
A
So those are waiting on the various IANA steps, so hopefully those will get done relatively quickly. Next slide. Frame marking — we're still waiting on a revision for that; I'm not sure any of the relevant people are here.
A
Excellent. Anything to say, Bernard? Okay — there's a call for adoption on V3C, open until October 31st. If you have any opinions on whether we should adopt that — particularly if you're in favor of it and would have interest in doing the work, or reviewing the documents, or just, you know, implementing it (I think it should be available for implementers) — please comment so that we can know that there's interest. Again, that's open until October 31st. Game State over RTP:
A
There
was
a
call
for
adoption.
That
was
very,
not
not
terribly
enthusiastic
and
we
thought
we
needed
a
plan
for
containing
responses
outside
the
ITF.
There's
been
no
further
follow-up
on
that
I.
Don't
think
any
of
the
proponents
of
that
are
here
today.
A
So
the
question
to
the
audience,
would
you
continue?
We
might
need
to
follow
up
with
that
on
London.
A
B
A
We should probably email them to ask.
F
I'm here — thanks, John. All right, next slide. So revision two was submitted on August 2nd, shortly after our last meeting, incorporating comments from the Gen-ART and ARTART reviews. The SecDir comments came in pretty late — they actually came in on September 7th. The gist of it: there were more comments regarding, you know, the opaque treatment and such.
F
We've talked about that before, and the direction that we were going to go with this draft was to treat SCIP as opaque, so that first part of the reviewer comments is kind of what we've already talked about before — I didn't really see any action items coming out of that.
F
The other major comment in the SecDir review was about the security considerations section. The comment was that it may not be adequate — I'm not sure what that means we'd have to change, because basically the text that's in the security considerations is boilerplate out of RFC 8088. Again, our opinion is that SCIP doesn't add anything, doesn't make it any more or less secure, and doesn't add any additional considerations at this point.
F
So again, we didn't feel there was any need to change that text as it's written. We're certainly open to comments or questions from the group if they think we do need to change it, but from what I understand it's pretty much boilerplate from most RTP payload format drafts at this point.
F
Next slide. I finally figured it out — I'd been trying to do the XML-version submission for the last couple of revisions and had been unsuccessful. I reached out to the IETF Tools group and they were able to figure out what the bugs were. We basically started with the Microsoft Word template — that's what we were using — but for whatever reason, when you go to upload and try to convert it to XML, the tool would always come back with errors. Again, nothing really — there were no real...
F
There were only editorial changes — moving stuff around, renaming some sections — to make it happen. So when there's a new revision to propose, we can post it as the XML version. Next slide.
F
So again, I guess it's a matter of — I saw the comments that were added below, in the last bullet, about what the formal process is for addressing the comments. I thought we replied back to the reviewers with our comments.
B
You end up in this kind of limbo state, and basically the ADs increasingly rely upon the reviewers. So the idea of doing this early review was to not get stuck that way. The last step is to just confirm with them — hey, are you okay with this? — and particularly to post it to the list, because then, when I do the write-up for the publication request, I can say: the following directorates have reviewed this thing, the authors responded, and the reviewer said it was okay.
B
Yeah — once we have those three confirmations from the reviewers, then we can go to publication, I think, and then you'll get a whole new set of reviews.
B
Okay, so I wanted to talk a little bit about an experimental program that I pulled together that may be useful in understanding the transport behavior of RTP over QUIC.
B
So why do this? I was just interested in exploring the behavior of RTP-over-QUIC transport. The goal here was just to visualize it — get to see it — not necessarily to fix things, just to understand what might be an issue. I also wanted to try things: do experiments, maybe change a little code, run the experiments again, and see what the difference was. So I didn't want a particularly complex implementation.
B
So I tried to do everything with a modular pipeline, which is made possible by an API called WHATWG Streams, and also to keep the code fairly small and easy to understand — it's all in JavaScript. The other thing was to explore issues that would require not just a sender and a receiver but a complete system, and that would include things like bugs and unexpected behaviors: there's a bunch of new APIs, and it's not clear that they work as intended.
B
It looks like there might be interactions between codecs and transports. I wanted to try out partial reliability, multiplexing, forwarder behavior, stuff like that. So having a complete system was kind of a useful thing to have.
B
So what can you do with it? Well, I'll show you a little bit, but you can basically vary the encoding parameters — you can choose the codec, the bit rates, the resolutions — and then you turn it on and see what happens. Basically, you can visually compare the local video, which comes right from your camera, against the remote video: we bounce the video off of a server in the cloud, and then you can see the difference between the two.
B
So you can measure the glass-to-glass latency, get a sense of how jittery the video is, whether the video freezes, stuff like that — you can visually see what it looks like. And then after you press stop, you get some diagnostics that compute metrics and stats for the frames — loss, reordering, stuff like that. It's all calculated at the application layer, because at the moment Chrome WebTransport doesn't surface the QUIC stack metrics. There's also a graph of RTT versus frame length.
B
We can do additional graphs if people would find that useful.
B
It was built on next-generation web APIs, which are all fresh and new and probably quite buggy. That includes WHATWG Streams, WebCodecs, WebTransport, and an API called mediacapture-transform. I implemented both the send and receive pipelines in a single worker — that might not have been a great idea, for reasons I'll describe. There are some new features that have just come in, called bring-your-own-buffer reads, that weren't in Chrome Stable, so there's a separate version for Canary. This isn't intended to be a complete implementation.
B
The idea was to let the experimenter vary the bit rate and then kind of look at what happens. Considering what's missing, it actually works surprisingly well, which was interesting — I didn't expect that it would work hardly at all without doing all this other stuff, but it does, which is kind of interesting.
B
It's not production quality by any means, but it's useful for experimental purposes. Okay, so there are actually two versions of this. Number one removes the transport, so it does encoding and decoding only.
B
This obviously doesn't help you understand transport behavior, but it's useful — I think of it as a control group.
B
So essentially you can see what it would look like without any transport. In number two, I basically add the network transport to the sending and receiving pipelines — and obviously you need to serialize and deserialize as well. What's useful is that you can compare the behavior with transport to the behavior without it, which may help isolate some of the things you're seeing. As I mentioned, there are two versions: one runs in Chrome Stable and another in Chrome Canary, and there's a GitHub repo.
B
If you want to look at the code, it's up there as well, and the code for number one is also up in a GitHub repo.
B
Okay, so here are some of the things you can play with. You can select the average target bit rate — which is, I guess, supposed to be an average including the keyframes; in practice the actual bandwidth consumption is typically lower, and I guess that depends on your keyframe interval, which is the number of frames between each keyframe. The default is 300, so at 30 frames a second that would be a keyframe every 10 seconds. The codecs supported are VP8, VP9, H.264, and AV1.
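The keyframe-interval arithmetic works out like this (my own illustration — the frame sizes below are made up, but chosen near the numbers quoted elsewhere in the talk):

```javascript
// With a keyframe interval ("GoP") of 300 frames at 30 fps you get one
// keyframe every 10 seconds, and the average bit rate over a GoP is dominated
// by the many small P-frames rather than the single large keyframe.
function keyframePeriodSeconds(gopFrames, fps) {
  return gopFrames / fps;
}

// Average bits/second over one GoP: one keyframe plus (gop - 1) P-frames.
function avgBitrate(keyframeBytes, pFrameBytes, gopFrames, fps) {
  const bitsPerGop = 8 * (keyframeBytes + (gopFrames - 1) * pFrameBytes);
  return bitsPerGop / keyframePeriodSeconds(gopFrames, fps);
}
```

With a hypothetical 25 KB keyframe and 3 KB P-frames, this lands around 740 kbit/s — close to the "actual bit rate is more like 700 kilobits" observation later in the talk.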
B
I did note some strange behavior with VP9 — I think that's probably a bug in WebCodecs. If you select "realtime" you get very large frame sizes even for P-frames — like 10-kilobyte frames, which doesn't make sense; if you select "quality", that changes. So I think there's a bug there. AV1 is pretty solid on macOS, not so solid on Windows, so there may be some issues there.
B
Currently Chrome only supports H.265 decode, so you can't really use it in this application. You can choose whether you want hardware acceleration or software; it doesn't seem that hardware acceleration is very widely supported, so the default is "no preference", which will probably end up being software most of the time. There's this latency goal, which can be "quality" — which produces smaller frame sizes but only takes marginally longer than "realtime". I'll talk about that a little; the behavior there isn't quite what you'd expect. And you can actually enable—
B
SVC temporal scalability. The default is three temporal layers, and that's useful for reasons I'll describe, because you can do partial reliability. And then you can choose your resolution: you can have a small resolution, or you can choose as high a resolution as your camera will support. Some of the newer cameras support full HD, and that is sometimes useful, particularly for looking at higher bit rates.
B
The video quality is highly dependent on the device and the camera. Good quality seems to be possible, particularly if you get up to about a megabit and you're using a desktop or a high-quality notebook with a good camera. I've been able to generate full-HD video of a talking head with a target bit rate of one megabit, and it looks pretty good; the actual bit rate is more like 700 kilobits, something like that.
B
The other thing is that the combination of QUIC and temporal scalability gives you pretty good resilience properties. QUIC is good at retransmitting — getting stuff to the other side — and if you do temporal scalability, a large proportion of all the frames you send are discardable. So if they take longer than you think, you can actually send a reset and it'll be fine; the decoder will keep on chugging.
B
It looks like a kind of nice combination for resilience. Overall, in the experiments I've done, I've seen very, very low loss — I've also been on pretty good networks, so that's part of it — but overall it seems to do a good job of delivering frames. Latency is where there were more issues.
B
Particularly with the higher resolutions, I observed glass-to-glass latency considerably higher than the measured frame RTT — we'll get into that in a bit. The P-frames were typically small — we're talking about a few thousand bytes, so a couple of packets per P-frame — and those come in at very low RTT.
B
They look like they're coming in, in many cases, clustered around RTT min, but the I-frames are a lot larger — I've seen I-frames as large as 200 kilobytes — and they exhibit some issues. Often the frame RTTs are multiple times higher than for the P-frames — although, interestingly, not always; we'll talk about that. The effect is most pronounced with the high GoP sizes. As I said, I used the default of 300, and in that case you only get a few I-frames per experiment. I think I have an idea why this might be that way.
B
Also, you see the effect of this higher frame RTT in the glass-to-glass latency, even under conditions of low bandwidth utilization. I've been running these experiments on a gigabit network, only consuming 700 kilobits with very, very low loss, and you still see this high glass-to-glass latency and some of the large frame RTTs on the I-frames.
B
So we'll talk a little bit about why that might be. Okay, so here's an example running AV1 with a full-HD camera. The target bitrate was one megabit, the GoP was 300, and the actual bandwidth consumption was around 700 kilobits.
B
What you see here in the smaller frames — these are probably mostly or almost all P-frames — is that they're very tightly clustered around 100-millisecond frame RTTs, so that's probably very close to RTT min. And what you can see is that you can draw a line of RTT versus frame length, and at least on the lower end of the length scale, it's pretty close to that minimum round-trip transit time. But on the right you'll see there are some frames of roughly 25 kilobytes, and then another one at maybe 37, something like that.
B
These are I-frames — there are two of those because I ran the experiment for roughly 12 seconds, so you had two keyframes — and there you'll see the RTT above and beyond the minimum round-trip transit time: the frame RTT looks like about 200 milliseconds larger on one, and maybe 100-something on the other. So the question is: why are we seeing this?
B
What's going on here? I'm not sure I have the complete answer, but I'll give you what I think the answer might be. Basically, what's happening here — because the GoP size is so large — is that the P-frames are all pretty small, right, a couple of packets, so they don't really push the congestion window up. Essentially we're in an application-limited situation, and as a result of that you don't really get good bandwidth estimates. So what happens?
B
You get multiple RTTs — that's why you see the frame RTT being multiples of the RTT min that you'd see. The other thing is that because you don't really have a particularly good bandwidth estimate, the application isn't getting particularly good feedback, and so it can't adjust the I-frame size or quality.
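The "multiple RTTs" effect can be sketched with a back-of-the-envelope model (my own illustration, not the sandbox's code — the window sizes and idealized slow-start doubling are assumptions):

```javascript
// Why a large I-frame takes multiple RTTs when the connection is
// application-limited: if the congestion window has only grown enough to
// cover the small P-frames, the I-frame must be sent in cwnd-sized bursts,
// each burst costing one RTT, with the window growing (here: doubling,
// slow-start-style) between bursts.
function rttsToDeliver(frameBytes, cwndBytes) {
  let remaining = frameBytes;
  let cwnd = cwndBytes;
  let rtts = 0;
  while (remaining > 0) {
    remaining -= cwnd; // send one window's worth of data
    cwnd *= 2;         // idealized slow-start growth per RTT
    rtts += 1;
  }
  return rtts;
}
```

With a hypothetical ~14 KB window, a 3 KB P-frame fits in one round trip, while a 200 KB I-frame needs several — consistent with the frame RTTs observed as multiples of RTT min.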
B
So, for example, it could change the average target bit rate once every GoP, or potentially it could use what WebCodecs is about to implement — per-frame QP — so it could adjust either the I-frames or the P-frames on a per-frame basis in response to perceived congestion. But if the bandwidth estimate isn't very good, that's not easy to do. One potential way to fix this is through what we call probing, which was implemented in WebRTC: essentially you try to fill up the pipe even if the P-frame size is very low and you're not filling the congestion window — you fill it with something else, like RTX probes or FEC or something like that — to push up the congestion window, so that by the time you get a better bandwidth estimate and send the I-frame, you don't have all these multiple RTTs and can send it in one RTT.
B
You reduce the frame size quite a bit, because the encoder spends more time on it, and you actually see that kind of excess frame RTT go down: the frame size gets smaller, you're not using as many round trips to send the I-frame, and you see it come closer to the transmission line.
B
Also, after you send that maybe second I-frame, subsequent I-frames show a lower frame RTT, so the congestion window maybe got pushed up a little bit — it finally grew because you actually filled the pipe. And if your GoP sizes are smaller, you actually get a tighter RTT scatter, which is kind of interesting. It's not necessarily a great idea, because it uses a lot more bandwidth, but it's just an experiment that shows this might be the cause of what we're seeing here.
B
Okay, another effect that I noticed was the effect of CPU utilization. As I said, I put all of the encode, decode, and transport on one worker thread. That probably wasn't a great idea, because with AV1 and the higher resolutions it pretty much maxes out that core — or thread — at 100% CPU utilization, and that may be the cause of the high glass-to-glass latency; I'm not entirely sure. So one way to try to figure out whether this is true is to support more threads.
B
So, for example, maybe I could separate the sender and receiver pipelines onto separate threads and see if that lowers the glass-to-glass latency.
B
Okay — also, in the process of playing with this, I came up with some issues that I filed, and I think Matthias will talk about some of those, but here are some of the things that came up in the process of playing with this stuff. One was that, as I mentioned, partial reliability seemed like a good idea, because we had support for SVC, and so for the discardable frames we didn't have to wait forever for them to be retransmitted — we could potentially just send a reset after a timer expires.
B
But one of the interesting things is that partial reliability requires support for reset frames in any forwarders that you have, and also, when you do this, the receiver doesn't necessarily know when it has gotten the complete frame. In theory, at least, the QUIC spec says that when you send the reset, the sender should stop retransmitting when it gets that RESET_STREAM — and also on the receiver...
B
It's a SHOULD that the receiver should basically use the reset stream as a signal and prioritize it when it comes in — not queue it up and wait until you get to the reset. So potentially you may not get the entire frame, and so having a length field is a potentially useful thing. I think Matthias will talk about that.
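Why a length field helps can be sketched like this (an assumption about the framing, not the draft's actual wire format — the 32-bit prefix is my own choice for the sketch):

```javascript
// If each frame on a QUIC stream is prefixed with its length, a receiver
// that sees the stream reset early can tell whether the bytes it did
// receive form a complete frame or a truncated one.
function prefixWithLength(payload) {
  const buf = Buffer.alloc(4 + payload.length);
  buf.writeUInt32BE(payload.length, 0); // 32-bit prefix: 16 bits can't cover ~200 KB I-frames
  payload.copy(buf, 4);
  return buf;
}

// Returns the payload if complete, or null if the stream was cut short.
function tryParseFrame(received) {
  if (received.length < 4) return null;
  const len = received.readUInt32BE(0);
  if (received.length < 4 + len) return null; // reset arrived mid-frame
  return received.subarray(4, 4 + len);
}
```

Without the prefix, a reset mid-frame is indistinguishable from a short but complete frame.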
B
The other thing is that, because of this differential reliability, you may want to send some portions of the stream with datagrams — like the P-frames — or maybe you want to send the I-frames reliably over a reliable stream, and so forth. So that's another issue I think Matthias will talk about.
B
Another thing that came up was multiplexing of data and media. Basically, I wanted to make this as simple as possible and have a single QUIC connection. So what I did is I'm actually sending the signaling over the same QUIC connection as the media, and that seems like a pretty useful thing to do. Also — I'm not doing this — but you might want to communicate state or device input, again over the same QUIC connection.
B
So it's useful to be able to multiplex the data and media — there's an issue, #31, that talks about some of this; I think Matthias will talk about that as well. And then there are some issues with RTP topologies. I was just doing kind of an echo server, which is kind of like an RTP translator.
B
But that got me thinking about a few things that a translator needs to do, which are different from what we think of RTP translators as doing. Particularly, if you were to translate between RTP over QUIC and RTP over UDP, you'd actually need codec-specific knowledge in the translator to do the packetization, because you're basically dumping a whole frame into a QUIC stream, and then the translator needs to figure out how to packetize it. That could be a little bit awkward, because we don't usually think of translators as having knowledge of the packetization of all these different codecs.
B
So that's a little weird. The other thing is, if you're using SFrame for end-to-end encryption, how does this translator packetize this opaque blob of SFrame output? Even if it did have codec-specific knowledge, it doesn't have the encryption key, so how would it do that? The only option it would have would be generic packetization, and we've talked about that. So those are some questions.
B
I think Matthias may talk about some of those in the next presentation. A little bit more about partial reliability: basically, the idea was to take advantage of SVC so as not to have to retransmit the discardable frames endlessly, so we set a timer for those, which could be fairly low.
B
As you saw in the graph, the P-frames were fairly tightly clustered around RTT min. So basically, if something has a much larger RTT, you can send a reset and forget about it. But, as I mentioned, it's helpful to have a length field, because then you can figure out whether you got it. The other thing to note — and I think you've seen this on the graphs — is that the I-frames can be pretty large; I've seen them as large as 200 kilobytes.
B
So a 16-bit field isn't necessarily large enough to cover that — that's something to think about. The other thing is that in my server implementation I found some bugs: I wasn't forwarding the reset frames, and I was also doing store-and-forward, which adds a little bit to the latency. So there are some things to think about in a forwarder or translator: it should probably just forward the RESET_STREAMs and the FINs and stuff like that, and it doesn't need to get the entire...
B
...the entire frame before forwarding it on — it should probably just do cut-through, which would make a lot more sense. A little bit about the data and media: there are multiple uses for it, and in this particular experiment I didn't need multiple ALPNs — I just used the single WebTransport ALPN, and you can send whatever you want over it. That's kind of how WebRTC works: there's a single ALPN for DTLS for both media and data. But it does bring up the question of what you use to distinguish the media from the data. I did some kind of a dirty hack to do it, but we probably should talk about how that would work. Okay — are there any questions?
C
Yeah — Bernard, thank you for doing this work. One question I had — and this is going to be kind of your take-of-the-year, state-of-the-universe question: do you have a sense that we're going to need to rely on QUIC datagrams to do RTP over QUIC?
B
Yeah, that's a really great question, Spencer. That was one of the questions I had going into this, and it's why I focused on the frame-per-stream approach instead. The good news is that, for the P-frames, you saw they're really clustered around RTT min, so at least for those it doesn't seem like you need datagrams — QUIC does a really good job of sensing the bandwidth and so forth. So, I guess my answer to that...
B
...Spencer, would depend on whether I can get that glass-to-glass latency down. I suspect the glass-to-glass latency issues have nothing whatsoever to do with datagrams versus streams — I think it may be due to the CPU pegging that I'm seeing, maybe some other things — and as I improve the code, the overall latency comes down quite a bit. So I want to take a rain check on that, but I'm not really...
A
Should I — are you in the queue to speak? You were asking to share your screen, but that might have been a mistake.
G
Can you hear me okay? Yeah, I guess I can unmute my video — that worked, all right. I was just going to mention that in my previous experimentation with using frames — or sorry, streams — it worked fine, except for the congestion-control issues, and I think those would be the same between datagrams and streams. So I don't think it's a question of streams versus datagrams; it's more a question of how the congestion control works for both.
B
Yeah, I mean, if the hypothesis we had here is correct, it wouldn't matter whether you dumped the I-frame in a single stream or in a datagram, right? The congestion control would basically do the same thing, more or less — right, Peter? So you'd see more or less the same kind of extended behavior — you'd see essentially multiple RTTs to get the whole thing there anyway; the congestion window wouldn't be opening up.
A
I'm next in the queue, as an individual, just asking — I'm wondering if you had any thoughts — and I guess this might be more input to either the WebTransport working group or the W3C WebTransport API — on whether what's currently there would be useful: do you have enough information to do meaningful debugging and analysis of what's going on, or do you need more information out of the browsers?
B
Yeah, that's a great question. I will say that debugging this stuff has not been fun, because you use your traditional web tools, like the performance toolbar, and it doesn't show any info on WebTransport.
B
It's not like you get any info on the QUIC stack; it's not like you have webrtc-internals — there's nothing like that. So basically I'm having to instrument everything myself and look at packet traces. So actually that is a great question, and I will say that the more I work on this, the more I appreciate WebRTC, where the browser does everything for you — in this kind of work you have to do your own threading.
H
I assume that when you talk about sending one frame per stream, you are considering the MTU to be infinite — you just send a single RTP packet with the whole frame as payload. Wouldn't it make sense to instead use something like RFC 4571 to still packetize the frame into multiple RTP packets and send them all together in the same QUIC stream? First, because that would allow us to have a gateway that receives RTP over WebTransport on one side, with one frame per stream, but still has the individual RTP packets inside the stream, so we can forward them to normal WebRTC. And second—
B
Yeah, I think any length field would help with that, whether it's RFC 4571 or something else. I think it's a good question, Sergio, whether you might want to packetize it anyway — I don't have enough information. One of the hypotheses I had, before this one that I presented here, was that it could have been a pacing issue: dumping these huge I-frames in and having them spewed into the network might not have been such a great idea.
B
The information I've seen on the congestion windows suggests that's not happening, I think. Maybe we should leave that discussion of the length field — I think Matthias will have a presentation on that, so we can pick it up there — but anyway, I think discussion of the length field is a useful thing to talk about.
D
Hi, all. It seems to me that the congestion-control issue is strongly linked to the different models of congestion control that we have with real-time media versus with the window-based approach, because the thing you hit is that you don't get the whole I-frame out within the congestion window.
B
Yes, that was my conclusion, Harald. I thought it might have been pacing, but then I checked the bandwidth and some of the traces, and it didn't seem to indicate that pacing was the problem.
A
Go ahead.
I
Great — yeah, thank you, Bernard, that was very interesting. So thanks for building this and bringing the issues back to the spec; I will talk about some of them later. But let's start with a short overview of what changed.
I
Since last time, we added a short paragraph clarifying the usage of streams, which now says that we're only using unidirectional streams. Then we added a paragraph saying that we have to immediately close streams after sending one packet, which is important to make sure that the receiver can identify the end of a packet — which is useful...
I
...if you don't have a length field, but we will get to that later again. Then we clarified the flow-identifier usage for datagram retransmissions, which was an open issue last time because it wasn't clear which flow IDs they should use.
I
That depends on which RTP stream they are sent in, which is now clearer in the spec — but we'll also get to flow identifiers again later. I added a short paragraph about exposing the bandwidth estimation to the API considerations, and I removed a couple of obsolete editor notes from the document. Currently there are a few open pull requests: one is for topologies, which Bernard already mentioned; then there's one for stream concurrency, and one for expressing the congestion-control requirements instead of specifying fixed congestion-control algorithms.
I
Then — yeah, thanks — so I would like to talk about four issues today, and two of them are kind of linked: this one and the next one. Bernard already mentioned ALPN usage — he used one ALPN, if I remember correctly, for sending data and RTP in the same QUIC connection. We currently define the ALPN token "rtp-mux-quic" in the document, which we thought could indicate that we can multiplex RTP and other protocols over the same QUIC connection.
I
The problem with this may be that if we have other protocols defining a mapping onto QUIC, they also have to define some ALPN token, which would be incompatible with ours. And a second problem that came up with that is the issue of multiplexing, which will be the next slide — but let me finish this one first before we maybe discuss both of them.
I
If we do multiplexing in "rtp-mux-quic" — multiplexing multiple different protocols — we don't really know how the multiplexing is actually going to work between the different protocols. So our proposal for now is the one on the left side here, in the green box: to define "rtp-over-quic" instead of "rtp-mux-quic", and then have future documents specify multiple new ALPNs for different multiplexings between different protocols — for example, RTP, RTCP, and some other data protocol.
I
I think that approach would be similar to what was done for WebRTC, which defines a new ALPN for WebRTC in an extra document.
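As an illustration of what is being negotiated here: a sketch of how an endpoint would offer the candidate ALPN tokens. The tokens "rtp-over-quic" and "rtp-mux-quic" are the draft's proposals, not registered values, and the standard-library `ssl` module below drives plain TLS, not QUIC; QUIC reuses the same TLS ALPN extension, so the shape of the negotiation is the same.

```python
import ssl

def make_client_context(alpn_tokens):
    """Build a TLS client context offering the given ALPN tokens.

    The tokens are sent in the ClientHello; the server picks one,
    which is how the peers agree on the application mapping.
    """
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(alpn_tokens)
    return ctx

# Offering only the narrow token proposed on this slide:
ctx = make_client_context(["rtp-over-quic"])
```

A future multiplexing document would simply offer a different token (e.g. "rtp-mux-quic") from the same code path.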
So the multiplexing issue is that we initially defined the flow identifier for multiplexing RTP, RTCP and whatever else by prepending the flow identifier to each packet we send, and having the receiver identify which stream it is sent on by parsing the flow identifier first and then continuing to parse the packet. If we define "rtp-over-quic", as I proposed on the slide before, we only have to use RTP and RTCP in this QUIC connection, and we only have the issue of multiplexing RTP and RTCP; we don't have to think about any other protocols on the same QUIC connection.
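The prepend-and-parse scheme just described can be sketched as follows. The varint encoding is the one QUIC itself uses (RFC 9000, Section 16); the helper names `wrap` and `unwrap` are hypothetical, for illustration only.

```python
def encode_varint(n):
    """Encode n as a QUIC variable-length integer (RFC 9000, Sec. 16)."""
    if n < 2**6:
        return n.to_bytes(1, "big")
    if n < 2**14:
        return (n | (1 << 14)).to_bytes(2, "big")
    if n < 2**30:
        return (n | (2 << 30)).to_bytes(4, "big")
    if n < 2**62:
        return (n | (3 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def decode_varint(buf):
    """Return (value, bytes_consumed); the 2-bit prefix gives the length."""
    prefix = buf[0] >> 6
    length = 1 << prefix
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

def wrap(flow_id, rtp_packet):
    # Sender: prepend the flow identifier to the payload.
    return encode_varint(flow_id) + rtp_packet

def unwrap(payload):
    # Receiver: parse the flow identifier first, then the rest is RTP/RTCP.
    flow_id, consumed = decode_varint(payload)
    return flow_id, payload[consumed:]
```

With only RTP and RTCP on the connection, this prefix is what the open question below asks about: whether it is still needed at all.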
I
There is, however, still the question of whether we still need a flow identifier. We can do RTP/RTCP multiplexing per RFC 5761, and we can send multiple types of media in a single RTP session, but I think there is no equivalent to sending multiple different RTP sessions, as we could do, for example, by using RTP on different UDP ports, because we only have this one QUIC connection. So if we still want to use multiple RTP sessions on the same QUIC connection, we still need something like a flow identifier to identify them.
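The RFC 5761 demultiplexing mentioned here, which would make a flow identifier unnecessary for the plain RTP/RTCP case, is a one-line check on the second octet; a minimal sketch:

```python
def classify_rtp_rtcp(packet):
    """Demultiplex RTP and RTCP sharing one flow, per the RFC 5761 rule:
    if the second octet (RTCP packet type, which overlaps the RTP
    marker+payload-type field) falls in 192..223, treat the packet as
    RTCP; otherwise treat it as RTP."""
    if len(packet) < 2:
        raise ValueError("packet too short")
    return "rtcp" if 192 <= packet[1] <= 223 else "rtp"
```

This works because RFC 5761 forbids assigning RTP payload types that would collide with the common RTCP packet-type range.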
I
If we do "rtp-mux-quic", then, as I said before, we still have the issue of multiplexing RTP and RTCP with other protocols. That could, for example, be done by the flow identifier, but that flow identifier may not be compatible with the other protocol, whichever that is. An alternative to that could be to use RFC 7983bis.
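For context, the RFC 7983 alternative referred to here demultiplexes by first-octet ranges; a sketch of those ranges:

```python
def classify_first_byte(b0):
    """First-octet demultiplexing per RFC 7983 (updating RFC 5764):
    STUN 0..3, ZRTP 16..19, DTLS 20..63, TURN channels 64..79,
    RTP/RTCP 128..191. Other values are left to the application."""
    if 0 <= b0 <= 3:
        return "stun"
    if 16 <= b0 <= 19:
        return "zrtp"
    if 20 <= b0 <= 63:
        return "dtls"
    if 64 <= b0 <= 79:
        return "turn-channel"
    if 128 <= b0 <= 191:
        return "rtp"
    return "unknown"
```

The objection that follows is that most of these protocols (DTLS in particular) make little sense inside a QUIC connection, so reserving their byte ranges buys nothing here.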
I
But the problem with that, I think, is that we don't necessarily want to multiplex the protocols which are specified in that RFC, because, for example, DTLS doesn't make much sense to put on top of QUIC next to RTP if we can just send data channel protocols, for example, next to RTP instead, without using DTLS. So, yeah, Bernard, do you have any thoughts about that?
B
Yeah, so I think there are a couple of questions I have; I don't know that I have solutions. But one is that if you start requiring multiple ALPNs, I think that's going to end up requiring multiple QUIC connections.
B
I mean, they could go over the same port, you know, with different connection IDs, but I'm not sure; I think you probably won't want that. I know that trying to stuff them in the same connection does raise issues, maybe of priority or something like that, but the kinds of things that I'm thinking of that you would multiplex in with the media are probably relatively small bandwidth.
B
I guess you could do a file transfer, which could be a problem, but most of the time it would be something like the negotiation or some small updates, in which case it would be nice to be able to send them over the same QUIC connection. So that's why I kind of like "rtp-mux-quic".
B
Of course, you then raise the question of: great, you can mux them, but how? That probably deserves more discussion.
B
I would agree that it doesn't make a lot of sense to multiplex DTLS and support that within a single QUIC connection, but we'd probably need to think about that a little more. With respect to the flow ID, I do have a question about the usage model, and we can talk about it more, but the reason we have the flow ID in WebTransport is because of what we'd have there.
B
The idea was to support multiple browser tabs, and so we used the flow ID to separate the browser tabs. I'm not sure, necessarily, that that would be the model in RTP over QUIC, although I could be wrong. So it might be worthwhile to articulate what the use cases are there, and whether we might just say, hey, everything that's on this QUIC connection is a single session, and leave it at that.
I
Okay, thanks. I also think it makes sense to have something like "rtp-mux-quic" later, for example for using RTP and data channels, or something that does datagrams in QUIC, in the same connection for RTP and that data channel protocol. But as long as we don't have this other protocol, I think we should move on with this document and define "rtp-over-quic" first, and then allow future documents to define "rtp-mux-quic", which can do RTP and another data channel protocol on top of QUIC in the same connection. Does that make sense?
G
So the question of multiplexing is very similar to WebTransport, and the solution we came up with there would, I think, make sense for sending media as well. But I think what you could do in this document is define how to send RTP over QUIC-like connections that have streams and datagrams, and then, if somebody wants to run that over WebTransport, they can.
A
Yeah, Jonathan Lennox here, largely agreeing. I think Bernard's comment makes sense in the context of WebTransport, where you have your own custom signaling mechanism. If you're talking about raw QUIC, whatever you're multiplexing would have to be a defined protocol, and I don't think we're anywhere near ready for anything like that.
A
I agree that making this work for both raw QUIC and WebTransport makes sense, whether you want to describe that as an abstraction layer or as specifically mapping to both; it's more or less the same thing, it's just how you describe it. But I agree that writing this in a way that makes sense for both raw QUIC and WebTransport would be sensible.
I
So one question I have about making it work for both: then we have the multiplexing from WebTransport, but we still need some kind of multiplexing for the raw QUIC version, right?
A
Sorry, wrong button. Maybe, if you have it defined for WebTransport, then we don't need multiplexing for RTP itself.
A
We don't know what we're actually multiplexing with in what we're defining, as opposed to wanting to multiplex with something, which is what WebTransport does for you. If you have something custom or proprietary, WebTransport is obviously well designed for that. Whereas, I feel like if we need to demux at the RTP level, if you're actually talking box to box, directly doing RTP, and you're doing custom stuff, then you can do custom stuff for the multiplexing; otherwise I think you should define it at the protocol level.
I
Yeah, I think I have to think about this a bit more, about how this kind of abstraction (I think of it as an abstraction) would work for RTP over raw QUIC. Maybe we can move on first.
E
Thanks. I had multiple small points. Briefly, back to Bernard's point about sending small signaling messages back and forth: I'd like to understand what kind of signaling, between whom. Most kinds of renegotiation would usually follow the signaling path that was used to set up the QUIC connection for RTP/RTCP in the first place.
E
That connection would exist up front and might actually go between different entities, so I'm not completely clear on what that would be used for, exactly. The other point I wanted to make is that we had a bunch of comments early on about ALPN usage and what the exact semantics would be, and one of the decisions that we took was trying to keep it dead simple and say:
E
Okay, we need to understand how all of this works for plain RTP/RTCP only, so that we don't get into prioritization issues, or different types of congestion control for reliable and unreliable media, and so forth. So there are a whole bunch of questions to be answered and understood if we want to take this broader perspective, and that might easily take us quite a bit of time to sort out.
E
Rather than opening this up very broadly at the beginning, which might make us go around in circles a few times: I do see the value in making this usable, for example, in a broader abstraction, but I would feel more comfortable if I understood all the implications before trying to design a solution. So this is why I'd be happy to rather focus on a narrow solution space right now, maybe exploring the others in parallel, but not trying to boil the ocean right away.
B
I don't think you necessarily have to take on boiling the ocean. In other words, you could say, hey, we're doing this "rtp-mux-quic" thing; we recognize that this doesn't work for all apps, like if you want to send a file transfer.
B
It might not work well, but this is something you can use for these kinds of small data exchanges, which is much of what we see with WebRTC, by the way. I mean, there are some people who use the data channel to send media along with data, but I'd say most WebRTC data channel usage is these kinds of small updates, like game streaming going back the other way, stuff like that.
B
Typically you don't do a lot of muxing of data and media, so I don't think you necessarily have to solve all those problems. You can say, hey, this is what we do; it doesn't solve all problems; and then, you know, if something comes along later... My concern would be that we may or may not have this prioritization solution; it could take years.
B
And meanwhile, if people don't have the ability to multiplex data, they'll just invent their own thing, or just do multiple QUIC connections, and the whole thing will just rapidly degrade. So I'd just say: try to do something simple, state the limitations, and be done with it, and don't try to solve every problem. That would be my approach.
B
Well, you know, you probably should open an issue and figure out whether the group wants to specify something now. You could say "define your own muxing", and maybe something like RFC 7983 is the way to do it, or maybe you want to do the flow ID. I guess I would leave that discussion open; I don't think you necessarily have to solve it here.
B
You could let the app do it. But if there's something you think is relatively straightforward to do, you could do it in this document, or not.
I
Yeah, so there is an issue for this; if there are any more comments after the meeting, I would appreciate it if everyone puts them there. Spencer?
C
Yeah, thank you. So this is probably for Bernard, but I'm a little confused about whether you're talking about having your application still doing the same ALPN but behaving differently in the future, versus how big a pain it's going to be to do a new ALPN if we do "rtp-over-quic" and then later do "rtp-mux-quic". If you need to explain this to me over a beer sometime, that's a fine answer too.
B
Yeah, it's not all that complicated. As an example, when I was writing the little sandbox thing, I sent the configuration of the codec along with the media. In other words, WebCodecs: when you configure the encoder, it spits out a string that you can dump into the decoder to configure it, and so it's kind of like a poor man's offer/answer. You can just say, here's what I'm sending, and send it along in the first kind of pseudo frame.
B
The decoder consumes it, configures itself, and then you just send the media. And I also thought, hey, what if this were a conferencing server, so that the endpoint of the media and the signaling were the same thing?
B
It'd be pretty useful to do your offer/answer within the same QUIC connection, because then, essentially, it's a built-in way of associating these things with each other and knowing that this is the signaling for this set of media streams, as opposed to starting to open multiple QUIC connections.
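The "poor man's offer/answer" idea can be sketched in a few lines. The message layout below is entirely hypothetical (JSON over the connection's first pseudo frame); the real sandbox presumably uses the configuration string that a WebCodecs VideoEncoder produces for VideoDecoder.configure(), which this stands in for.

```python
import json

def make_init_message(codec, width, height):
    # Hypothetical first "pseudo frame": the decoder configuration
    # travels in-band, ahead of the media, on the same connection.
    return json.dumps({"type": "config", "codec": codec,
                       "width": width, "height": height}).encode()

def handle_message(state, msg):
    """Receiver side: consume the config, then expect media frames."""
    obj = json.loads(msg)
    if obj.get("type") == "config":
        state["decoder_config"] = obj  # configure the decoder from this
        return "configured"
    return "media"
```

Because signaling and media share one QUIC connection, no extra machinery is needed to associate the two, which is the point being made above.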
B
One of the big discussions we had in WebTransport about session IDs was all about trying to collapse the total number of QUIC connections we had, so that they didn't explode. And people were saying that for deployments this was actually a big issue: you don't want to have a connection explosion, because it limits your scalability. So anyway, those were some of the things that came to mind, Spencer; happy to talk more about it.
A
Yeah, I'm just sort of thinking off the top of my head here; maybe this is a bad idea and it won't work, but it occurs to me we could just say: if you want multiplexing, well, WebTransport has defined a perfectly good multiplexing scheme, and you don't have to be a browser to use WebTransport.
A
You know, just use that. That might be worth thinking about. This is something I just thought of just now, so maybe there's some fatal flaw in it, but it might be something worth thinking about.
I
Okay, thanks. Then I think I would suggest that we move on to the other two issues first, and have the discussion on this one in the issue tracker.
I
I also brought this one up in the presentation earlier: the length field on QUIC streams. So far we only added the sentence I mentioned earlier, that QUIC streams have to be closed after a packet was finished, so that the receiver can identify the packet boundaries. But it might also be helpful to have a length field, for buffer allocation, or for identifying incomplete frames earlier.
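The stream-per-packet mapping just described can be sketched with a toy in-memory stream; `SendStream`, `send_packet`, and `receive_packet` are illustrative names, not draft or QUIC API surface.

```python
class SendStream:
    """Toy model of a QUIC stream: ordered bytes plus a FIN flag."""
    def __init__(self):
        self.data = b""
        self.fin = False

    def write(self, chunk, fin=False):
        self.data += chunk
        self.fin = self.fin or fin

def send_packet(open_stream, rtp_packet):
    # The draft's rule: one RTP packet per stream, closed immediately,
    # so the FIN marks the packet boundary without any length prefix.
    s = open_stream()
    s.write(rtp_packet, fin=True)
    return s

def receive_packet(stream):
    if not stream.fin:
        return None  # incomplete: more bytes may still arrive
    return stream.data
```

Without a length field, the receiver only learns the packet size at the FIN, which is exactly the buffer-allocation concern raised next.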
I
So there's a suggestion to add a length field, and then, if we add a length field, there's the question of what kind of length field. It was mentioned that the 16-bit field from RFC 4571 may not be enough, because the payload may be longer. Alternatives are using units of four octets, or a longer field, or QUIC variable-length integers, which we have been using for something else already. Any opinions on this one, or are there any cons to adding a length field? Maybe more specifically, Spencer?
I
Oh, no problem. Jonathan?
A
Sorry about that; I always get confused between raising my hand and turning my mic on. So I guess my question here, in terms of the length field, is about APIs for the incomplete-frames question. I'm certainly asking about WebTransport, but I guess it might be relevant otherwise: do they distinctly tell you the difference between a stream that was reset and truncated, versus a stream that was received
A
up to its FIN? Because if they do, you could identify incomplete frames that way. Buffer allocation obviously is still potentially an issue.
A
I mean, the downside here, obviously, is just the extra bytes you're spending. I don't think there's particularly any other downside as such, other than just, architecturally, you have the streams doing the framing, so why do you need the extra framing here?
A
My inclination would be the variable-length integer, just so as not to spend the bytes, but still give us the flexibility to go very large. I think even the 256 KB of the four-octet units would be potentially limiting as you get to very high bit rates, if you're really doing a frame per packet.
A
So, yeah, I guess: do those APIs... Bernard, as the person who's been working with them, does it tell you that difference? Because that's obviously how you could tell whether you've got a truncated frame or not.
B
So in the WebTransport API, Jonathan, you're supposed to get an error if you receive a reset, versus what they call a "done" indication if you receive a FIN. So, at least in theory, you should be able to tell the difference. However, what I found is that sometimes the reset doesn't get forwarded; like, in my dumb server implementation, it didn't forward the reset, so I actually didn't get the done and didn't get the reset.
B
And, you know, I raised the question about whether the reset was always reliable, because what it's supposed to do is turn off retransmission, and there's actually a draft that's been submitted to QUIC for a more reliable reset. So that's a question that's been raised, about whether it'll always come through.
A
So you might not get the value you need. All right.
D
If you want to send something, you should be able to send it out and then know whether it's completely received or not, which might be done with a trailing signal that can't be faked, or checksums, or whatever. A length field sent out as a header is not the only possible solution, and perhaps not the optimal solution; we have burned ourselves before on that one. But if we have a length field, then we should definitely go for a variable-length integer.
I
Okay, thanks, that's good to keep in mind. I don't know so much about HTTP/0.9 and 1.1 and what happened there, but yeah, I think we should keep that in mind. A length field might still be helpful for buffer allocation. Peter?
G
About the FIN: isn't the FIN reliable? So you could just have the rule that if you received the FIN, you know you received the whole thing, and if you haven't yet received the FIN, then you know there's more still to come. So I think the FIN operates as the trailer that lets you know you're done.
G
That said, if we do decide to do length prefixing, I agree it should be a QUIC variable-length integer.
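A rough comparison of the length-field options on the table, as a sketch: the RFC 4571 style is a plain 16-bit octet count; counting the same field in units of four octets stretches it to roughly the 256 KB figure mentioned above; a QUIC variable-length integer (RFC 9000) reaches 2^62 - 1. The `LIMITS` table and `frame_rfc4571` helper are illustrative only.

```python
import struct

LIMITS = {
    "rfc4571-16bit": 2**16 - 1,               # 65535 bytes
    "16bit-in-4-octet-units": (2**16 - 1) * 4,  # ~256 KB
    "quic-varint": 2**62 - 1,                 # effectively unbounded
}

def frame_rfc4571(packet):
    """Prefix a packet with the RFC 4571 16-bit length, showing where
    the frame-per-packet mapping would overflow it."""
    if len(packet) > LIMITS["rfc4571-16bit"]:
        raise ValueError("packet too large for a 16-bit length field")
    return struct.pack("!H", len(packet)) + packet
```

The varint option costs only one byte for packets under 64 bytes and two bytes under 16 KB, which is why it keeps coming up as the preferred choice.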
A
As chair, to interject: is the FIN also reliably relayed in the WebTransport APIs?
G
I'm not sure about that, but I guess, if the other side reset the stream instead of closing it normally, then it doesn't matter what you do with it on the receiver end; it's kind of like undefined behavior.
I
Okay, so I feel like a length field might be helpful, so I will prepare a pull request for that, and then, if there's more to discuss, we can do it in the issue or in the pull request. Then we can continue with the next issue: mixing streams and datagrams. This one was also brought up earlier in the presentation. The current draft only allows choosing between either streams or datagrams, but does not allow mixing both of them in one RTP stream.
I
We mostly did that because we were not sure what the implications would be and whether there would be any possible downsides, but of course there are some potential use cases for this, which were also brought up at the meeting in Philadelphia. For example, you might want to send I-frames in streams and P-frames in datagrams, or do scalable video coding based on streams and datagrams and put different layers in streams or in datagrams.
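The mixed mapping described here can be sketched as a small scheduling rule. `send_stream` and `send_datagram` stand in for a QUIC API and are assumptions, not a real library; the 1200-byte MTU default is a common conservative figure, not a draft requirement.

```python
def schedule_frame(frame_type, payload, send_stream, send_datagram, mtu=1200):
    """Route a frame to a reliable stream or an unreliable datagram:
    keyframes (which must arrive, and tend to be large) go on a stream;
    delta frames that fit in a single datagram go out unreliably."""
    if frame_type == "I" or len(payload) > mtu:
        send_stream(payload)
        return "stream"
    send_datagram(payload)
    return "datagram"
```

A layered (SVC) variant would key the decision on the layer ID instead of the frame type, as suggested above.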
I
One issue I hit when I was experimenting with this is that there was a synchronization issue, which is of course solvable, but the problem there was that I prioritized datagrams, and that led to the situation where the first I-frame was received after the first datagram, and then I kind of had to wait for the first keyframe to arrive. But that is of course solvable, for example using a jitter buffer.
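The startup race described here is usually handled with a small receiver-side guard; this is a toy sketch (the class name and shape are made up, not from the draft), taking the simple option of dropping undecodable delta frames rather than buffering them.

```python
class KeyframeGate:
    """Drop delta frames that arrive before the first keyframe: they
    reference frames the receiver never got, so they cannot be decoded
    anyway. Once a keyframe has been seen, everything passes through."""
    def __init__(self):
        self.have_key = False

    def on_frame(self, frame_type, payload):
        if frame_type == "I":
            self.have_key = True
        elif not self.have_key:
            return None  # nothing to predict from yet; discard
        return payload
```

A full jitter buffer would additionally reorder across streams and datagrams by RTP timestamp, but the gate alone removes the wait-for-keyframe confusion.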
A
Yeah, I mean, obviously the need to wait for the I-frame after the P-frame could happen just because of partial reliability, because of the separate, individual reliability of the streams anyway. Even if they're on streams, they're separate streams; you could have a packet loss on the I-frame, and thus it'll come in later. So I think this doesn't introduce that issue, it just maybe makes it more common. So I guess one question, and I'm not sure which way it pushes the answer...
A
But it's an interesting question: there was a question raised on the Zulip about whether there is actually a substantive difference between using datagrams and using less-than-MTU streams. I guess there are probably detailed differences in terms of how it appears on the wire, and obviously there might be implementation differences in prioritization.
A
But is there actually a fundamental difference between the two? And if there isn't, it might mean that mixing datagrams in is fine, or maybe we don't need datagrams at all. I'm not sure, but do you have any input on that?
I
So I think there may be small differences in the handling of datagrams and stream frames in QUIC, but the biggest difference, I think, would be if you have large frames, like the large I-frames Bennett talked about earlier; then that would...
A
I mean, I guess in terms of, not the stack implementation, but the person consuming the data...
A
...are you going to see substantially different network characteristics? And if the answer is no, I think that means that (a) mixing them is fine, but (b) I'm not sure how useful datagrams are in any case, as opposed to just choosing to packetize some things not as one frame per stream but as, say, multiple frames per stream, over several RTP packets targeting the MTU. You'd have to know the MTU, which is not always the easiest thing in the world.
I
Yeah, so I'm not sure if there's a difference in how it looks for the receiver. I would still think it's useful to have datagrams for less important things.
A
I guess my point is that if that is the case, that those aren't different, that also might mean that mixing them is fine, because it doesn't introduce any new complexities; it only might make certain complexities more common, but those are things that are going to happen anyway. Yeah, okay.
B
With respect to what Jonathan said, what I was seeing is that the stream-per-frame transport plus the partial reliability gives you something very close to datagrams. I'll reserve judgment as to whether it's almost identical, but, assuming the reset and other issues get cleared up, I think you should be able to get pretty close to the datagram performance.
I
Yeah, okay. I think most of the answers were pro mixing, and I think I will prepare a pull request for that, but I would still like to encourage everyone to think about any cons that you see with this. I mean, there may be some that we haven't thought of yet; that's why we didn't put it in there in the beginning. But otherwise, I think it makes sense to go with mixing them. And I think that was the last issue I had to present. Yeah, we have the next steps.
I
Resolve the open issues, obviously; then I plan to submit a new draft, and continue working with Spencer on SDP signaling. And I think that's it for today, thanks.
G
Yeah, I kind of had a question or two. I'm a little late to the party and have only read the drafts recently, but I guess there's a question for everybody: does anybody else think it's kind of silly to just be stuffing an RTP packet, as is, into QUIC, and not take the opportunity to change the packet format to not be as old and crusty as it is? Is that just me?
A
We went back and forth on this, and I think, you know, certainly a lot of RTCP is possibly unnecessary. But a lot of the issues are that you might need to interop between RTP over UDP and RTP over QUIC, which is already an interop question in itself; and then also just the issues of, do you want to...
A
...you know, redefine 30 years' worth of work we've done. There are certainly a lot of features of this that could be dropped. Jörg might have more to say on this, because he's done more of the research.
E
Interesting, I need to grant browser permissions every single time. Okay, yeah, I mean, we are effectively just sticking RTP packets into QUIC datagrams, right?
E
The main question is whether we can actually get more accurate and more fine-grained information compared to what we would usually obtain from RTCP, and we try to optimize only on that part, and that's not even mandatory. So we try to retain as much simplicity and potential backwards compatibility as possible.
B
Yeah, I think we'll need to think a little bit more about the translation aspects, because it seems to me like it's a one-way door: going from RTP over UDP to RTP over QUIC is not hard in a translator, but going the other way may be difficult or impossible. And that also applies when you start adding things like SFrame in there, where you have a translator...
B
...that can't do the right thing because the payload is opaque. So it does lead me to wonder whether RTP over QUIC is a separate animal or not. Like I said, supporting legacy really does help you go from RTP over UDP to RTP over QUIC, but the other way, I don't know. I think there's probably some more stuff to talk about.
G
Again, the question of interop: I think Bernard's right that if you're sending small RTP packets, perhaps you can translate over to a non-QUIC RTP endpoint. But if you're sending big RTP packets, like one RTP packet, one sequence number, per frame per QUIC stream, how do you forward that to a non-QUIC RTP endpoint? I don't know how that translation would work, because then you'd have to break that thing up, and there's no way to do that if you have the one sequence number.
G
Right, that's kind of what I was getting at: if we're already going to do this big-RTP thing with one frame per sequence number, then there's not necessarily a great interop story there anyway, and at that point you can perhaps redefine what a big RTP frame means. As far as redefining 30 years of work: I'm pretty familiar with the part that WebRTC uses, at least, and I could fairly easily define a structure and say, okay, anything that can serialize...
G
...this can be used to translate with RTP, modulo the big-RTP sequence number thing. And then you could say, okay, you could do it with protocol buffers, you could do it with whatever serialization format you want, that kind of thing. So I'd be willing to do that work, but the question is whether anyone else is interested in that kind of thing, where we take this opportunity to redefine 30 years of work in terms of a nice, clean way to specify the fields in a frame of RTP.
A
Yeah, I mean, to the point of the MTU, the gateway-into-RTP-over-UDP thing: I think we can conceivably come up with some way of signaling to everybody, hey, here's an MTU to respect. But I guess the important question is: does anybody actually want to do that? We've all sort of had it theoretically, but is it actually needed? I think, in terms of the new framing...
A
I think it's an interesting idea. Both as an individual and as an editor: if you want to do a quick-and-dirty strawman draft, I'd be interested in seeing it. I'm not saying it's necessarily something we'd want, but it'd be worth seeing, I think. If we started getting a strawman back, we could see, okay, how much work would this really be?
H
Yep. For me, the complexity comes from trying to use QUIC streams. I mean, just replacing DTLS with QUIC and using RTP and RTCP as they are is something very easy to define and use; not sure if it is worthy or not. But I think most of the complexity comes from trying to use QUIC streams, which makes us have to define how the frame gets packetized into a single QUIC stream, and then we have to define how to interop between RTP over QUIC streams and normal RTP.
H
Maybe it's worth it, but I don't think that it is RTP at all.
C
I just wanted to add: we've talked about the gateway thing before, and what I was trying to sell at IETF 114 was basically deferring work on things that start to bleed into the topologies RFC until we're a little further down the road. I thought there was some resonance in the room, or maybe I was the only person saying this, but it's fine to change...
C
...you know, to change our minds, even if we were leaning that way at one point. But, like I said, just to recognize: once we start saying we're going to worry about topology...
C
...it seems to me that's a significantly large amount of work, and if you didn't have to do it at the same time as you were doing the first specification, maybe that would be helpful for us. And I fully expect people will need to do the topology work; the question is just how quickly they need it, and whether they can wait for it. Thanks.
E
Stephan here. I think Peter's question is a good one, but it's not a good one for a breakout meeting here; frankly, this really should go to a BoF. This is really a big thing. And I would not be in favor, obviously, of killing work that's at least halfway, probably two-thirds of the way, to its goal in favor of a proposal...
E
...that's pretty darn close to boiling the ocean. So I would suggest that this debate be held in a different venue, with a much broader scope, and assuming a much longer time frame than we have currently. Thank you.
G
I would like to think, though, about how we could make this forward-compatible, perhaps, with something you might want to do further down the road. But I can think about that offline and come back, as Jonathan said, with a strawman draft. I do appreciate your concern, though, that if it tries to boil the ocean, perhaps it ought to go somewhere else; I was trying to think of something we could do that would be better, but without trying to boil the ocean.
B
Bernard here; yeah, I wanted to reply to Spencer about the topology issues. Basically, my perspective is that many of the topology issues are quite difficult and may not be easily solved, but this draft doesn't have to solve them; it can just note that they're out there. For example, the issue of requiring codec-specific knowledge in a translator, or the SFrame stuff.
B
It can just write down the issues and say, hey, here's some stuff. So I don't think we necessarily have to solve them, but I think it is useful to be aware of them, because, as I said, I think the interop is really going to be limited to going from RTP over UDP to RTP over QUIC, and the other way may be pretty difficult or impossible. So just say it, and don't necessarily try to fix it.
A
Yeah, so I guess two points. On the topology: obviously we don't need to solve the topology issues, but we don't want to paint ourselves into a corner with a solution that makes the topology issues very hard to solve, so we need to be at least somewhat aware of them. And the other point, I guess, on the future thing going forward: MOQ is a working group now.
A
Now it's getting started. It's not solving exactly this use case, but it's pretty close, and maybe, you know, working there to make sure that whatever they come up with works, or could be easily adapted, for RTP-type use cases is a better thing than starting from RTP.
F
C
If it does, that's perfect. If it doesn't, and you want to do something like a BoF at IETF 116, it would be good to have a side meeting or something like that in the IETF 115 time frame, just so we're not, you know, coming up on 116 and saying: are we going to do anything? So I was just suggesting having a side meeting for that, if people are interested. And asking if people are interested in a side meeting might tell you something too.
A
Great, go ahead. Thank you.
A
D
Okay, good, thank you. So this is a quick recap of the RTCP message for Green Metadata. Next slide, please. All right, so Green Metadata, or energy-efficient media consumption, is an ISO/IEC standard developed by MPEG. It specifies a set of metadata to reduce decoder and display power consumption, and one of the metadata types is the interactive metadata for decoder power reduction.
D
That is, a mechanism needs to be specified to carry this data from the decoder to the encoder, or from the receiver to the sender. We have an updated draft: the -01 version was uploaded in July, and the draft specifies a new RTCP payload format for the spatial and temporal resolution request and notification feedback messages.
D
So we're following the AVPF style (RFC 4585). AVPF specified seven payload-specific feedback messages and one application-layer feedback message, and this document specifies two new payload-specific feedback messages. The messages can be sent in a regular full compound RTCP packet or in an early RTCP packet. The following two figures show the detailed format of the request and notification messages: they carry the picture width, picture height, and frame rate to indicate the spatial or temporal resolution information.
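[As a rough sketch of the message layout described above: the PSFB packet type and FMT placement follow RFC 4585, but the FCI field widths and the function name here are illustrative assumptions, not the draft's normative wire format.]

```python
import struct

def pack_green_fb(fmt, sender_ssrc, media_ssrc, width, height, frame_rate):
    """Pack a hypothetical RTCP payload-specific feedback (PSFB, PT = 206)
    message whose FCI carries picture width, picture height, and frame rate.
    FCI field sizes are assumptions for illustration only."""
    first_byte = (2 << 6) | (fmt & 0x1F)   # V=2, P=0, FMT in the low 5 bits
    payload_type = 206                     # PSFB, per RFC 4585
    fci = struct.pack("!HHI", width, height, frame_rate)
    # RTCP length field = packet length in 32-bit words, minus one
    length = (8 + len(fci)) // 4
    return (struct.pack("!BBH", first_byte, payload_type, length)
            + struct.pack("!II", sender_ssrc, media_ssrc)
            + fci)

# A resolution request using the newly assigned FMT value 11
msg = pack_green_fb(11, 0x11111111, 0x22222222, 1280, 720, 30)
```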
D
So, thanks to Jonathan's comments on the email reflector, we made the following changes. First, we assigned the new FMT values 11 and 12 to the request and notification messages, because the original values, 9 and 7, were already assigned to ROI and AR. The second change is a semantics update to prohibit a value of zero for the spatial resolution or the frame rate.
D
Yeah, originally zero meant there is no video, but there are other mechanisms to enable that function, so we updated the semantics to disallow that kind of value. The third change: there was redundant text describing the existing FMT value assignments, so that part was removed. And the last change is typos in the Green Metadata reference URL and the email address; those have been corrected.
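[The second change, disallowing zero, amounts to a simple receiver-side check; a minimal sketch, with names assumed for illustration:]

```python
def validate_resolution_fci(width, height, frame_rate):
    """Reject request/notification values the updated semantics disallow:
    spatial resolution and frame rate must be nonzero (zero no longer
    means 'no video'; other mechanisms cover that case)."""
    if width == 0 or height == 0:
        raise ValueError("spatial resolution must be nonzero")
    if frame_rate == 0:
        raise ValueError("frame rate must be nonzero")
    return True
```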
D
So those are the four changes in revision 01. Next, please. As a next step, we would like to ask for adoption of the draft. That concludes my presentation. Any comments or questions?
D
Well, the reason I ask is because scalable video codecs carry multiple resolutions, with multiple layers depending on each other, and thus requesting a specific resolution.
D
So I'm wondering whether you have actually looked at this.

Yeah, for scalable video coding: for streaming, you can have different resolution representations to choose from, and even for real-time communication, if there is an SVC stream, you can adapt. If the stream you receive is not SVC, then you probably need some metadata format to request the resolution or frame rate change at the encoder side, and I think SVC is not widely deployed for video conferencing or real-time video communication.
A
Yes, yeah, I mean, to answer Harald's question: the point of this is that this is the way that the client, the receiver, says: this is what I want.
A
This is the maximum thing that is, you know, useful for me, or that I want to get, for power reasons, and then what the encoder does to satisfy that is up to the encoder. The encoder can choose: hey, I could turn off a layer, or I can change the top resolution, or whatever, and those could all be valid choices. But that's the encoder's choice.
A
So I think it might be worth having a little bit of discussion of that, just clarifying it, but I think it doesn't necessarily need to be firmly specified what exactly it means in this context.
A
So yeah, speaking as a chair, I think we can certainly do... oh, sorry, do you have a comment?
I
J
I'd just like to add, actually: thanks, Yong, for the presentation, and definitely these kinds of RTCP feedback messages are important for reducing the power consumption at the decoder side. So we actually thought, maybe we can also... we also support adoption of these messages, but with the understanding that a few changes can be added in the near future. So I'd like to basically flag that.
J
Maybe we will be presenting a few more updates to this in the near future, so that we can still present the updates then. Just...
A
J
Probably we will try to come up with some updates before the, I mean, for the next meeting. So I'm not really sure about the adoption process, but yeah, we support this.
A
Yeah, if this is additional functionality beyond the core, beyond, you know, basically doing different kinds of messages, I would be inclined to say that it should be in the thing the group is considering for adoption. But other people may have different opinions; I'll let them talk.
F
J
E
Maybe I can help you with this. I'm following this from a distance in MPEG, and I'm also following this from a distance here. So no, we don't have to wait. Just generally speaking, the Green Metadata stuff is at a stage where it's being fine-tuned in MPEG, but it's not that there is anything earth-shattering coming out in the foreseeable future. So from the viewpoint of an RTP payload format...
E
I doubt that a lot will change, except maybe in the signaling, and maybe, you know, small stuff.
A
E
A
That's the important question, I think, for the...
J
Yeah, so we just wanted to make sure that, even if we do that adoption, it should be possible for us to add a few more details, a few more messages. That's what I would like to check with you, so that if it is possible, we are okay with that. Otherwise we can present them as early as possible, so that we, and the group, can take a decision on that.
A
I mean, the other possibility, of course, is that these don't all have to be in the same document. If you have additional feedback messages for different green metadata, it might be worth breaking them out into a different document if they're doing something different. I don't know; if this is all addressing one MPEG spec, maybe it's fine being one feedback draft, but if they are doing fundamentally different things, like one is requesting resolution information while another...
A
J
The question is: can we do it in the same document, even if, like...
A
Okay, you can... I mean, I guess the question is: is there something wrong with the same document? I just feel like, if you're adding additional feedback messages beyond temporal and spatial resolution...
A
E
...basic mechanisms of the payload format. I mean, what we could do is perhaps just not adopt it now. The timing is such that the MPEG meeting is just before the IETF meeting over in London.
E
Let them go through the MPEG meeting again, and then we revisit it in the main session over there. That's probably the right thing to do, but they...
E
G
A
J
Yeah, I think we would probably prefer to add a new message, yeah.
A
B
So what is the action item for this? Are we going to wait for a new draft and then do a call for adoption, or what exactly is the next step?
A
D
Yeah, just a clarification: we don't plan to add anything new, and we discussed that before. I think the other messages are not incremental, so... but yeah, if... yeah...
A
I mean, ultimately, if one of the authors wants to add another message and Yong He does not, then maybe they should submit a separate draft for the other message, and, you know, we can consider them separately. That might be better, in which case we could do a call for adoption on this document.
A
F
E
B
Are there any other action items or next steps that we should note for the minutes?