From YouTube: IETF-AVTCORE-20230223-1600
Description: AVTCORE meeting session at IETF, 2023/02/23 16:00
https://datatracker.ietf.org/meeting//proceedings/
B
You'll see the new slides starting at slide 20.
B
Yeah, well, maybe we'll try that. Hold on... okay. So let me see if I get a different... oh yeah, let's.
B
Well, okay, so we can... okay, and let's see.
A
Right, I should be able to destroy just a single window.
B
Great, all right, so now we're... yeah, we're good to go. Okay.
A
Okay, so welcome, everybody, to this interim. Hopefully you're all familiar with Meetecho by now, but here are the basic instructions: you'll need to enable your audio manually, pressing the button at the thing at the top left. You can see if you have audio by looking at the little... it should be a little audio meter thingy there.
A
You can also enable video if you want, if you want us to see you — it's helpful if you want — but video is separate from audio, so you need to enable audio. Also, please keep the audio and video off unless you're chairing or presenting or, you know, commenting, and use of a headset is recommended so you don't get open-mic echo.
A
The IETF Note Well applies, which is basically: (a) you know the various IPR rules, (b) you're being recorded, and (c) follow our code of conduct. That's a very brief summary of something that's in itself a brief summary; please see the BCPs and the privacy policy for details.
A
And, you know, this is a professional environment — please behave professionally. If you feel anyone has not behaved professionally, please contact the chairs or the ombudsteam.
A
And so this is the "about the meeting" part: we have Sam Hurst as the note taker — thank you, Sam — and here's our agenda. We're starting with the RTP payload format for SCIP, which is in IESG review, then green metadata, RTP over QUIC, and viewport region of interest, and then wrap-up, next steps, and any other business, if anybody has anything else.
A
Let's move forward, I guess — oh no, draft status, right! Yes, a number of things published: VVC and cryptex, both exciting. VP9 is still in MISSREF, waiting indirectly on framemarking; 7983bis is approved, with the announcement to be sent.
A
Okay, there were some DISCUSS positions on SCIP, so I will be talking about that here. And we're still waiting for framemarking, but Mo is here, so I can nag him by email. No, please — it shouldn't be very much work, Mo, just please get it done, mostly so we can publish VP9. Wow. And then we adopted a number of things, which we'll be talking about today.
G
All right, thanks, Jonathan. Let's dive right in here. We had a couple of comments from the telechat and the TSVART review, mainly about the section that was missed — we failed to put in the congestion control considerations — so we will be adding that in an upcoming revision.
G
A lot of it is, again, the boilerplate text from the standard RFCs, as well as a particular sentence basically saying that SCIP in itself is limited as far as what the underlying codec is.
G
For example, you know, with MELPe or G.729D you don't really have much you can do other than, you know, increasing the packetization rate or something like that — more packets, more codec frames per packet, or something like that — so the kinds of congestion control mechanisms will be kind of limited.
G
But again, it's really based on what the underlying codec can support at this point, so that's kind of our position on that. We're going to be renaming the section we had put in before to address the change controller — we realized that was a kind of poor choice of wording — so we're going to change it to "SCIP contact information", regarding specifically the SCIP-210 specification, which is under the control of the NATO SCIP working group.
G
There are some other minor editorial corrections that we will be making based on some of the comments — minor rewording here and there — but what we're also looking to do is shift our focus. There have been a lot of conversations about exactly what our RTP payload format is, and we've tried to describe it as best we can without going into too much detail, only because it's hard.
G
It's unlike other RTP formats, where there's a fixed format, and there's only one format, and that's all there ever is. SCIP is a lot different in that aspect: there are many, and depending on what state SCIP is in, the stream can be different.
G
You know, so it's really hard to pin it down into one particular format, which is why we've been trying to avoid doing that. I guess it's caused some confusion, and some of the reviewers outside the group have some issues with that, but we want to emphasize that we want it to be treated basically like an unformatted clear channel, kind of like what RFC 4040 defined. Unfortunately, RFC 4040 doesn't meet our needs, so we can't just use that for transporting SCIP.
G
That's why we're looking for our own — because it doesn't do video, and it doesn't do things like that — so we kind of want to be able to shift the focus on that.
G
You know, again, we'll put some revised wording in the abstract and introduction saying that we want to treat this as a clear channel, with the intent that network devices cannot, or should not, be able to try to profile what our traffic stream looks like — because it's so highly variable that it would seem an impossible task to build a network filter or packet filter based on what the contents of the SCIP RTP stream would be.
G
So that's kind of where we're trying to focus. I'm not sure what the group thinks about that, but we can certainly talk about that as a way forward, again because we've received so many comments from folks wanting to really dive into the RTP payload format — which, in a way, I understand, because that's the focus of most typical RTP payload format RFCs — but again...
G
There was also some talk from one of the reviewers about the security considerations section — specifically, again, the boilerplate text that refers to RFC 7202. The reviewer felt that that was irrelevant because the payload is encrypted. We disagree, only because part of what 7202 talks about is Secure RTP, which goes beyond just encrypting payloads.
G
As you all know, it has payload encryption plus authentication — header authentication mechanisms — as well as doing secure RTCP. So again, there's really no reason why you couldn't also run SCIP over Secure RTP if you really wanted to; I don't think there's any technical reason why you wouldn't. So we feel that that particular security considerations section is fine as it is, at least for the reasons we're stating. So, to wrap that up — I guess, final slide.
G
And again, just on what we're doing at this point to resolve things: we have a version five basically ready to go. We were just waiting to make sure that where we were going with the changes we want to make would be satisfactory, so we don't have to keep doing revision after revision at this point. So yeah, that's where we are. Thanks again.
B
I think it might be helpful to show the working group the actual ballot positions, because I think, you know, a lot of the things you're talking about here do make sense to do, but I don't think it's going to resolve the fundamental — I don't think it'll resolve all the DISCUSSes, because, as you said, they are looking...
B
So that's where I think the disconnect is, but it might help to talk through the actual ballot positions, at least for us to discuss what the answers are — because obviously we know what the answers are to the ballot positions and the questions — and it might be simpler for you to just write some of that in the document. And then, you know, it's all in the SCIP payload info, right? Once you read that, you understand everything these questions are asking.
B
They didn't read it, because you made it non-normative, and so you're kind of in this weird position where they're missing some info, and they don't want to read the document to get it, because they want to convince themselves that it's really non-normative. So anyway, I think it might help to look at the actual ballot positions and then discuss that. Does that make sense, Jonathan?
A
Yeah, I can.
B
Okay, so let's go through them. So we have Francesca.
B
So that's the change controller one, and I think you've got a reasonable path forward on that one, and then...
B
Okay, so Roman, right. So Roman says he didn't read SCIP-210, so then, yeah, he's looking for the format, right. So he's asking: are the details entirely opaque? So, Dan — I mean, once you've exchanged the keys, right, it is entirely opaque. Is that correct?
B
I don't think you have to go into detail on the key establishment process, but, as an example, if you were to say here that once you've reached the key establishment state — which is where you would be once you reach that state — then it is entirely opaque, I think that would address Roman's first question, and then I think that can help you.
B
The other thing I think you need to say, as far as the RTP format is concerned, is that it is essentially dealing with the underlying codec — and I think you do say a little bit of this, but maybe not all together in one section — which is that essentially the underlying codec is doing the packetization, right? So whatever it happens to be — H.264, say — the H.264 is, you know, figuring out where the NAL unit boundaries are and so forth, and handing that to the encrypter.
B
Yeah, you don't have to go into that, which is our sticky wicket, yeah. You don't have to go into it, but the point is that once they reach key establishment state, that's how it all works. So anyway, if you were to describe that, I think that would address Roman's first question, and then — well, let me look at the second one.
G
Again, that's the security considerations section — he's commenting on, again, the boilerplate text that, you know, is the standard security consideration.
B
For that one — just trying to understand what you could do there — well, it appears there's tight coupling, yeah, there is, I guess. But the point here is you reach the key establishment state, and that's really all you need to know with respect to RTP, because there won't be any RTP exchanged... well, I guess you're exchanging the SCIP messages within RTP, and once you reach key establishment state, right, then after that — the codecs only appear at that point. Otherwise...
B
Taking the SCIP messages — I mean, I guess you may need to say a little bit about how the SCIP messages, prior to key establishment state, are essentially just fragmented into the RTP payload, right, with the marker bit telling you the end of a particular message. Is that correct?
B
So that's all you need to say about the SCIP exchange prior to reaching key establishment state, right? That's the part of the RTP payload that just describes how the SCIP messages get sent — you know, there's a length field, and that's it. So anyway, I think those two things would address Roman's comment.
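The fragmentation scheme described here — SCIP message fragments carried in successive RTP payloads, with the RTP marker bit flagging the final fragment of a message — can be sketched roughly as follows. This is an illustrative reassembly sketch, not code from the draft; the `(payload, marker)` packet representation is an assumption for the example.

```python
def reassemble_scip_messages(packets):
    """Collect RTP payload fragments into complete SCIP messages.

    Each packet is a (payload_bytes, marker_bit) pair; a marker bit of
    True signals the last fragment of the current message, as discussed
    above. Packets are assumed to arrive in order.
    """
    messages, fragments = [], []
    for payload, marker in packets:
        fragments.append(payload)
        if marker:  # end of this SCIP message
            messages.append(b"".join(fragments))
            fragments = []
    return messages

# Two messages: the first split across three packets, the second in one.
pkts = [(b"ab", False), (b"cd", False), (b"e", True), (b"xyz", True)]
# reassemble_scip_messages(pkts) -> [b"abcde", b"xyz"]
```

In a real depacketizer, out-of-order and lost packets would also have to be handled via RTP sequence numbers; this sketch only shows the marker-bit framing.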
B
Keep the security considerations section — I'm just saying, if you were to put the stuff we just described into a little RTP payload format section — just create a little section and put what we just talked about in there — I think that would give Roman what he needs to understand the payload format. What we just talked about is just a couple of paragraphs, and that's it.
B
Because then they would have the section they're expecting, and then they would go, "I have it, okay," right. So then let's talk about — okay, so then we have a comment from — okay, another comment from Roman.
B
Okay, then he's talking about secure session establishment protocol behavior, which — again, you don't have to go into any more detail, I don't think, beyond what we've just talked about. I think that would give him what he wants, and then, okay.
B
Those are just editorial, right? Okay, all right. So then we have Sarker — that's the RTP profile one. Well, that's right — I mean, with most codecs, right, the answer would be: it depends on the codec, right, I mean, and...
B
Yeah, so maybe a few words early on, just making it clear that SCIP is just encrypting underlying codecs, and so the underlying codecs determine the answer to these questions — does that make any sense? Yeah. And then there's the "highly variable" one, right — because it's the underlying codecs again. What is "highly variable traffic"? Well, again, it's this concept that it's the underlying codec, right, which could be audio, could be video — that's what you mean, right?
I
Okay, I haven't been able to be part of this from the beginning, due to technical problems within our company here. I would like to — we use the term "codec", throwing it out there a lot, right? The underlying "codec" can be something as simple as a chat session, too. So, right, we need to understand: it appears more like data than it does necessarily a codec going across there. It is just, right, whatever SCIP happens to be using it for.
B
...point of view, so yeah, that might be confusing them a little too, but I think the point we're trying to get across is that there's an underlying encoder or decoder — and maybe a diagram would help — feeding up to SCIP, which is encrypting it. I'm not sure, but I think that's what people are stumbling on here.
B
Thank you. Yeah, I mean — not the bit rate again.
B
Okay, so I think, between those, Sarker's comments can be resolved that way. And let's see — he also asked for a link to the SCIP spec, which I think is easy to provide. And, yeah — he's also saying he's looking for the design principles, and I think that's some of what we've just been talking about.
B
I mean, the way I think about it is like IPsec, right? We have the division between IKE and the IPsec format. This is the equivalent of the ESP format, so you wouldn't put a whole bunch of stuff about IKE in an IPsec document, but you just need to know a few things about SCIP for this all to make sense.
I
Right, that was kind of the point of the email I sent. That's the truth: you don't really need to know SCIP-210 to open a payload type for SCIP and let the traffic flow.
A
Yeah, I think it's, you know — it's not so much a matter of what an implementer needs to know as what the document reviewers need to know here. So, basically, what they need to know to understand whether — you know, does this make sense — and because we've told them that SCIP is informative, they want to know this without reading the SCIP document.
B
Yeah, I mean, like, when I read the SCIP document, I understood all this within the first couple of pages, and then I didn't have the question. But because we want to keep it non-normative, I just think you have to have a little bit to give them the framework, and you can say something like: the SCIP document creates a state machine, and basically the only thing you need to know is that there's this exchange for key establishment.
I
Application exchange, yeah — all of that can go on.
I
In fact, in my opinion, the less said the better, because it doesn't necessarily even have to go through the type of things that were originally discussed. It can just start up in traffic mode, in a particular mode — you don't have the messaging. So, the variability on this, you know... One of the fears that I've had in speaking with vendors of network equipment is, when they first hear it, the comment is...
A
"Don't try" — so basically just give enough description to say, you know, this could be any number of different things; don't try to filter on it. That could be helpful both for, you know, explaining to the IESG what's going on and also for implementers to know: don't try to be too clever.
I
...RTP and our endpoint devices to best practices, but they are vendor specific. Each vendor is implementing their own product, to their specifications, so it will work on a network, and we have to be careful not to put things in here that make people believe or expect certain things out of devices when we don't control that.
B
So, I mean, you know — Dan, for example, in the QUIC specs there's a lot of concern about what they call ossification, which is a similar thing: people expecting QUIC to look a certain way, and then the version changes and, you know, they're afraid it'll be broken. So you could include a little section within security considerations, for example, on deep packet inspection or this kind of profiling, just to make it clear that that's a fool's errand.
B
Just don't do that. So anyway, I think if we keep to what we've just been talking about, that could be a path forward to get past these DISCUSSes: basically, put in the minimal amount of info — you know, what they would get if they read SCIP-210, but they don't want to. So, well...
I
It's — it was the model of what we...
B
Yeah, yeah — I mean, I understand that you don't want to create conflicting normative text or anything in the two documents, but, you know, in the VVC RTP payload and many other things, you basically have this little summary in the beginning that kind of tells you the features of the codec or whatever, and that little section is often what orients the reader. So I think that's what they're looking for.
E
Okay, thank you. Can you hear me? Yeah? Okay, good. So, yes — the RTCP messages for green metadata have proceeded to a WG draft. Next slide, please.
E
Just a quick rewind: the draft proposed two RTCP messages. One is the temporal-spatial resolution request; the other is the temporal-spatial resolution notification. Next slide, please. So the main update in the WG draft is based on the comment we received that the spatial-temporal adaptation should be coordinated with the SDP, so a new section — Section 6, "SDP definition" — was added to the draft to define the rtcp-fb attribute and parameters for the proposed messages. This is along the lines of RFC 5104.
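As a rough illustration of the kind of SDP usage being discussed — an rtcp-fb attribute declared per payload type in the style of RFC 5104's Codec Control Messages — a media section might look like the sketch below. The "tsr" token here is a placeholder for illustration, not the parameter name actually defined in the draft's Section 6:

```
m=video 49170 RTP/AVPF 98
a=rtpmap:98 H264/90000
a=rtcp-fb:98 ccm tsr
```

An offer/answer exchange of such attributes is what lets each side know the peer supports the new feedback messages before sending them.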
E
A comment was received from Nokia that it should note how the SDP may be used in this case. Yeah.
A
I think the question from Nokia is a matter of: if there was any negotiation of the resolution in SDP, how does that interact with these messages? And I think the answer is probably — you know...
A
You know, you should never go above what you negotiated in the SDP, but this does not require a renegotiation of the SDP, as long as it's within the window the SDP allows. But probably you should say that. There's also a question, I think from Magnus Westerlund, during the call for adoption.
A
Yeah, yeah — I mean, as an individual, I would certainly — you know, I think it would be clearer what it's about if we just said it was temporal-spatial resolution, especially because the green metadata document that MPEG did does more than just TSR; it covers a number of other formats also. So making it clear that this is just TSR — for people who are not familiar with MPEG — I think would be helpful. That's an individual opinion.
A
Okay, is that all — any other slides at all? All right, any other comments on green metadata? Otherwise, I'll move on to RTP over QUIC.
D
Oh, I'm sorry — no, I don't want to say something. I just need to do this to get my speakers to work.
A
Oh, you have that bug where it's only — it's not working in the — yeah, sometimes Meetecho goes weird, because they'll use two different modes, and sometimes one of the modes doesn't work. That's fine. All right — who's presenting for RTP over QUIC? Is it Spencer?
J
Yeah, and Mathis is starting. I was making sure that I could unmute when it mattered. Sorry. Okay.
H
All right, I think you can hear me too. Yes — great. We have a short update on the new submission we posted this week, and then we will go over what we're currently discussing with Spencer and Jörg, and, at the end, see what other open issues are upcoming. For our new submission, which we posted this week, we rewrote the abstract and restructured the introduction.
H
Then we added new terminology entries for congestion control and rate adaptation. We will go into a bit more detail on congestion control later, so these terminology parts may change, but we added them for now to have a start. Then, in the last interim meeting in December, I think, we discussed multiplexing a lot, and also on the mailing list. We now added a new subsection on that topic in the draft, and that subsection continues to use the flow identifiers we had before and adds more explanation of how that works.
H
For example, we obviously have the question of what kind of congestion control should be used, because RTP obviously prefers low-latency rate adaptation, while some QUIC implementations may only provide bulk-transfer-optimized congestion control. That also leads to the question of which layer should actually do congestion control, and how they interact if we have congestion control at the QUIC layer and at the application layer. And then — since we already had it before, but now we made it explicit in the multiplexing section — we can have concurrent non-RTP streams.
J
So, where we are so far is that we've had some good discussion in GitHub and on the mailing list. We especially appreciate the review that Vidhi did and her engagement with us in GitHub. I've managed to convince myself that there are multiple issues in GitHub now, but that those issues are not complete and may not be all that well structured — which I get to say, because I wrote them.
J
There are also suggestions we're talking about that are kind of contentious, and I can go into more detail about that — maybe it would be good for me to do that. But, you know, number one: if there are things that we are going to have to do, or have to not do, then that can constrain the rest of our conversation, if that makes sense. So, Mathis, is it okay if I talk for a minute about the congestion control controversies?
J
The issues I've seen with this over the past number of years that we've talked about it: any implementation can do anything it wants at its end, but there's not a defined way to tell the other end to do that. So how do we make sure that we're talking to an other end that will do the right thing? That could be port numbers, which some people don't like; that could be ALPN, which some people don't like; or we could just assume...
J
...that the RTP sender will never cause QUIC congestion control to kick in, because it's interactive media, and just think happy thoughts. And there could be other ideas as well. I feel like I should be looking at Zulip while I'm talking. There's also the question of what we would need to do to conform to RFC 8085, because...
J
...we're running over UDP, and if we're turning off QUIC congestion control — however that happened — we would need to figure out what our own congestion control story is, because we're not running over a congestion-controlled transport. Yes.
B
My comment was two things, one of which is that I don't think you need to negotiate congestion control, right? The QUIC protocol, as long as you conform to it, lets senders choose their congestion control mechanisms. So it's not something that really needs to be negotiated; the two sides don't need to use the same thing in each direction, and it won't cause an interop problem. It might cause other issues, but I don't think you have to really get into negotiating it.
B
With respect to disabling QUIC congestion control — I mean, we've talked a lot in various venues about figuring out what algorithms would work, and, you know, if the document has recommendations, that would be great. But is there really serious discussion about people just disabling it and doing away with it? That's certainly not going to work in environments like a browser, right? You pretty much have to have something there, and I don't know — is that a serious thing that people really want to do?
J
Okay, so I would say three things, and one of them is that Mathis is also in the queue, so he might want to correct me on this. I think there's two levels to this: one is that we're using the language about disabling QUIC congestion control, but the other question is, how do all the entities in an implementation know who is doing congestion control?
J
So if, you know, we could avoid running this over BBR — which is going to do bandwidth probing and then lose packets as it's figuring that out — that would be great. So I guess the question is: how do we know that we're talking to an other end that will do the right thing, if I'm asking it to send something?
B
In the case you just mentioned, Spencer, it's possible for one side to do BBR and the other side to do NewReno, right? They can still talk to each other. The draft can certainly, as you mentioned, note that the probing phase will mess things up, so that probably wouldn't be a great thing to do. But they don't have to signal each other about what they're going to do: each side can decide its own congestion control. It's not negotiated.
J
Okay, and that's fair. What I'm saying is, I'd like for us to adopt what you just said as the way going forward, and if that's not the right answer, we can argue about it on the list or in the right place.
J
The third thing was whether we should have a mandatory-to-implement rate adaptation algorithm, and I don't think we should, for some process sorts of reasons and some technical sorts of reasons — but, you know, I don't know that we agree on that.
J
I don't know who has been looking at the work Christian Huitema has been talking about in MOQ, in the past week or so, for a media-aware congestion controller.
J
I don't know how much work has been done on running them against other congestion controllers, and that would be the kind of thing where bringing them into the IETF, and having the IETF review them for publication as proposed standards, or whatever it would be — basically, how do we get the IETF review of those that would justify putting them in this specification as, basically, a must-implement? Again, I'm not saying we should do this; I'm saying this is the kind of conversation we've been having. I should be...
F
...here — perfect. So I don't think this is a question of whether or not we're turning off congestion control in QUIC.
F
How an RTP-over-QUIC endpoint implements the combination of QUIC and RTP on top of that is, I think, an implementation detail, and really what we're asking is: what is the feedback it's going to use to implement that? So the question is whether the feedback is in QUIC, or the feedback is on top of QUIC. Just like in the WebRTC world, we don't standardize which congestion control algorithm to use, but we do try to standardize feedback mechanisms.
F
We should be talking about what the feedback mechanism is. Are we going to extend QUIC so that it has the timestamps necessary in the feedback already there, so that you can do proper congestion control? Or are we saying we don't want to try to extend QUIC, and instead we're going to do it on top of QUIC, with feedback that's embedded in streams or datagrams? I think that's the real question. How a particular implementation uses that feedback is, I think, up to the implementation.
J
So, there's really — you know, like I said, going back to my slide, I want to improve the quality of the discussions that we've been able to have, by having better issues with better descriptions. So one of the things I'd like to do is — we've had a lot of conversations as if there were two levels, but if an application on top of RTP is trying to do this...
J
...there's really three, and we haven't had a lot of conversation that focused on all three; we've had conversations that focused on two of them. So that's — yeah, especially taken with your comments in Zulip.
J
I should also mention that Harald had a useful comment in Zulip as well. Let me — like I said, let me go back to the queue. Jonathan.
A
Yeah, so I think, you know, on the topic of congestion control — I mean, I agree that, you know, on the question of whether you have congestion control at the RTP level — let me put it this way: I think the decision of what to send — basically the decision of how much is available to send — has to happen at one of the transport stacks, which is a separate question.
A
The decision of what to send obviously needs to be at the application level, whether that's adjusting a codec's encoding rate or, you know, an SFU doing something like scalability considerations, or whatever. But I think, yeah, that probably does need to be clarified. The other question, on the CC side — my expectation, though this obviously needs some experimentation, is that if you're looking at a standard, you know, loss-based or queue-building-based congestion algorithm like NewReno or Cubic...
A
...if you then run a low-delay algorithm — like, you know, Google's GCC or NADA or whatever — on top of that, my expectation would be that, except in extreme circumstances, the lower-level QUIC algorithm would never cut in, because you're keeping your queues low — except in extreme cases, like where you have a sudden drop in available bandwidth. So my expectation would be that those would actually work together.
A
Now, things like BBRv2 might be more problematic, and it's not really clear how things like L4S would interact — I don't have a good intuition for that yet — but it might be that they would, again, just not interfere with each other very much. But I think the issue there is not so much that — again...
A
I think all these are things that can be done at one endpoint, so the issues are then communication between the layers — which, you know, if you're talking about things like WebTransport, does matter for standardization, though not necessarily IETF standardization. Otherwise, it's just advice to implementers: especially, what kind of APIs you need to have available, and what kind of features of your QUIC stack you'd need available. Both of those things are useful in the document, but with different levels of normativity.
J
Right, so like I said, the high-order bit on this, from my perspective, was basically asking people to go look at the issues that are out there now, and let us clean them up between now and IETF 116, so we can have a coherent and well-structured conversation. And then, like I say, give us a hard time at 116 if what we have done is not that.
B
Yeah, I just have a general comment. I've been playing around with this for a while, and one thing I found, and this is where the rubber really hits the road, Spencer, is when you're sending a keyframe, because your keyframe will be 10 times or more the size of the P-frames that you're sending. That's where you can start to get yourself into trouble, because you can talk about average bit rate, but it really doesn't mean all that much when you have this spike around the keyframe.
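[Editor's note: the spike described here is easy to quantify. A rough back-of-the-envelope sketch; the 10:1 keyframe-to-P-frame ratio is the figure mentioned above, everything else is illustrative.]

```python
def frame_sizes(avg_bitrate_bps: float, fps: float, gop: int,
                key_to_p_ratio: float = 10.0) -> tuple[float, float]:
    """Split an average bitrate budget into keyframe and P-frame sizes.
    With one keyframe per GOP that is key_to_p_ratio times a P-frame,
    the per-GOP byte budget is (ratio + gop - 1) * p_bytes."""
    gop_bytes = avg_bitrate_bps / 8 * gop / fps          # bytes available per GOP
    p_bytes = gop_bytes / (key_to_p_ratio + gop - 1)     # solve for the P-frame size
    return key_to_p_ratio * p_bytes, p_bytes

# Even at a modest 1 Mbps average (30 fps, 2 s GOP), the keyframe is ~36 kB
# while each P-frame is only ~3.6 kB: the average hides the spike.
key_b, p_b = frame_sizes(1_000_000, 30, 60)
```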
B
So that's where it really becomes a problem, and there are two things that you need to know. There's the bit rate available, the bottleneck estimate; that's useful, but it's sometimes not enough information when you're talking about a keyframe, because it can be so big, right?
J
B
Not just the rate; one thing which I've actually seen is useful is to know the size of the congestion window, and in particular what that tells you is how many round trips it's going to take to send the keyframe.
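[Editor's note: the round-trip estimate described here is essentially a ceiling division, ignoring cwnd growth and pacing. A minimal sketch:]

```python
import math

def round_trips_for_keyframe(keyframe_bytes: int, cwnd_bytes: int) -> int:
    """Lower-bound estimate: with a fixed congestion window, a frame
    larger than cwnd needs multiple round trips to get onto the wire
    (ignoring cwnd growth, pacing and retransmissions)."""
    return math.ceil(keyframe_bytes / cwnd_bytes)

# The ~60 kB AV1 keyframe mentioned below, through a 10-packet cwnd
# (10 x 1460 bytes), needs about 5 round trips.
rtts = round_trips_for_keyframe(60_000, 14_600)
```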
B
So anyway, we've been trying to work through, in the W3C for example, what information the application would need to know to start adjusting the encoding parameters.
B
And we don't really have implementations, so we don't know whether they're good enough, but Peter's been doing a PR trying to bubble some of that information up, so that when it comes time to do something, you'd have the information you need to decide what your best option is.
J
Yeah, and that's exactly it. When I showed up and started talking to Mathis and Joerg, this is exactly the kind of thing we were talking about.
J
Basically, the other thing (Bernard, tell me if you agree with this or not) is that it seems like we're going to have an arms race between people wanting to send more and people having better compression. And so, with what you're saying about the very biggest frames that we send, I would not be surprised if they were getting bigger over time, at least until somebody comes up with an answer to that.
B
Yeah, because there are two things you can do when you get better compression, right? You can keep the resolution and frame rate the same, or you can decide, hey, this is cool, I can do higher resolution and more sophisticated stuff, and maybe build a better ABR system or whatever you're building. So just having better encoders doesn't necessarily mean you end up sending less, because you could just take on more and try to make it more sophisticated.
B
Right, yeah, it depends on the application. And as you said, sometimes, for example with AV1, which we've been playing with, you'll see the P-frames can be very small, but then, even if you're doing high resolution with a talking head, you send the keyframe and the keyframe is going to be 10 times that size. So it's still going to be pretty big; it could be 60 kilobytes or something like that.
J
Yeah, and Mathis and Joerg were doing the due diligence, basically trying to see if QUIC congestion control was kicking in at all on some sample traffic that they were looking at, and it wasn't. But, I mean, the question is how long we can get away with just thinking happy thoughts that it did the right thing. I'm just saying we need to keep an eye open.
B
F
B
And that's important, because some people are saying, hey, you're dragging out the keyframe transmission time, but the glass-to-glass latency could actually be decreased by having that concurrency. And the second thing that the concurrency does is it actually influences the cwnd and the growth of the cwnd, yeah?
B
J
If I could just mention a couple of things for our note taker: first, I will cut and paste my notes of what I meant to say about the congestion control controversies into the notes, so you'll have that to work with. And just in general, it's really helpful to have the Zulip discussion as part of the notes, because often that's where the real meeting is happening. So back to you, Mathis.
H
Yep, sure. Next slide, please. So after congestion control, we have a couple more issues, which we've grouped into three here.
H
It may be a bit optimistic to say we will discuss all of them at IETF 116, but that's the plan for now: to work on these after we clean up the congestion control issues. Then we have two issues on QUIC multicast and QUIC multipath, which we think may rather be solved in a follow-up document, because those are themselves work in progress, and we're not sure if we will be able to solve or say anything about them in the current version of RTP over QUIC.
H
I think in London we had a discussion about SFrame and SPacket. Depending on what the solution there will be: if there will be some more general resolution which also applies to other things outside RTP over QUIC, we might only reference that from RTP over QUIC, or, if there won't be anything like that, we may say more about it here. So, at least for now, I'd like to defer the discussion about SFrame and SPacket and see.
A
J
Yeah, I was just going to observe that Mathis has a great point, which is, if people want to help with this between now and 116...
J
The issues on slide 19 here are great places for people to look and provide comments and feedback, and maybe even text; definitely, thank you for that. Also the text in the scope sections, you know, what's in and what's out of scope, and especially section one, is pretty heavily rewritten, and if people look at that and see things they don't agree with, please let us know, because that's kind of what we're working on.
B
D
J
Technically, Mathis is leading the charge on these. So, on the slide 19 issues, I will be back for a list of volunteers on the previous slide at 116, okay.
H
So if there are volunteers who want to work on these, that would of course be great. I think issue 45 was already discussed in London; we don't have a final solution for that yet. We should add some text for it, but there were some details which are why I didn't do that yet. And then for the others, of course, if there are volunteers, I'm always happy to have feedback on this, or even to help people write text.
A
C
Hello, thank you.
A
C
Yeah, so thanks for accommodating our request. We recently submitted the proposal document on viewport and region-of-interest dependent delivery of volumetric media, and thanks for accommodating it in this interim meeting. So in this I'll basically provide a brief background on immersive media and the motivation behind the proposal for partial delivery of volumetric media, and then add a few details about what our proposal document is actually providing. Can you go to the next slide? All right.
C
Yeah, thank you. So there has been a substantial increase in interest towards immersive media technologies such as virtual reality, augmented reality and other immersive experiences. That's mainly because visual volumetric media increases the end user's immersion compared to traditional 2D video. An example of such immersive media is point clouds, which basically store a set of data points in space belonging to a 3D object.
C
The data required to store these point cloud objects is enormously high, because we have millions of points in a dynamic point cloud and the number of bits required is pretty high. So the MPEG 3DG working group has developed a video-based coding standard, V3C (visual volumetric video-based coding), and we also have an RTP payload format for transmitting V3C coded data, which was submitted to the AVTCORE working group and is currently at the working group stage.
C
The details are provided in the slide, and there are also plenty of further use cases being discussed in various standards bodies, such as AR conferencing, shared AR conferencing experiences, streaming volumetric media to glass-type MR devices, and other things. So, basically, the motivation behind the contribution is that the bandwidth requirement for real-time transport of this immersive media is so high.
C
So the need to support partial access and delivery of the immersive media content, based on the user's viewport or region of interest, is very important. To enable partial access, a volumetric video frame can be divided into a number of independently decodable tiles, which we call 3D tiling, and all these decodable tiles are mapped into a three-dimensional subdivision of the space. As shown in the figure, the 3D regions are axis-aligned cuboids, defined in Cartesian coordinates with an anchor point and the size of the cuboid. So the volumetric image shown on the left side has been partitioned into three 3D spatial regions. Can you go to the next slide?
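[Editor's note: an axis-aligned cuboid region as described, an anchor point plus a size in Cartesian coordinates, can be modeled directly. A small sketch, with hypothetical type and field names:]

```python
from dataclasses import dataclass

@dataclass
class Region3D:
    """Axis-aligned cuboid: anchor point plus size, in content coordinates."""
    region_id: int
    anchor: tuple[float, float, float]   # (x, y, z) of the minimum corner
    size: tuple[float, float, float]     # (dx, dy, dz) extent of the cuboid

    def contains(self, p: tuple[float, float, float]) -> bool:
        # A point is inside if it lies within the extent on all three axes.
        return all(a <= c < a + s for c, a, s in zip(p, self.anchor, self.size))
```

A sender could use a test like `contains` to decide which tiles fall inside a requested spatial region.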
C
So, obviously, during real-time transmission of the immersive media, the 3D spatial regions that we define basically need to be transmitted to the receiving side, and the receiver can request a portion of the immersive media content through a region-of-interest or viewport-based request. In the region-of-interest case, it can be a static 3D region, or the receiver can request an arbitrary 3D region; in the viewport case, it can request viewport-based delivery of the content based on the current user's viewport.
C
So when the sender receives the request, it basically constructs, from the 3D tiles that are present in the spatial region, the actual 3D regions that need to be transported. So it's essential for the sender to inform the client what region-of-interest information is being provided, so that the client can use that information and, when the user's viewport changes, respond based on it.
C
It can respond back. And sometimes the spatial regions of the immersive point cloud media content are so dynamic that they will be changing over time, and in that case reporting this dynamic spatial region information from the sender to the client is also very important. Can you go to the next slide?
C
So, considering all these things, the problems that we have are: how to signal the spatial subdivisions that are defined and can be used for partial access, that is, how do we define those subdivisions and how do we share them with the client; and also, when a receiver knows about the spatial subdivisions, how can it actually retrieve the content based on its viewport or region of interest and so on?
C
How can that be achieved, and how can the sender inform the receiver about changes in the spatial regions over time? And, basically, in all these cases the required signaling between the endpoints to negotiate the mentioned capabilities also needs to be solved. So, to resolve all these problems, we have proposed the document. Can you go to the next slide, please?
C
So we have various proposals. We proposed RTCP feedback messages, where an RTCP feedback message can contain a request for a static 3D region of interest, an arbitrary spatial region of interest, or a 3D viewport. When a sender receives this feedback information from the receiver, it basically prepares the content for that particular spatial region or viewport and reports back.
C
Okay, this is the spatial region we are transmitting; or, if there is an update in the spatial regions, the sender basically updates the client, saying these are the dynamic 3D spatial regions present in the content. This information can be transmitted using an RTP header extension for real-time transmission cases. And of course all these capabilities need to be negotiated between the sender and the receiver using SDP. So the SDP signaling for static 3D regions...
C
That needs to be done, and the RTCP feedback messages that were discussed earlier and the RTP header extension formats for the above cases need to be defined. So we defined all those things in our proposal document. Can you go to the next slide, please? So, for signaling the above-mentioned information in the SDP:
C
We defined a new media attribute for 3D regions. It basically conveys the number of 3D regions present in the content, their positions, the sizes of the 3D cuboid regions, and so on. And we also have some information about, like, okay...
C
So we wanted to use the payload-specific feedback message payload type for these RTCP feedback messages. The FCI for the RTCP feedback message requesting a static 3D region is shown in the top figure, and for an arbitrary region in the figure below. Can you go to the next slide, please? The RTCP feedback message for requesting a viewport is provided here.
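[Editor's note: the actual FCI wire formats are given in the proposal document's figures, which are not reproduced in this transcript. Purely as an illustration of the idea, a region id plus anchor and size fields in network byte order, one hypothetical packing might look like this:]

```python
import struct

def pack_region_request_fci(region_id: int,
                            anchor: tuple[float, float, float],
                            size: tuple[float, float, float]) -> bytes:
    """Hypothetical FCI body for an arbitrary 3D-region request:
    a 32-bit region id followed by six 32-bit floats (anchor x/y/z,
    size dx/dy/dz), all big-endian. The real wire format is whatever
    the proposal document defines."""
    return struct.pack("!I6f", region_id, *anchor, *size)

# 4 bytes of id plus six 4-byte floats: a 28-byte FCI body.
fci = pack_region_request_fci(7, (0.0, 0.0, 0.0), (100.0, 200.0, 100.0))
```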
C
So when the sender receives the request, it has to basically prepare the content and also inform the client what 3D regions it is sending. In the case of a static 3D region request, it can basically indicate how many regions it is sending and their region IDs; and in the case of an arbitrary spatial region request...
C
So the sender is constructing the data for that specific spatial region: it basically accumulates the tiles present in those spatial regions and finally sends that tile content. In that case it is basically constructing a spatial region for that particular request, and the sender can send the position and the size of that constructed 3D spatial region. Can we go to the next slide?
C
It is just constructed at that point. So, in the case of dynamic 3D regions, the sender is basically updating all its 3D spatial regions, so the client has to be informed; then the client, when it requests based on a region or on the viewport, will know what spatial regions it needs. That information can be transported using the RTP header extension, yeah.
C
A
Yeah, I mean, I think from the RTP-level point of view, speaking as an individual here, the architecture looks good. I know nothing about the content of the data; that's not my area of expertise. But architecturally, the use of header extensions and feedback messages seems reasonable. Like I said, though, I don't know whether the data that's being carried is the right thing. Spencer?
J
Spencer is often confused about a great many things, but is this draft-ietf-avtcore-rtp-v3c-00?
A
You know, this is... that's the draft that we've adopted for sending the encoded data, okay.
C
A
Yeah, I guess. I think this is obviously useful, necessary work as part of the whole infrastructure. If people have any comments on it, let them be known. And so, is this something that people are actually already looking at doing interop with, with multiple people working on implementing it who want to usefully talk to each other?
C
Yes, actually, this is of quite some interest for many aspects of our use cases. That work is already going on in MPEG, where we introduced it; MPEG is also basically interested in supporting this viewport-based request, and in storing the content so as to support partial access of the data.
A
C
Yeah, yes, yes; actually, the viewport and the other things are defined in MPEG, and so we were using those, okay, yeah. So...
C
K
Yeah, just wondering, from a quick scan of the draft: you were talking about coordinate systems that are in meters and floating point. Is that really the model? The video is described in meters and floating point in a global world coordinate system, where it's not the traditional pixel-based or voxel-based references that are independent of any real-world coordinates.
C
So, basically, when we make the 3D regions, yes, it is based on the points; if you can see it in the first slide itself, it is based on the dimensions. But, yeah, the viewport you can also send in real-world coordinates, sorry, in the real-time scenario, based on the distance from the current user's position and so on.
C
So the viewports can be requested in that fashion.
K
It just seems a little odd to me; it didn't seem very interoperable to have some concepts of real-world geometry and the end user's physical geometry in these messages, when the video itself is not in terms of that geometry. It seems like it would be more interoperable if it were relative to the actual video's natural dimensions, not some observer's physical real-world dimensions, which would be difficult to interoperate with.
C
Yeah, so basically the transmitter, the sender, converts that viewport information relative to the 3D objects. So, basically, the sender translates that information into the coordinates of the immersive media content's spatial regions.
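[Editor's note: the sender-side translation described here, from viewer-space coordinates into the content's coordinate grid, amounts to a coordinate transform. A minimal sketch assuming a simple translation plus uniform scale; in practice the mapping would have to come from session signaling, and the names here are illustrative.]

```python
def world_to_content(p_world: tuple[float, float, float],
                     origin: tuple[float, float, float],
                     units_per_meter: float) -> tuple[float, float, float]:
    """Map a viewer-space point (meters) into the content's coordinate
    grid via a translation and uniform scale. Assumed transform, for
    illustration only."""
    return tuple((c - o) * units_per_meter for c, o in zip(p_world, origin))

# A viewer 1.5 m to the right and 2 m in front of the assumed content
# origin, with 100 content units per meter:
p = world_to_content((1.5, 0.0, 2.0), (0.5, 0.0, 0.0), 100.0)
```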
C
K
If you just specify an ROI in pixel-based coordinates of images or videos, that would also be useful, but I would never imagine doing that from some real user-observed metric. I wouldn't try to understand the viewer's physical environment and have them specify RTCP feedback messages based on that physical environment; I would make it relative to the actual video itself, which has internal dimensions, internal dimensions of pixels or something like that. That's what I thought would be a more interoperable format.
C
Yeah, so if you look at the region of interest, it is basically based on pixels; it's not based on meters or anything. The regions are well defined in the SDP based on pixels, and they are transmitted to the receiver, and the receiver basically understands what kind of data, how many pixels, it requires, so it requests based on the dimensions.
C
So that's the thing. In the case of the viewport, it is translated into those, and then it will just send back, like, okay, this is what is required: this many pixels, from such-and-such XYZ position to this XYZ position, I need. That is translated at the server side, and then it basically constructs the tiles and sends the corresponding content as a spatial region.
C
K
Okay, so maybe a few words about that in the draft: the difference between the message formats for the positions versus the viewports.
K
To me it looked like they were all four 32-bit floats, in meters, in a global coordinate reference system; that's what's confusing.
C
Yeah, actually, thanks for that. I think it is for the viewport that we provided that, but in the case of 3D spatial regions it's normal; it's basically pixels, yeah. Maybe I can recheck, and if there is anything, I can update it, yeah.
K
A
All right, any other comments on this?
A
B
Do we have any action items or things to follow up on?
B
Okay, I think that's it for today. We'll see you all at IETF 116.