From YouTube: IETF113-MOPS-20220321-1330
Description
MOPS meeting session at IETF113
2022/03/21 1330
https://datatracker.ietf.org/meeting/113/proceedings/
D: I don't see my slides uploaded. Or did you approve my slides? I uploaded the MOPS slides a couple of days ago.
F: You should... they should be there. I thought I just uploaded them.
D: Well, okay, let me leave and try to come back then.
F: Yeah, and if it doesn't work out, then I can, in theory anyway, if I've done this correctly, share them and we can do the next-slide thing.
F: Yeah, I was kind of waiting for Kyle, but there he is.
F: All right, I think we'll get started. Welcome to the MOPS (Media OPerationS) working group session. Your co-chairs are Leslie Daigle, myself, and Kyle Rose, who is hopefully online successfully from an undisclosed location on vacation.
F: Before we get any further, we'll do the usual introductory materials, but I will tell you now that, at the end of the introductory materials, I will stop until we find a volunteer to take notes again. I just need a designated stuckee who will be in charge of making sure that the right stuff lands in the shared notes document; the rest of us can be in there helping write.
F: So you don't have to do all the work, but I need somebody who will be the stuckee. Okay. But do note well that this is indeed an IETF meeting, for those of us in the room.
F: It almost feels like an IETF meeting, except that we're missing all of you who are not in the room with us, who also probably feel like it's an IETF meeting, because you're up at some unusual hour and on video. But as a reminder: there are requirements for participating in an IETF meeting, following IETF processes and policies. I refer you to BCP 79 for all the details.
F: This is a précis of some of the things that you are agreeing to in order to participate in this meeting. Of particular note, now that we are actually within arm's length of each other:
F: We need to remember that we are working respectfully with each other, and if you observe any disrespectful behavior, or are the subject of some disrespectful behavior, we do have an ombudsteam; please do reach out to them in order to help resolve any issues. All right, and lots of RFCs to give you the full documentation on what that means, where to go and who to see.
D: So, Leslie, on the slides: it turns out that when you go to the slides view in Meetecho, using that refresh button in there does help. Logging in and out, you still get an old version of the overall materials, but you need to give it a kick.
F: I understand that, okay. Hang on, I'm still trying to get back to the agenda, because this is all just killing time so that you can talk amongst yourselves and figure out who's going to be the stuckee after 4 o'clock.
F: All right, so this is the slide that I wanted to make sure to get to.
F: I think, by this point in the day, it's possible that in-person participants understand that scanning the QR code is the fastest way to sign the blue sheet for this session; but otherwise you can just log into the onsite tool, which will also log you into our blue sheets. It's how we know how many people were here, so we can plan for future meetings, so we would appreciate it if you did. Plus, also, we will be running the question queue out of the Meetecho interface.
F: Okay, I think that is... and yeah, remote participants: please make sure that your audio and video are off unless you are presenting or asking a question. Okay.
F: So he'll just have to make do. All right. So, the agenda. We have noted... well, we have not bashed the agenda. We do have a scribe; we have a minutes taker. Jabber scribe: is somebody willing to say that they'll keep an eye on the chat?
F: Well, Eric's nodding his head, so I think... okay, Eric will make sure that we don't lose track of the chat. Are there any further bashes to the agenda?
H: Sorry, those of you online don't get to scan it. Anyway, for onsite it's in Grand Klimt Hall 2. It's a non-working-group-forming BoF, and the kind of high-level goal is to judge the need and scope of interest in work on one or more new media delivery protocols built on top of QUIC. My co-chair is Alan Frindell.
H: You have path migration to support clients moving between networks; you have a user-space implementation, making it fairly easy to deploy. So building on this can simplify the protocol and improve the possibilities to deploy. Next slide.
H: Yes, a personal observation here: on-demand media is probably not targeted by this, but I'm rather questioning how much of the above we need to tackle. Next slide.
H: You have the big guys up there speaking, speaking, speaking, and then you get the questions and answers, and suddenly someone... okay, you want to ask a question, and you want to actually let them ask live, easily, without any kind of... I mean, we notice here, when you add someone there are short pauses, etc. Can we get something that works without any kind of client change?
H: Slides... it's not... yeah. So there are a lot of questions for this BoF, but if you want to prepare, you could take a look at some of these documents that have some inputs or proposals, etc., or things that have been done, to maybe better understand where people are coming from. So, yep, take care.
H: I don't have that much more. I think you're welcome to come to the BoF on Wednesday morning, having prepared your questions, etc.; follow along the discussion, contribute with your views, etc. So yeah, you can take the next slide. Any questions from this session about the BoF? Not trying to hold the whole BoF here, but if there are any questions, or things that you should think about?
K: Sanjay Mishra. Yeah, on your very first slide you had CDN acknowledgement. So I know you were short here. You know... yeah.
H: And I think it's... our protocol maybe does not acknowledge them, but if you're actually looking at a system-wide view, you usually need something. How do you deal with fan-out? Who is part of the fan-out? It's affecting security and things like that: what security model can you have with a CDN, going through those; having multiple CDNs, because if you're large enough, a single CDN is not enough, you probably have several, et cetera. Those kinds of questions. These are things we need to consider. So, yeah, Glenn.
D: Hey there. Sorry I couldn't be there in person. So, one of the interesting use cases that I don't see on your list, that you might want to consider if, in fact, this does become a working group or the work gets taken up...
D: ...someplace, is the situations in which you end up having, because of your media workflows and the practical reality of who's going to support QUIC and who might not, a case where you have to do transitions along the media workflow: potentially from QUIC to some other transport, back to QUIC even. And those transition points may be areas of either, like, a best-practices document, or even maybe some innovation in the QUIC space around those transitions between protocols.
D: It's just the reality of complex media flows that you end up with these, like, you know, islands of things that don't support the latest and greatest, and you have to sort of transition in and out, kind of like, you know, 6to4 back in that other world. So just that one comment; the other...
D: ...but I'll talk about it in the SVA slides that are coming up next: the Streaming Video Alliance is putting together a QUIC streaming PoC, not to sort of do a bake-off of the QUIC protocol, but to really build a test environment for media operators who want to understand QUIC, evaluate it against their current operating environment, and understand what it means. So just a pointer over to that; it may be relevant work to the stuff you guys are doing in MoQ.
L: Mo Zanaty. Just to chime in on the point about CDNs: two of the drafts going into the BoF, the QUICR drafts, do explicitly mention relay nodes. I don't think they call them CDNs as such, but it essentially is a CDN. So they have an explicit provision in the protocol for those relay nodes, their function, and how they handle publications and subscriptions.
K: Sanjay Mishra. A real quick question, again to Mo: you mentioned a couple of RFCs or something; can you repeat that again? A draft? It's QUICR.
D: No, I can drive, I think. Okay... Sergey's in the queue. You have a question for me? Well... how do I drive?
D: No, I want to use them just out of the datatracker.
D: All right, so hi there. I'm Glenn... well, I'll turn my video on so people can see me. Hi everybody, I'm Glenn Deen, and I'm going to give another one of our updates on the Streaming Video Alliance. There's obviously a considerable overlap between the interests of the Streaming Video Alliance participants and the IETF, so we've been doing these updates now for quite a while in MOPS.
D: In fact, every three months. And just for your knowledge, I tend to do these updates in the other direction too, over to the Streaming Video Alliance, so they are aware of what the IETF is up to. Next slide, please.
D: So, for those of you who are not familiar with the good old SVA: there are about 100 members now. These are people who tend to focus on the actual practice of delivering streamed video to people's houses. The people who participate are content studios, streaming services, technology providers and CDN operators. There's a very strong intersection with the IETF, both in these topics and also in participants.
D: A number of people, even in this working session, such as Sanjay, are also very active over in the Streaming Video Alliance, as are myself and Leslie as well. If you want to sort of position what the SVA does: it's a little bit like what an operators' group like NANOG would be to the IETF for protocols. Not an exact match, but pretty close. And the working groups that the SVA focuses on are the ones listed right there. Of particular relevance to the IETF...
D: ...are things like live streaming and open caching, which have direct tie-backs to the IETF, as well as a group I chair, the networking and transport group, which is where the QUIC PoC is being done. We also do VR work, players and playback, privacy and protection, measuring QoE: all the kind of good stuff that relates to taking video and delivering it over the open internet to customers. Next slide, please.
D: Just so you're aware, as I pointed out, a few of us are around. So if you want to know more and have questions after this, find Sanjay in the hall (I see he's there in Vienna) or find me; there are our email contacts for both of us. We can help you understand the group and build bridges. Next slide, please.
D: So let's talk about a few things that have come out since we did the last update. Last update, I talked a little bit about the open caching API becoming available. It was sort of an early release then; it is actually a full release now. There is a URL down there at the bottom. And this Open Caching testbed is, you know...
D: ...one of the key things the SVA has been producing is the open caching specification and the APIs, and we have a lot of people in that room who are building the thing and deploying it. So obviously one of the important things is interoperability in the use of the APIs, and testing of them. So we've stood up a service now where you can actually go in and test your APIs against a reference testbed, to verify that they're up and working and doing the thing they're supposed to do. Next slide.
D: There are three sets of them: an overview architecture; an extension for CDNI, that's the IETF CDNI metadata object model; and the publishing-layer APIs. And I'd also point you, if you're interested in these things, to go over and look at the IETF... or sorry, the CDNI mailing list at the IETF; you'll see documents and stuff that have flowed over from the SVA to the IETF around CDNI and these specifications. So there's a strong linkage there for us. Next slide, please.
D: There's another paper I wanted to point out to you. This is a fairly long white paper that the SVA recently published; it's around 5G edge cloud, and, you know, given the work the IETF has done around network slicing and other things like that...
D: ...I think there's a lot of interest here that the IETF community would find worth reading. It's about 110 pages, fairly long, and we tried to balance between a little bit of marketing (not a lot of marketing, really) and technology, without getting down into the weeds. So it really should appeal to the IETF reader of this white paper. Next slide, please.
D: So I mentioned this when Magnus was talking. One of the things we've been looking at from an operating perspective for delivering video is: everyone's talking about QUIC, and from the IETF perspective, sometimes QUIC is like "hey, we shipped it, it's ready to go." From the adopter perspective, it's still early days in some cases. And so one of the questions that a lot of implementers and users of QUIC in their media workflows have raised is: well, this looks interesting, but it's a pretty big change for us.
D: It's shifting from the world we know, which is TCP-based, with all the optimizations we do there, and all the, you know, tools we have for end-to-end monitoring and problem fixing and detection. Those all change.
D: What changes in my workflow? And so one of the things the SVA is doing is producing a testing PoC that is effectively working to produce a sort of end-to-end reference evaluation architecture, which would allow anybody who wants to evaluate QUIC, or deployment in their media workflow, to take that, along with instrumentation recommendations and proofs of concept.
D: How you would do that in players and on servers, build that reference, and document a sort of test implementation for your company or your organization to be able to use QUIC and do a meaningful evaluation of how it fits into your workflow. This is not meant to be a, you know, QUIC-versus-HTTP/2...
D: ...it's not, you know, meant as "this is better than that other one." It really is an ability for organizations who want to adopt QUIC to understand the impacts and evaluate it. And so reference testing infrastructure is something a lot of people identified, and something we're working on right now to produce and publish. Next slide, please.
D: And finally, if you want to get in contact about any of these mentioned projects or slides, or just information in general, here are some contact points. You can reach out to myself or Sanjay, as I mentioned; there's also Jason Thibeault, who is the executive director over at the SVA, and he can, you know, help point you, put you in contact with the right working groups, and/or tell you about what's coming up from the group. And with that, I'll take questions.
F: Thank you. All right, moving right along, next up is Julia Kenyon.
F: Alrighty, and I assume I am sharing the slides.
M: All right... video... hello. Hi, I'm Julia Kenyon. I work for a company called Phenix RTS. If you've never heard of us, that's not surprising. We do streaming using WebRTC, and we've had a lot of use cases that intersected with DASH over the years. So last May, a bunch of colleagues joined DASH-IF to talk about what kinds of use cases we could support if we were to integrate DASH and WebRTC.
M: So if you pull up the next slide: there are a number of different use cases you can get from integrating DASH and WebRTC. One is fallback from WebRTC to DASH: if the device doesn't support WebRTC; if the network connection isn't good enough to sustain WebRTC and you have to do more buffering-like support; or, if someone's offering a real-time experience that's premium and someone's not paying for it, they can fall back to the higher-latency DASH stream.
M: There are also use cases where you use content that's interleaved: you have real-time WebRTC and then you interleave that with DASH, so you're switching back and forth between DASH and WebRTC, for an ad period, say, or pre-recorded content like a movie, and then interactive periods where you talk to, say, the director, stuff like that. Or you could have content that's concurrent, which would be overlaid: pre-recorded ads for pre-recorded DASH content with supplemental live WebRTC streams, or a lot of co-watching experiences that people think are really cool.
M: WebRTC uses SDP, which is unique for each client and typically only describes one audio and one video. In DASH, the client selects the media, the bitrate and the codecs; WebRTC just negotiates the codecs between the server and client, and the server is what's adapting the bitrate. In DASH, subtitles and captions are standardized, whereas in WebRTC all current implementations are proprietary. DASH uses buffered and time-synced timing, whereas WebRTC just immediately renders everything; it doesn't do much buffering, because that always adds latency, which is bad. Even for low-latency DASH...
M: ...you can get down to three or five seconds; WebRTC is usually less than a second, often less than half a second. DASH distribution, of course, is via CDN, which is low-cost and widely available; WebRTC servers are more boutique. They're using standards, but they're all proprietary implementations. And DASH has first-party libraries, whereas WebRTC is built directly into browsers, so most people don't even need to download anything. Next slide, please.
M: So how would that work? For hybrid delivery, you'd have a publisher publishing live content. It would go up into the cloud, where it's transcoded and packaged; it gets delivered as a WebRTC stream through a scaled delivery system to the subscriber. That same content can get pushed out into a DASH manifest in chunks and delivered via CDNs. Or pre-recorded content can go into that same processor for DASH, which provides a manifest and chunks, and through the CDN to subscribers.
M: So you'd get your typical DASH player getting MPDs and segments, and then that would talk, via a set of APIs, to a WebRTC client, which is the standard stack: an HTTP client, a WebSocket. That then talks to the WebRTC server, and they just go back and forth, using a set of APIs that still need to be defined. Next slide, please.
M: So the workflow would look like this. The DASH player gets the MPD, which comes back with a set of WebRTC adaptation sets. The DASH player would pick out which tracks it likes; so if the MPD says "hey, there's a German track over here and an English track over there," it would know your preferences, or it would ask you and you could select one.
M: It would send the information about those specific streams to the WebRTC client, which then does very standard SDP negotiation, except that that's not standard yet, but we'll get to that in a minute. It does this SDP negotiation, then establishes the WebRTC connection, establishes a WebSocket connection, and then media starts flowing from the WebRTC media server, as well as events over that WebSocket.
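The startup sequence described above (parse MPD, pick tracks, negotiate, open the media and event paths) can be pictured as a small driver. The sketch below is a plain-Python illustration only: it is not Phenix or DASH-IF code, and every class, field and event name is invented, since the APIs between the DASH player and the WebRTC client are, as noted, still to be defined.

```python
# Illustrative model of the hybrid DASH/WebRTC startup flow described
# in the talk. All names here are hypothetical, not from any spec.
from dataclasses import dataclass, field

@dataclass
class AdaptationSet:
    kind: str      # "audio" or "video"
    lang: str      # e.g. "de", "en", "und"
    endpoint: str  # where the WebRTC client would negotiate

@dataclass
class HybridSession:
    events: list = field(default_factory=list)

    def start(self, mpd_sets, preferred_lang):
        # 1. The DASH player parses the MPD and picks the tracks it likes
        #    (here: audio in the preferred language, plus the video track).
        chosen = [s for s in mpd_sets
                  if s.kind == "video" or s.lang == preferred_lang]
        self.events.append("mpd_parsed")
        # 2. Track info goes to the WebRTC client for SDP negotiation
        #    (a step that, per the talk, is not yet standardized).
        self.events.append("sdp_negotiated")
        # 3. WebRTC connection plus a WebSocket for events are established.
        self.events.append("webrtc_connected")
        self.events.append("websocket_open")
        # 4. Media starts flowing from the WebRTC media server.
        self.events.append("media_flowing")
        return chosen

mpd = [AdaptationSet("audio", "de", "wss://a"),
       AdaptationSet("audio", "en", "wss://b"),
       AdaptationSet("video", "und", "wss://c")]
session = HybridSession()
tracks = session.start(mpd, preferred_lang="en")
```

Running the driver leaves the session in the media-flowing state with the English audio and the video track selected; the point of the model is only the ordering of the steps, not any real API.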
M: As I kind of alluded to earlier, on the current state of real-time streaming: the WebRTC stream is standard, but getting there is not. So if you want to connect to a system that's doing WebRTC streaming, you need to get some kind of proprietary manifest, and the system to do the session negotiation is also proprietary.
M: Okay, that's fine. So WebRTC has got a lot of work to do as far as the standardization efforts: a signaling protocol, which is currently in work as far as the WHIP protocol, which is a bit like an ATM machine. There needs to be a standard session-management protocol; there needs to be some kind of stream switching that doesn't require starting over again with SDP negotiation.
M: There's a bunch of work around time synchronization (time synchronization with DASH) and collecting metrics. DASH clients collect a lot of good metrics that it would be good for WebRTC sessions to also collect, then translate into existing metrics, and then send via the APIs that were in the system diagram.
M: And DASH-IF needs to determine the APIs that go between those WebRTC clients and the DASH clients; try to define how WebRTC information gets represented in those MPDs; decide whether DASH and WebRTC would both render to a single browser video element or switch between two; and support hybrid operations between DASH and WebRTC.
M: I think that's the last slide... oh sorry, okay. So DASH-IF worked on a report that summarized this; there's a summary of that report at this link, and the full report is at the other URL. We also have an interest survey, where we're trying to find out which people are interested in furthering the standards work that we've defined, and what use cases they're interested in.
D: Yeah, hi there. Nice presentation, Julia, thanks. Glenn Deen here, from Comcast NBCUniversal. So, a couple of questions. One: in this sort of hybrid workflow, how would trick play work, like the ability to pause, rewind, jump back to live?
D: And the other one was: you know, watermarking is done today through DASH manifests with the A/B variant selection trick. Any thought about how you do that in this world?
M: Well, I'll start with trick play. The way people are doing it right now is: you're on the live edge and you're watching, you know, what happened on the football field just a second ago, and you can pick which kind of football you like, American or real (so sorry, not sorry). So you're on the live edge and something happens, and you want to back up; so then you have an option, usually in the UI, to pop backwards.
M: When you go backwards, you're then going into the DASH world, right? So then you're in the DASH world: you can scrub, you can do all the usual things you can do in a recorded stream. And then, when you watch and see "oh, that's why that guy got a red card, or got a flag," you can pop back to the live edge, and then you're back on WebRTC.
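The live-edge/DVR handoff just described is essentially a two-state toggle: live viewing rides WebRTC, and any backwards seek hands playback to the buffered DASH stream until the viewer returns to live. The toy model below illustrates that behavior only; the state names are my own, not terms from DASH or WebRTC.

```python
# Toy model of the trick-play handoff: ride the live edge over WebRTC,
# drop into buffered DASH for scrubbing, pop back to live on demand.
# State names are invented for illustration.

class HybridPlayer:
    def __init__(self):
        self.mode = "webrtc_live"  # start at the live edge
        self.offset_s = 0          # seconds behind live

    def seek_back(self, seconds):
        # Leaving the live edge: scrubbing happens in the DASH world,
        # where the usual recorded-stream controls apply.
        self.mode = "dash_dvr"
        self.offset_s += seconds

    def go_to_live(self):
        # Popping back to the live edge re-enters WebRTC delivery.
        self.mode = "webrtc_live"
        self.offset_s = 0

player = HybridPlayer()
player.seek_back(30)   # viewer rewinds to see the red card
player.go_to_live()    # then jumps back to the live edge
```

After `seek_back` the player is in the DASH state, 30 seconds behind live; `go_to_live` returns it to WebRTC at zero offset.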
D: And so, with the WebRTC: I noticed in your media flow that you had WebRTC both being packaged as a DASH manifest element, but also direct delivery into clients.
D: How would the direct delivery into the clients work there? Do they basically see a black box of the WebRTC content during trick play?
M: Yeah, and that's kind of one of the things that we want to work on; that's the further-work, additional-reading kind of thing we need to work on. So what we do is we'll actually just have an app that shares a window, and either the WebRTC client or the DASH client can control that window at any given time.
D: Right, yeah. So, an observation (and we'll talk about what I'm working on in a minute), two things: one question, one observation. The observation is: potentially you might run into some issues if the coding rates for the WebRTC versus the DASH content are different, or different frame rates, or different, you know, different versions. You might run into some weird interactions there that you'll have to sort of synchronize: in addition to the time synchronization, the frequency synchronization. That's the observation.
D: The question is... and I assume you're referring to real football being Canadian football when you make that comment. (M: No, Australian rules football, actually.) But yeah, okay. So then, the question about how you would potentially do the watermarking at this point?
M: At this point, we don't. So that's another thing that would be good to kind of hash out: whether you'd have completely separate streams, or would you just say, for the live edge, we're not going to bother with that, or, you know, do some other form of content protection and then just do the A/B selection on the DASH side.
M: Familiar; I used to work at Verance. Yeah, so I mean, you could still watermark the content as it's going through, if you wanted to put that kind of mark in.
M: Great, and everyone can see Ali's comment: everyone should look at the report. And probably do that. So.
F: Yes. Renan, did you want to present the slides yourself, or shall I do it?
N: Hi everyone, my name is Renan, and I will be presenting an update to our draft. This is joint work with Akbar Rahman. Next slide, please.
N: We propose to replace the term "augmented reality" with the much broader term "extended reality". We believe this reflects the scope of the document more accurately; we will discuss the scope in the next slide. Extended reality is a term that includes augmented reality, virtual reality and mixed reality.
N: AR combines the real and virtual, is interactive, and is aligned to the physical world of the user. On the other hand, VR places the user inside a virtual environment generated by a computer. MR merges the real and virtual worlds along a continuum that connects a completely real environment at one end to a completely virtual environment at the other.
N: So we'd like to solicit the working group's view on the proposed scope and the intended audience. We welcome other pertinent issues that the working group would like to include in the draft. Reviewers and contributors are invited to support the draft; the link to the GitHub repo is given on that slide. Many thanks to Kyle.
G: This is... I'm Spencer Dawkins, and I'm really struggling with the first IETF out of the last four or five where I have not had two 24-inch monitors to go back and forth with; others here may feel the same way.
G: ...for things like split rendering. And I'm curious if there's anybody who is going to try to support AR but not use edge computing as a way to support AR; and that's a question for the group, of course.
G: Yeah, I mean, any other way, some of which people in this room may be inventing on their laptops now. You know, I guess I'm kind of asking if there...
G: ...are other ways to do this: do we expect, are we planning, to expand this document, or write another document about, you know, doing it through the cloud, or whatever? And like I say, I'm mostly asking for the working group to think about this. I have been where Renan is, as you know, and so have Ali and Jake: as far as, you know, you've got to work your document and you're not getting a lot of input on it, and it really...
N: I think the use of edge computing essentially comes into play when you are talking of offloading computationally intensive work, which might otherwise result in heated equipment or limited battery capacity (the battery runs out), in addition to your standard requirement for very low latency in such applications.
N: So what we see going forward is that the edge computing solution is an overarching umbrella under which other possible things can also be added. I've had discussions with people who suggested things like Multipath TCP.
N: Just, you know, throwing that to the working group. But we do feel that edge computing has this umbrella-like property, where you can do a lot of things once you have solved the problem of highly computationally intensive applications running on your device: you have just offloaded it to a nearby set of edge servers, and then you can use all sorts of clever tricks, including adaptive-bitrate algorithms, to go further.
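Renan's reasoning (offload when on-device compute would break the latency, thermal or battery budget, but only if the edge round trip still fits) can be condensed into a toy decision rule. The thresholds below are invented purely for illustration; they are not numbers from the draft or from any 3GPP/IETF document.

```python
# Invented sketch of the edge-offload trade-off discussed above.
# Render on-device only while the motion-to-photon budget, battery
# and thermal limits all hold; otherwise offload, provided the edge
# round trip still leaves time inside the latency budget.
# All thresholds are made up for illustration.

def should_offload(frame_ms_on_device, battery_pct, device_temp_c,
                   edge_rtt_ms, latency_budget_ms=20):
    # On-device rendering is acceptable only if it fits the latency
    # budget and the device is not battery- or thermally-constrained.
    local_ok = (frame_ms_on_device <= latency_budget_ms
                and battery_pct > 20
                and device_temp_c < 40)
    # Offloading only helps if the network round trip alone does not
    # already consume the whole budget.
    edge_ok = edge_rtt_ms < latency_budget_ms
    return (not local_ok) and edge_ok

# A hot, slow device with a nearby edge server should offload.
print(should_offload(frame_ms_on_device=35, battery_pct=80,
                     device_temp_c=45, edge_rtt_ms=8))  # True
```

The point is only the shape of the decision: the same rule that motivates edge offload also shows why the delivery path's latency becomes the binding constraint once the compute moves off the device.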
P: There we go... wrong buttons, okay. So, I thought our remit was more about media and less about edge compute. There's, like, the COIN RG (research group) that's doing some work on edge compute. Are you looking there to refine edge-compute use cases?
F: So I thought the question was slightly different. I thought (and I could be way out of line here) it was more that, because the computing is being offloaded to the edge, there are different media delivery expectations and requirements than there would be if it wasn't being done on edge computing. Not the question of, you know, how to do the computing there, which I think would be the purview of a different group.
F: And then Spencer's question was: if you weren't doing it at said edge device, but were doing it somewhere else, like in the cloud (and he's going to come and explain to us where), then the delivery requirements might be different; and therefore, would they be covered in this document, or would that be managed in a different document?
G: Thanks. Yeah, Spencer Dawkins again. So the thing I'm wondering about is whether...
G: ...so, COIN RG is a great research group, but it's a research group: they're not chartered to produce IETF standards, among other things. So I guess my question here is: are we talking about engineering here, or are we talking about research? And even if we're talking about engineering, this is an ops working group, which is an odd place to talk about the engineering stuff.
G: Many of the comments that I had on this draft, versus previous versions of it, were not arguing with the draft; it was more like: are we sure that this draft's contents are pointed at the right research group? I note that MOPS is...
F: So I certainly agree that, if we're off into that kind of research space, we should revisit. But I'm also smelling potential rathole smells, as in: we might have fallen down a rathole. I've pulled up the document, and I would have to reread it again with that lens as well.
F: But I think that we are focusing on the XR side of it, and not on what's actually pertinent here, which is the media challenges, given that the computing isn't being done on the device. Right? So you're creating... and I'm looking at the screen, I'm sure that doesn't help at all, but... the fact that you are trying to create this immersive environment, and aren't doing the compute locally, does create a different set of realities for streaming.
N: ...protocols that you might use to support those use cases, and what are the limitations of those protocols. All these questions arise when you start to think about operationalizing use cases such as the one that we have presented, right?
F: Thank you. I see Cullen in the queue, and I see that Spencer would like Suhas Nandakumar to be in the queue. Cullen?
R: Yeah, I mean, I think this document sort of is operational. I mean, I view it... you know, edge compute and cloud compute: as far as this document's concerned, it's compute, and it has a different latency to get to it. You're talking about the operational latencies of where you put your compute, and how much latency there is to that point; and that's not related to anything that's going on in COIN RG or anything.
S: Can you hear me? Yes... I cannot say no to Spencer, right? I just wanted to point out the Computing-Aware Networking BoF, which kind of captures this: computing, and how you can offload some of the work to the edge nodes. There might be something useful there; not sure, just thought of sharing the thought.
G: Most of the things past the introduction haven't changed a lot. There's a very interesting description of why latency targets are tight with AR, and then there's a description of the TCP considerations, which, what it says isn't wrong. But, you know, if you're doing AR over TCP you're going to bump into a lot of buildings. But for us to really think seriously about what's needed here... and I take Cullen's point that there are people in the world who do this every day...
G: ...so I have no doubt that there is engineering work to do here, and that, if you stood in the right place and squinted hard enough, it would be operational engineering work. But, like I said, I'd just like to have us have a really good sense of what the right scope is for the working group document before we do a lot more wordsmithing.
F
Yeah, and a side note: your memory wins. I just looked it up; the BoF is tomorrow.
F
But to the point about scoping: I think we are fairly clear on scope, and I think the challenge is determining how to make sure that the document stays within scope. The big move between the last IETF meeting and this IETF meeting was the move to get this into GitHub, so that we can pursue it more methodically.
F
So I'm pretty sure that Renan would appreciate some help in going through and identifying which areas align well with the scope that we've agreed on, which is media delivery in an operational sense, and which areas maybe are a little less directly related to work in this group. So, wait for it: volunteers?
G
Well, but you're on a computer, so you could actually stare meaningfully at the people who are remote.
G
I'm willing to help with this, but I think the really important thing is for working group contributors, which is more than me, to be writing issues rather than PRs at this point, right? And I think we should applaud Kyle for setting up the repo for this draft, and we should do the right thing after that.
F
Yep, and certainly a fair point about being able to see the list of people in the room. One of the challenges as we've been working in this mostly remote mode has been really getting that sort of cycle of motion going, so I'm a little concerned that we'll walk out of the meeting room today agreeing on what needs to happen here, and then we'll have the same discussion in a few months in Philadelphia, or remotely in Philadelphia.
F
So I'd really like to make sure that, having agreed that this is a working group document and that there is working group work to do here, we do some working group work here. If the right thing for the next month, and I would say it's the next month, the month following this meeting, not the month before the July meeting, is to focus on identifying issues, let's do that and make sure that we can get this document to line up with, you know.
G
And maybe having a sidebar conversation with Jake while everybody else is listening. Jake was talking about the difference between media delivery and what it takes to do media delivery; I think that was his point in his statement, and I would suggest that that was really useful for us.
P
Yeah, sometimes the buttons are tricky. So I take the point, and I think what Colin said is right: there is some good ops work to do here. One question is how central the compute offload is to this, but maybe another way to focus here is to think that...
P
Presumably there is some component that's there, and talking about what the bandwidth, latency, and reliability requirements are around that would, I think, be a useful direction to flesh out. Like, if you tried to run a system like this, what kind of network are you going to need? Can Wi-Fi do it? And what are the scaling considerations?
P
I guess, as well, because I imagine this museum tour that you've written about: if you try to bring 100 people instead of 15 people, what's that going to do to your Wi-Fi? Or is it even Wi-Fi? I'm not really sure what the state of the art is here. So I'd love to see more about that in the doc, and kind of what can be achieved with what kind of parameters, at a quantitative level.
P
I guess, do you need me to put that in a GitHub issue, or can you take care of that?
F
T
F
N
Now I would like to thank everyone for participating in the discussion, and I just wanted to reiterate that for this iteration we only focused on the abstract and the introduction, because we think it's really important to fix the scope of the document and also to fix the intended audience. That was the goal of this particular update. And to Spencer: we have taken your comments about the TCP section on board, and we have replied to you on that point as well, but really...
U
N
Our goal is to make sure that everyone in the working group agrees with the scope and the intended audience, and we can take things forward from there. Yeah.
F
F
I think that there's material there for the draft authors to work through, but I didn't know if the draft authors wanted to spend some time either talking to some of the points or asking further questions. I'm going to assume that everybody here has read that discussion on the mailing list, because you're all subscribed and active contributors.
G
Yeah, Spencer Dawkins, and noting that there are two other draft editors here in Meetecho, so I invite them to help me. I looked at Eric's comments, and I knew what to do with almost all of them. The only one that I did not know what to do with was his observation that some of the description of latency seemed...
G
I've convinced myself that the definition of latency categories in the draft, which was hammered out at great length in this working group, is reasonable for streaming media, and we can talk about whether we should even mention video conferencing as a streaming application.
G
G
I think I know what to do with pretty much all the comments that Eric made, and thank you, Eric, for being gentle. Let other people express opinions about any of the other comments that may have been made; I'll just go ahead and say this now, if I can. I have been living with the nice people that are doing the MoQ BoF, and the latency categories that they're talking about there are not necessarily in the same world as the operational guidance that we would...
G
We would provide to existing people who are doing video streaming today. So, like I say, Ali convinced me that we're talking about two different problems, and it's okay to not change stuff in one description just because of the other thing we're talking about. Thank you.
Q
Yeah, and all my comments are pretty minor; they are more on the editorial side than really technical. For me, I have two things. The first is the latency: I really wanted to point you at this, and if you confirm that you are okay with it, I'm all set, right? You are the expert, I'm not. And the other point is about living documents: we had the discussion this morning within the IESG, and we do not yet have a way to do this, so it will mostly be static when it's published.
Q
F
Yeah, so it would be great to have a general IETF-wide answer to the question of what to do with living documents. I will just throw out that in an earlier draft of this, there was a discussion with John Levine about how to refer to things, and I think that's a piece of the puzzle as well. Oh, and I see that Warren's head is about to crater. Spencer's in the queue, though.
G
Okay, this is your regular host, Spencer Dawkins. If I'm remembering the conversation that we were having correctly, the previous version of the draft was using something like TinyURL to generate the URLs, and so the question was: what happens if TinyURL goes away? So I think that one reasonable suggestion, which people tell me is not totally crazy...
G
If not everybody has noticed that, because it was added in fairly recent revisions, it's worth taking a look at. But I think it's really useful for us to provide that to the readers, if we don't have to drive the RFC Editor people crazy. Yeah, and we...
F
And I'm trying to get over that, but this is surfacing a lot of the challenges with, you know, what is permanent, what is... I mean, heck, let's get into document formats too while we're at it, yeah. So, I'm really not trying to drive Warren under the carpet. Did you need the mic, Warren? I'm going to let Warren have the mic and then we'll get back to the queue.
J
Somebody's going to have to cut me off, because otherwise I'm going to go on a 20-minute rant. We've tried the living documents thing a few times, and part of the problem seems to be that everybody's got their own use case, so figuring out exactly what the use case would be seems like an option.
J
I think the easiest thing that we came up with last time was: there is one document, which is owned by the chairs or someone, and they just update it to point at the most recent version of another document, which seems to be what everybody kind of agrees is the current best view. It's like the chairs kind of determine consensus that, you know, this thing is what everybody kind of agrees on at the moment, and they just have a document that points at that.
J
It's incredibly messy and ugly, but it at least gets you something for now. But if people have the stomach to have the living documents discussion again, that would be really useful. I still think it's worth doing, even just as one of those endless discussions, like document formats, that just dies.
F
F
All right, Glenn, you're next in the queue. Oh, sorry, conveniently, Spencer, you're after Glenn in the queue, if you really wanted to say something.
D
A couple of things, in no particular order. On the latency discussion: I agree with the comments Ali made, and I'm glad Spencer came around. Overall, you know, latency is going to be a moving benchmark, because it's an area of a lot of operational investment by various parties, both in standards and in technology, to move latency to ever lower numbers. So it's always going to be a moving thing. On the discussion of living documents...
D
You know, one thing MOPS should consider, if we're going to do this sort of thing in isolation and not try to solve the bigger picture, which I agree with, is the concern that the IETF does get requests for citations from lawyers asking, you know, what happened when. Whatever solution we decide on in MOPS should be able to support those legal frameworks, since the IETF does get pinged on occasion by law firms.
G
And this is on that one: my question is just, I'm not going to raise my hand about this at the plenary, right? We all agree? Okay, just making sure.
F
Yes. Okay, so I think that was useful discussion around the points in the MOPS AR, sorry, in the Ops Considerations draft. Sanjay, did you want to jump in on that?
K
Oh yeah, this is Sanjay, just real quick. I mean, the decision on the living document does not have to be made now; we should publish the document, of course, if there's consensus, and if, after the publication, there's a really good use case that we think belongs in that document, then we can always, you know, revisit it. But I don't know if we want to consider this as, like, a living document, you know, forever.
F
Yep, all right. So I think, Eric, you correctly identified the two major issues with the document: latency and the zombie document problem. Okay then, thank you very much for that, and I'll take this opportunity to say thank you very much to Sanjay, who is the document shepherd and did all the work to do the write-up and get it to the IESG. So thank you, Sanjay.
F
All right, and now we should move on to the milestones discussion. I wanted to get there because of the current state of our milestones; I won't say it is woeful, there are certainly more woeful milestone lists. And of course you've all seen these, because I sent all this to the mailing list on Friday-ish.
F
So I've just noted here the items that either seem to be out of scope or whatever. We had on our list to last-call a document on SMPTE's use of, or reliance on, our protocols. Since we don't even have a draft of that, and SMPTE seems to be moving away from some of the really IETF-related work at the moment, it just doesn't seem to fit; I don't think we should have this on our list at all.
F
I think we should punt on that one and revisit it when there are more in-person meetings for both the IETF and SMPTE, and then see where to go from there. For the Streaming Video Alliance document, which also doesn't exist yet, I think we move it out a little bit, because I believe we've updated a possible pointer to a plan; that's enough layers of indirection to cause that document to exist, and I think that by July we should possibly even be seeing a draft of that document.
F
So that's my proposal there. We have done two of our milestones, and, yeah, the November 2021 one is also out because it didn't happen, and then there was a bogon.
F
I think that was a miss; we just didn't clean up the milestones properly last time, and we do have a revised AR use case document. So all of that markup is kind of messy, and the net-net is: I'm proposing this for our updated milestones.
F
F
J
R
Well, I was just going to ask an informational question: how do you imagine drafting the SVA reliance document, and what do you imagine it looking like? Is it just a list of references to the stuff they use that we're doing, or what? What does that look like?
F
So I think it's useful to know what the relationships are between the work that we're doing and the work that the SVA is doing. You know, we've had reports every meeting from the SVA saying, we're doing this, and this is how it touches some of the IETF protocols, for example the work on CDNI, with which Sanjay is intimately familiar, and which was referenced in Glenn's presentation earlier. So, sorry, dry air, I'm going to cough.
F
I think the important thing here really is to understand how entities that are doing more evolved things with video, or more industry-specific things with video, are relying on our protocols, so that we have some understanding of, you know, who our users are and what we might like to take into account. I don't think it's directive in any way; I think it's entirely informative.
R
I kept a draft that just had the status of where all of our drafts and RFCs were, which the W3C could use to look at; it had everything that the W3C depended upon for WebRTC. That was one end of the spectrum, and at the other end of the spectrum, back when we were doing a lot of SIP stuff, the liaison managers kept an amazing Excel spreadsheet that was reviewed at every...
R
You know, every three months in a formal meeting. So, anyway, both of those turned out to be really useful. I'm not quite sure what you have in mind here, but it was useful.
F
Well, thanks very much for those suggestions, and I think we'll certainly keep them in mind. I think somewhere perhaps in between, and perhaps with a little bit of a backing story, in part because, you know, part of the goal is for people to know that the SVA exists and for the SVA to remember that we exist. Spencer, is that a new queue? It's a new queue. Thanks, Colin.
G
This is Spencer Dawkins, and this is actually just a follow-up to Cullen's thing I mentioned earlier. Yeah, we are chartered to provide input to protocol working groups about gaps and deficiencies in the protocols that people are trying to use. So I think that producing that document, for someone who's trying to do more than just show a video, is a really useful first step towards saying, do we have...
G
Do we have everything that we need, or, you know, is there other stuff?
G
We haven't really talked much, that I can remember, about gaps, and I think that having MOPS members participating in the MoQ BoF on Wednesday is going to be a really good opportunity for that kind of input to happen. I know we have one person from the broadcast-type business.
G
That person has been active in discussions about MoQ, but a lot of other kinds of operators haven't been represented at all. I think it'd be useful to have that in a non-working-group-forming BoF.
F
Yeah, so I think part of what I'm hearing in what you're saying is that maybe we shouldn't expect to be last-calling the document a couple of months after the first draft appears. So the November 2022 milestone should probably be moved out, or changed from last call to an update or something.
C
F
Okay, all right, that sounds good. I will send that update to the mailing list and, you know, gather whatever other comments show up there, give it a couple of weeks, confirm that it looks like we've agreed on the updated milestones, and then do the updates.
F
So, with that, is there any other business?
F
All right, well, that's the important bit then. I will say thank you, everybody, for coming, virtually or physically, and, yeah, officially close the meeting. Thank you all.