From YouTube: IETF110-MOPS-20210312-1600
Description
MOPS meeting session at IETF110
2021/03/12 1600
https://datatracker.ietf.org/meeting/110/proceedings/
B: Worries: that's... it's good to know that the chairs have the power to do that, even to each other.

B: I knew I should have done this on two devices. I'll just keep the windows right next to each other. When I'm using Linux, apparently when you switch to another virtual desktop it stops painting the windows that aren't on the current desktop, which is rather annoying, but oh well.

B: Or I can do it. I can't hear you, Leslie.
A: All right, can you hear me? Yes? No? Okay, thanks. Yeah, so we'll get going for real in a minute. We have the Note Well — the standard Note Well. Please make sure that you understand that you are at an IETF meeting and, as such, everything you say can and will be used in the course of IETF proceedings. Standard concerns apply: if you are aware of any IPR, you must disclose it.

A: I will let you all read the slides and the Note Well, and as you are reading that, I would also like to have a volunteer or two to work on the notes in the CodiMD note-taking notepad — with a preference for somebody who can also track the Jabber/chat window and carry forward any questions for people who need it. Or do we even really need that anymore?
A: Okay, thank you — and thanks to Jake, who has said he will lead the charge on the notes. I would certainly appreciate it if others can get into the CodiMD session and help Jake out, so that he's not all by himself.

A: Nope... yep, sorry. There we go.

A: Yeah, cool. So we've done the Note Well, and now is a moment to bash the agenda. This is the agenda as it was circulated on the list, except correctly noting that Spencer Dawkins will be doing the ops cons update. So, any bashes to the agenda, or are we good to go?

A: Not seeing any bashes, I will take that as good to go. Again, thanks. Yeah, you can back up to the agenda, please. We'll go to the milestones discussion at the end of the meeting, so for now we'll dive in. The first thing I wanted to introduce — and I hope Francis is here... yes, he is, excellent.
A: I sent a note to the mailing list to pre-watch the excellent presentation of the work done — the presentation that was shared in the IRTF open meeting, and thank you, Colin, for pointing it out to us: "Learning in Situ: A Randomized Experiment in Video Streaming" by Francis Yan. So I asked people to have a look at that, have a look at the paper, and come prepared with observations and/or questions for Francis. Francis, thank you for joining us. Great, so with that...
H: The IRTF... and then thanks to the chairs for bringing it to the attention of MOPS. I wanted to thank you for this very valuable work, and I found it especially valuable because we do a lot of work comparing ABR mechanisms using simulators and emulators, and you're reporting results from actual measurements, for media streaming on American networks at scale.

H: One of the most interesting things you reported was that it takes about two years of video data to get confidence on even a 20 percent difference between ABR mechanisms. Did I get that right?
G: Yes, it takes as much as two years of data per scheme to measure a 20 percent difference.

H: Fabulous. So I'm suspecting that's the case because you don't see lots of congestion events on ISP paths in the US, with relatively short paths compared to the entire planet, and because you're going over pretty heavily managed network paths. It seems to me that it would be useful to run over less managed network paths and over longer paths, and I know you said in the Jabber conversation on Monday that the problem was the content.

H: The content you have the right to distribute is legally bound to U.S. users — but what about a platform that could be used for content that could be distributed internationally, if you had the legal right to distribute that?
G: Yes, definitely. Puffer's ABR algorithm, Fugu, can work on any kind of video system.

G: We have some differences compared with the most widely used ABR systems such as dash.js. For instance, Puffer is sender-side ABR, we encode video into 10 versions, more fine-grained, and we actually pre-calculate the SSIM of each video, such that we can determine or optimize the QoE beforehand, before sending the video. But other than those differences...

G: Fugu can work on any video system, with any video service provider, and I agree with you that it would be a nice study if we could extend this work to more than one server across continents, because it is indeed one limitation of us serving from one server to all users in the U.S. But I think the conclusion — the observation of the noisy internet — is somewhat generalizable, because we do see that rebuffering events are so rare in practice. That's expected, right?
G: Most of our internet users have high enough network speed to stream 1080p video, and that's why rare events like rebuffering have to be measured on tail users and have to be evaluated on a large amount of data. And actually, YouTube and Netflix confirmed our observation.

G: They didn't say how much data they need to measure ABR schemes reliably, but they did confirm that rebuffering is also very rare in their systems. They have a much larger user base, so maybe they only need one hour of real-world, wall-clock time to achieve the same level of uncertainty we measured in our system over two years. I wouldn't be surprised by that — but in general, rebuffering is rare outside of tail users.
H: And you have this platform open for other researchers to try other things, so that's one of the things I wanted to be sure and call attention to here. Thank you very much, and I will get out of the queue. Thanks.
I: Spencer and everyone else... congratulations on the really solid, very nice, well-written paper. I actually had a question about Figure 6 of the paper.
G: Yes... I'm not sure, so, yeah, I remember this question from the IRTF open meeting. In this specific figure, the MPC controller is responsible for calling the transmission time predictor. If you recall, the transmission time predictor is the neural network that takes a bunch of features and then predicts the transmission time of the proposed chunk — the next chunk we want to send. Then, based on this kind of information, the MPC controller can use value iteration, some kind of dynamic programming, to plan ahead.

G: So in this specific diagram, what we mean by "state update" is actually all the information needed by both the MPC controller and the transmission time predictor — the neural network — to work.

G: That means the input of the TTP: the sizes of past chunks, the actual transmission times of past chunks, the chunk to send, and also low-level TCP statistics — because Fugu, or Puffer's ABR algorithms, run on the server side, so it's easy for Puffer to access the low-level TCP statistics. And also the SSIM values, because in the objective function to optimize, for the model-based controller, we optimize the video quality, minus rebuffering, minus the quality variation.

I: Okay. So the transmission time predictor — does it output a probability distribution that is then sent to the MPC controller? How does that work?
G: So the MPC controller can query this neural network for all the options. Because — okay, let's take another step back. Let's assume that the model-based controller doesn't run any dynamic programming; all it does is brute-force enumeration. The model-based controller wants to plan ahead, say, five steps. That means it has 10 to the 5 options, because at each step we have 10 versions of video as the option.

G: Then, if you think of the model-based controller as brute-force enumeration, we only need to evaluate how long it takes and how much QoE we obtain by sending each of those 10 to the fifth options. And to calculate each of those options, the model-based controller needs to query the neural network...

G: ...for all the transmission times of those chunks. Then we return to the original design of Fugu: instead of brute-force enumeration, we used dynamic programming to make the computation more efficient; and instead of a point estimate as the output, we use a probability distribution, because sometimes the neural network is just not sure which single value is the best prediction for the transmission time.

G: For the next chunk, a probability distribution makes more sense, and we can easily incorporate the distribution into the dynamic programming of the model-based controller.
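To make the planning step described above concrete, here is a minimal Python sketch of model-predictive control driven by a probabilistic transmission-time predictor. Everything here is an assumption for illustration: the toy `ttp_distribution`, the constants, and the QoE weights are invented stand-ins, and Fugu itself uses a trained neural network and replaces this brute-force enumeration with dynamic programming, as just described.

```python
import itertools

QUALITIES = range(3)     # toy version ladder (Puffer encodes 10 versions)
HORIZON = 3              # plan a few chunks ahead (the talk mentions 5)
CHUNK_SECONDS = 2.0      # toy chunk duration

def ttp_distribution(quality, state):
    """Stand-in for the transmission-time predictor (TTP).

    Returns a small discrete distribution of (probability, seconds).
    A real TTP conditions on past chunk sizes/times and TCP statistics.
    """
    base = 0.5 + 0.7 * quality * state["congestion"]
    return [(0.6, base), (0.3, 1.5 * base), (0.1, 3.0 * base)]

def expected_score(plan, buffer_s, state):
    """Expected QoE of a plan: quality - rebuffering - quality variation."""
    score, prev_q = 0.0, state["last_quality"]
    for q in plan:
        dist = ttp_distribution(q, state)
        rebuf = sum(p * max(t - buffer_s, 0.0) for p, t in dist)
        mean_t = sum(p * t for p, t in dist)
        buffer_s = max(buffer_s - mean_t, 0.0) + CHUNK_SECONDS
        score += q - 10.0 * rebuf - abs(q - prev_q)
        prev_q = q
    return score

def mpc_choose(buffer_s, state):
    """Enumerate every quality sequence over the horizon, score each one,
    and commit only to the first step (classic MPC). Fugu replaces this
    enumeration with dynamic programming for efficiency."""
    plans = itertools.product(QUALITIES, repeat=HORIZON)
    best = max(plans, key=lambda p: expected_score(p, buffer_s, state))
    return best[0]

print(mpc_choose(4.0, {"congestion": 0.8, "last_quality": 1}))
```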
G: Yeah, no problem. If that's still not clear, we can take it offline and I'd be happy to show you.
J: Yeah, hi, sorry, I'll try to make mine quick. Hey, Francis. I work at Akamai, and we've been looking at Puffer for several years with interest — nice to see somebody behind it. My question is about bias in your test data: is the majority of the training data for Fugu still coming from the test page that geeky people like ourselves go to, to test Puffer?

J: On the training — when you said you need a year's worth of data because buffering is so rare: what I'm seeing in the real world, looking at services like Peacock or Disney+, is that there's still a good amount of buffering happening. Probably one percent of sessions are buffer-impacted in general right now, and that's a much higher incidence than what you're seeing. I'm wondering if that's because the people providing your input data selectively have better internet connectivity to begin with.
G: If you look at Figure 9, the main results of our experiment, we actually also observe a 0.2 percent rebuffering rate, and on slow network paths we observe as much as 1.2 percent.

G: So I think that's close to what you observe in practice at larger production scale. And in terms of the training data...

G: Maybe I was not clear. The findings we had were obtained from years of data — the main experiments were performed on years of data — but the training itself only takes two weeks of data. On each day, we take the past two weeks of data to train the TTP and then deploy this new version of the TTP alongside the other components of Fugu. So we have a new version of Fugu, and that new version of Fugu is only tested on the future data collected from users watching Fugu in the following days.
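The cadence described here — retrain daily on a two-week sliding window, then evaluate only on future sessions — fits in a few lines of Python. This is a sketch with assumed helper names (`load_telemetry`, `train_ttp`, `deploy`), not the actual Puffer pipeline; the point is that each day's model never trains on the traffic it will be evaluated on.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=14)   # training window from the talk: past two weeks

def daily_retrain(today: date, load_telemetry, train_ttp, deploy):
    """One day of the loop: train the TTP on the trailing window, ship it.

    The retrained model is deployed alongside the rest of Fugu and is then
    judged only on data from sessions that start after deployment.
    """
    data = load_telemetry(start=today - WINDOW, end=today)
    deploy(train_ttp(data))
```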
K: All right. So, Glenn... I thank you.

B: Okay — we lost Leslie off of audio and video, so if you guys want to get started...

K: All right, so why don't we get started here. Happy Friday, and good to see that we have IETF now extended into Friday — maybe a sign of good things to come. Anyway, my name is Sanjay Mishra, and I'm here to give a quick update on at least part of the slides here, and then I'm going to hand it over to Glenn.
K: All right, thank you. For those not familiar with the SVA — the Streaming Video Alliance — and the Open Caching working group within the Streaming Video Alliance, let me just do a quick recap. Open caching, as a framework, is a use case of content delivery network interconnection, i.e. the CDNI working group, in which the commercial content delivery network is the upstream CDN and the ISP caching layer serves as the downstream CDN.

K: Open caching implementations rely largely on the CDNI RFCs, but the working group also does a lot of technical work, some of which does come into the IETF and some of which turns into best practices. Those best practices and the specifications essentially guide the open caching implementation. To that point, the performance management specification that I'm sharing an update on here was published late last year; it specifies performance measurement for open caches.
K: This document in particular essentially outlines key performance indicators of the open caching nodes, specifically at both the ingress and the egress points of the OCN. The measurements may include data such as HTTP response time, time to first byte, transaction throughput, HTTP error count, and delivery bitrate. I've posted the link to the document; hopefully you should be able to get it. If not, you should be able to make a request and then download the document.

K: Okay, so, a little bit of a switch in gears here. This slide is to give a quick update on the intersection of work between the Open Caching working group and CDNI. One of the active, ongoing pieces of work involves extensions to the existing RFC 8007, which specifies the control interface and trigger commands.
K: That is, the upstream CDN can use this mechanism to request that the downstream CDN, for example, pre-position metadata or content, or the upstream CDN can request to invalidate or purge metadata or content. In the same spirit, the CDNI triggers extension draft adds several more capabilities, including use of regular expressions to purge specific content, say within a specific directory path.

K: The draft also adds several knobs for granular control, so that tasks like pre-positioning and purging can be localized based on, say, a location policy — it adds a geo limit on content distribution — and also on a time policy, to allow requested pre-positioning to take place between certain hours. Lastly, the updates include adding a generic mechanism for future extensions to trigger execution; the current document did not allow that.
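As a rough illustration of what such a trigger might look like, here is a Python-built JSON payload in the spirit of the RFC 8007 trigger interface plus the extensions just described (a regex-scoped purge, a time window, a location policy). The property names are assumptions for illustration only; the normative names live in RFC 8007 and the extension draft.

```python
import json

# Hypothetical trigger payload: an RFC 8007-style command plus the
# extension knobs described above. Property names are illustrative,
# not the exact spec names.
trigger = {
    "trigger": {
        "type": "purge",
        # regex-scoped purge within one directory path
        "content.regex": [r"^https://cdn\.example\.com/vod/show-42/.*\.m4s$"],
        # granular execution policy in the spirit of the extensions draft
        "policy": {
            "time-window": {"start": "2021-03-13T02:00Z",
                            "end": "2021-03-13T06:00Z"},
            "locations": ["US-CA"],   # geo-limited execution
        },
    }
}
print(json.dumps(trigger, indent=2))
```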
K: So the tasks will involve essentially consolidating the new work with the old work that's still relevant and publishing a new RFC. We're working with the chairs right now to update the milestones, and most likely we would be targeting to have the document go to the IESG by IETF 112 in November.

K: So that is the plan, and these are the two slides I was supposed to talk to, Glenn, before handing it over to you. But before we do that — any questions on the previous two slides?
C: Yeah, hi there. So I swapped my...

C: So I'm going to walk you through a bunch of other stuff that the SVA has been engaged in, and in particular I'm going to highlight areas, like Sanjay has, where we have overlap — because one of the purposes of this update from the SVA to the IETF MOPS group is to sort of help bridge what is professional industry streaming.

C: Typically these are use cases for things like concerts and news, but very much around things like live sports. And I wanted to highlight that just recently, last month, there was a new document published over in the SVA that talked about best practices for reducing latency in video streaming, and I'll point out that it talked about transports such as HTTP/2 and WebRTC.

C: It looked at QoE. And often when this topic comes up, people say: well, you say "low latency" — what is low latency? These are the definitions that we use within that work: low latency being within the two to five second range, ultra low being one to two, and near-live being sub one second. I'll take questions at the end, but that's one thing. Next slide, please.
C: The SVA is also interested in helping bridge the gap between adopters, implementers, and that code base, and so one of the things that has been spun up is a new effort called SVA Labs. I think this might be of common interest for a lot of MOPS participants, because we're going to be making code available that people can take and deploy. The nice thing about this is that in many cases it's code that's based upon IETF work that the SVA has adopted into its environment to solve particular problems, and is then releasing as open source. Right now, if you go there — this is a new effort we've started up — we have a geo-for-IPv6 JSON schema. That's what we call it; inside, there's also IPv4.

C: We highlight IPv6 because geo identification within the v6 space is, of course, a slightly more complicated process and not as well done to date around the world as IPv4 is. This is a schema that allows you to help update and provide information around the geo allocations of IPv6 and IPv4 address space. In process and coming are the capacity insight APIs and the request routing API that are coming out of the Open Caching group within the SVA. Next slide, please — and referencing back to the actual work that drove that...
C: Ultimately, the geo IPv6 JSON work is a document which the SVA originally published back in October 2019 and then later updated in March 2020. We took the work in a variety of RFCs that we've referenced in the IETF, incorporated it into this work, and allowed it to be used as a means for updating, as I said, the geolocation attribute association for IP blocks. This is getting a little into the weeds, but boy is it important.
C: If you look at the landscape, probably three or four years ago, if we were talking about what the big problems were, we'd probably be talking about: how do we deliver at scale video that's been pre-recorded, such as movies or TV shows, and how to do that efficiently and without buffering — as we even heard from the earlier talk.

C: The new frontier in a lot of ways is how we do that for live. The ability to stream, especially things like sports, is an interesting problem globally for the industry, and it's an interesting problem that consumers are really driving — because if you look at today, historically there's the latency, the lag, of delivery of sports in a live setting.
C: Another area that comes up a lot is discussion around QUIC and HTTP/3. Obviously this has been a huge area of focus for the IETF. The interesting thing is that so far, within the professional media space, it's an area that a lot of people are watching and a lot of people are discussing, but frankly very few video services to date have announced that they've either adopted it or have firm plans to adopt it in the future. I would say that while QUIC is a hot topic for the IETF, the adoption broadly is not quite there yet.

C: But ultimately there's hope, right, from the IETF perspective, that it will be adopted. We'll have to see what the future brings. Next slide, please.
C: So, finally, if you want to get in touch with us, there are three addresses I'll give you. One's mine, over at NBC/Comcast; there's Sanjay, who's at Verizon; and another point of contact is Jason Thibeau from the Streaming Video Alliance. He's the executive director, and he's a good person if you're interested in learning more, interested in getting involved, and in helping the rest of us bridge that gap between what the IETF does and what's being done in professional media. Jason's a great person to go talk to...

C: ...if you want to get engaged with that work. And with that, I guess we'll open it up for questions for Sanjay and me.

A: Well, don't go too far, because when we get to discussing milestones, we have an SVA item to address. So thank you very much for that update, and for putting us right back on schedule.
A: Okay, yes. So, next up: low latency streaming. Will.

J: I really want to share my screen. Yes — I have a very wide screen; you might need to adjust your web browser here.

C: It will go right here. If I get interrupted, it makes the screen display bigger for the rest of us.

J: Okay. So thank you, thank you very much for the invite. I've been speaking on low latency a few times now; I recognize some people in the audience who've probably seen some of these slides or know this field very well, but hopefully this is a catch-up on low latency. And luckily, yes, the SVA — Glenn just spoke about some of the ranges and the names we're giving to low latency.
J: So here's a chart; mine pretty much aligns. We're looking at sub-second below one second, ultra low latency from one to four seconds — I think the SVA's was one to three or so — and then low latency above that. What's interesting is we can overlay some of the applications that we might want to deploy across these latency ranges, and they're intentionally fuzzy at the ends, because these definitions are not solid by any means, nor do they need to be. One person's sport might need to be three seconds delayed; others can happily be at 10 seconds.

J: Commercial broadcast, depending where you are in the world, typically sits at about the six-second mark — or three seconds for DVB-T2, up to 10 seconds for satellite or cable in the US. But the real driver now for latency is social media. This is what is forcing much lower latency: people tweeting scores and goals in a game, or even chatting, at that speed.
J: What I'm going to speak to you about today is really the new developments taking place in streaming, down at the bottom right here, around chunked CMAF and the ability to start decoupling overall latency from segment duration. This gives us end-to-end latencies in the one-second-and-up range, and that's what I'm going to speak about. WebRTC is obviously the real standard choice if you want to be sub-second, so I'm not pitching segmented media at all to be sub-second.

J: We have it working in the lab sub-second, but what I want to talk about is at-scale, international distribution of media. Luckily, 2020 was a rich year for us: we got two standards given to us to do low latency streaming with segmented media.
J: Both of those solutions rely upon chunked encoding of the content. I'm not talking about the transport or the delivery; this is simply how the content is prepared at the encoder. So imagine this is a six-second video segment, an ISO-based media segment. We'd have an index up at the front, the moof atom; we'll have a keyframe, the black box; and then the encoder would produce six seconds of data, and only at the end of that six seconds does it release it, upload it to the CDN, and the CDN distributes it to the client. So by the time you're uploading the very first byte of data here, it's already six seconds old. To address latency, the encoder shifts to a different pattern.
J: The mdat here might only hold one frame — 33 milliseconds of data — and at the end of it the encoder starts an HTTP POST up to the CDN and starts sending this one frame down, and then it makes another index and the next frame. The result is we gain six seconds of latency by doing it, or whatever our segment duration is. This is fundamentally a concept used by both DASH and HLS to reduce latency. So why does chunking reduce latency?
J: Let's imagine we have an encoder making segments that are four seconds long. If you look at AVPlayer from Apple, the default behavior — and that algorithm is copied by most players — is to have three fully formed segments in the buffer before starting playback. If you do that, you're going to have 14 seconds of latency with this stream, because we're observing this encoder halfway through the production of the fifth segment: we've got 12 seconds plus two seconds, 14 seconds. Now, you're a smart player developer.

J: You say: I see an easy way to reduce the latency here. I'm only going to put one segment in my player buffer, so I'm going to trade buffer stability for latency — which is always a trade-off we have to make — and on that same stream I can now reduce my latency to six seconds. But let's look at what happens if these segments were actually chunked. So we're imagining each four-second segment is divided into four chunks.
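The startup arithmetic in this example is worth writing down. Here is a small Python sketch of the numbers above — 4-second segments, joining halfway through segment five — with the buffer policy as the only variable:

```python
SEG = 4.0       # segment duration (s)
PARTIAL = 2.0   # the encoder is halfway through the current segment

def startup_latency(buffered_segments: int) -> float:
    # Latency = the complete segments held back in the buffer, plus the
    # partially produced segment we cannot fetch yet.
    return buffered_segments * SEG + PARTIAL

print(startup_latency(3))  # 14.0 s: AVPlayer-style three-segment buffer
print(startup_latency(1))  # 6.0 s: trade buffer stability for latency
# With chunked segments, the player can instead join at the newest
# keyframe chunk (~2 s back) or decode-and-seek to land under 1 s,
# as the next part of the talk describes.
```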
J: Now, only the first chunk holds a keyframe, so we have to start decoding from the first chunk — which I'm coloring as the darker green segment here — but other than that, we can play through. So now our starting options change. I don't need to grab the full segment; I could start by grabbing the nearest keyframe that's been produced and the chunk after it: 5a and 5b.

J: That would give me just two seconds of latency, and I can improve it even further by saying: you know what, I might have to start decoding from this chunk, but I don't have to start rendering from that point. So I could decode, seek forward into this data, and start playing at some point in the second chunk, which would give me less than one second of latency.

J: I have another option as well, which is to make a well-timed request for the start of the next segment and its keyframe, which would be 6a; that would give me a delay of 2 seconds but then a latency of 1 second. So the takeaways from this slide are that (a) chunking can reduce latency, but (b) the latency you see in a player is primarily a function of the starting algorithm that the player chooses to deploy against the live stream. I could play this live stream with one second of latency.
J: I made a simple animation for a different talk; I'll repeat it here just to show how these solutions function in practice. We have a thirsty monkey and we have buckets; we want to get water to the monkey. In traditional streaming, the encoder would fill up the six-second segment, and only when it's complete would it give it to the CDN. We would do the same: get a complete media segment and give it to the player.

J: The player would retrieve the complete media segment before giving it to its source buffer, represented by the monkey, and we'd be able to drink and watch the video. DASH solves this problem in a way we can model as a set of leaky buckets: we don't ever accrue the entire segment at any one part of the delivery stage; we just allow a continual flow of our media data down. Low latency HLS solves it a different way: they just shift to very small buckets which they empty very quickly. That's a bit of a simplification.
J: So, if you can read small text on your screen, here's just an example of low latency DASH. It's got some differences from standard latency. You've got a latency target now; this is the service provider saying: hey, I'd like you, Mr. Player, to keep a three-second latency on the stream, if possible. These latency targets also have bounds — an upper bound, a lower bound, and a sort of target in the middle — because the reality is that not all...

J: ...players can keep the same latency, and our mindset needs to be that the latency should adjust intelligently to match, especially, the last mile and the player's characteristics. We've put in a UTCTiming element here; that's also important — you can't have any slop in your timing if you're trying to keep two to three seconds of end-to-end latency. And we also have this notion of an availability time offset.

J: This is telling the DASH player: hey, that segment that you normally would have waited for, to be complete at the end of its duration, can actually be accessed early. So, not too many changes to DASH.
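A sketch of the availabilityTimeOffset arithmetic just described, assuming simple numbered segments from a fixed period start — plain illustration, not an MPD parser:

```python
def availability_time(period_start: float, seg_index: int,
                      seg_dur: float, ato: float) -> float:
    """Earliest wall-clock time a client may request segment `seg_index`.

    Without ATO, a segment is available only once it is complete; ATO lets
    the client ask up to `ato` seconds earlier, because a chunked origin
    can already serve the chunks produced so far.
    """
    return period_start + (seg_index + 1) * seg_dur - ato

# 8 s segments starting at t=0: segment 5 completes at t=48.
print(availability_time(0.0, 5, 8.0, 0.0))  # 48.0: classic DASH behavior
print(availability_time(0.0, 5, 8.0, 7.0))  # 41.0: request 7 s early
```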
On HLS, here's the equivalent. HLS introduces the notion of parts — they came up with a new name, but if you're in the CMAF world, this is equivalent to a CMAF chunk or a CMAF fragment, depending on its size. So we have segments here; they're four seconds long, but the parts are basically one second, and the last three segments' worth of parts are described in the media playlist. And down here at the bottom we have a very useful one, which is what's called a preload hint. This describes a part that has not yet finished production, so the player is allowed to ask for it early. This is a simple way to solve a timing problem, because you can ask very early; the origin will block your request and wait for that part to be complete, and then it will release it immediately.
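The blocking behavior is easy to picture as code. Below is a minimal asyncio sketch — assumed class and method names, no real web framework — of an origin that holds a request for a hinted part until the encoder publishes it, then releases it immediately:

```python
import asyncio

class Origin:
    """Toy blocking origin for LL-HLS preload-hinted parts."""

    def __init__(self):
        self.parts = {}   # (segment, part) -> bytes
        self.ready = {}   # (segment, part) -> asyncio.Event

    def _event(self, key):
        return self.ready.setdefault(key, asyncio.Event())

    async def publish(self, seg, part, data):
        self.parts[(seg, part)] = data     # encoder finished this part
        self._event((seg, part)).set()     # release any blocked requests

    async def get_part(self, seg, part):
        await self._event((seg, part)).wait()  # block until complete...
        return self.parts[(seg, part)]         # ...then release at once

async def demo():
    origin = Origin()

    async def encoder():
        await asyncio.sleep(0.5)               # part still being produced
        await origin.publish(1, 0, b"moof+mdat")

    asyncio.create_task(encoder())
    print(await origin.get_part(1, 0))         # returns once published

asyncio.run(demo())
```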
J: There are, in fact, multiple commercial and open-source players available today that, by virtue of these two schemes I've just described, will give you a stable two to three seconds of latency. There are the usual players: you can look at Bitmovin, you can look at THEOplayer, you can look at Shaka Player, and you can look at dash.js — and I'll be demoing dash.js for you later today.
J: An interesting side effect of some of the low latency work comes around synchronization. If you take an external time source, so that you have a trusted time source, and you introduce a notion of playback rate adjustment — the ability of a player to dynamically increase or decrease its distance behind the live point, in other words its latency, by adjusting its playback rate — this allows it to hone in on a common latency target. If you tie those three things together, you get synchronization between players, and you get it in a very cheap manner.
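As a sketch of the rate-adjustment piece: here is the kind of controller a player might run, comparing its measured latency against a shared target from the trusted clock and nudging the playback rate within small bounds. The target, gain, and bounds are assumptions for illustration:

```python
TARGET_LATENCY = 2.5   # shared target (s), distributed with the stream
MAX_ADJUST = 0.04      # cap rate changes at +/-4% to stay unobtrusive
GAIN = 0.05            # proportional gain (assumed)

def playback_rate(live_edge: float, playhead: float) -> float:
    """Rate that hones in on the common latency target.

    Every player runs this against the same trusted clock, so independent
    players converge on the same latency without talking to each other.
    """
    error = (live_edge - playhead) - TARGET_LATENCY  # >0: too far behind
    return 1.0 + max(-MAX_ADJUST, min(MAX_ADJUST, GAIN * error))

print(playback_rate(100.0, 96.5))  # 1.04: 3.5 s behind, speed up
print(playback_rate(100.0, 97.5))  # 1.00: exactly on target
```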
J: There's no need for the individual players to communicate with one another. I have a screenshot here of three independent players — a laptop, an iPad, and a desktop — that are playing the same video within two frames of each other, within 66 milliseconds of each other, but entirely unaware of the other players. They're simply following this rule to play with a very precise latency, and they're given the information to do that. So this is a nice consequence of the low latency work. I need to point out, though, that it's not dependent on low latency.

J: These players could all be 30 seconds behind live and equally synchronized, and I'm really surprised more services don't take advantage of this type of behavior. It's not tied to low latency, but because it's very useful as we move to low latency, I think it's something that could spread further out in terms of OTT synchronization.
J: So let's just look at some commonalities very quickly. Both of these formats require the chunked encoding. They support latencies from two seconds — and when I say "support": you can use these to show sub-second latency in the lab if you want to, but I work for a large CDN; we care about getting media from Europe to the USA to millions of people, so I certainly don't currently recommend going below the two-second mark, and even that is reasonably aggressive. They're backwards compatible; they're cacheable, so we can get scale; they use HTTP for delivery.

J: They support DRM. They support ad insertion — and when I say "support": can you go and buy ad insertion today for low latency HLS? No, because the ad vendors are too slow in their decisioning systems to allow the insertion. But that's not a fault of the format itself; it's a matter of the ad industry catching up with the new latency capabilities of media. It will catch up; there will be solutions like pre-decisioning the ad so it's available at the insert point. Both support multiple codecs.
J: They allow ABR playback, so we can switch bitrates, and, as I mentioned, they leverage HTTP delivery. So, a lot of commonalities — but there are some major differences, and I'm showing green and red here not to invoke good or bad, but simply to invoke yes or no: green is yes, red is no. DASH uses something called chunked transfer encoding, so with HTTP/1.1...

J: ...this is the transfer of what I'll call an aggregating asset — because under HTTP/2 it's not called chunked transfer encoding anymore, and there's confusion here between the chunked encoding of the media and chunked transfer encoding, but this is actually moving it across the network. In fact, LL-HLS can use this under certain modes, so there's a nuance to this bullet point. LL-HLS describes the internal segment structure; DASH doesn't currently — it will in a future update, using something called resync elements.
J: HLS is quite chatty: for both audio and video, you have a playlist refresh with every single chunk; we'll look at some of the consequences of that. HLS objects should be delivered at line speed, which helps with bitrate/throughput estimation, but it also delays delivery a little. HLS supports TS segments — though I will say that today the vast majority of low latency work is being done with ISO-based segments, both for HLS and for DASH; I don't see anybody looking aggressively towards TS deployment. LL-HLS requires HTTP/2 for the last mile...

J: ...DASH doesn't. You need a smart origin with LL-HLS, because of this notion of blocking, and also skipping and some other features that were added to HLS, whereas DASH can still work off a relatively simple HTTP server. And Apple, because it describes the internal segment structure, has a more deterministic startup; with DASH you have to put some logic into your player to figure out where you are in the stream. But there are solutions to both of these. One of the biggest differences is in the request rate between the two formats.
J: So let's hypothesize we have a six-second segment with 333-millisecond chunks — three chunks per second and 18 chunks per segment. My DASH client is basically going to make one request every six seconds, and it's going to get what looks like a slow response from the server, but it's actually the server giving out the data as fast as the encoder is producing it, which is a six-second segment every six seconds.

J: LL-HLS, on the other hand, has to make a playlist request for the audio chunk and then a physical request for the audio chunk every 333 milliseconds, and it must repeat that for the video chunk as well. DASH will give you 20 requests per minute to the CDN edge; LL-HLS on the same scale is looking more like this — I don't know what the refresh rate looks like on the screen share, but it's a lot higher.
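The request-rate gap follows directly from the numbers in the example. A quick back-of-the-envelope in Python — audio plus video, and one playlist refresh per part fetch for LL-HLS, as described:

```python
SEG = 6.0      # segment duration (s)
PART = 1 / 3   # chunk/part duration (s): three per second
TRACKS = 2     # audio + video

# DASH: one open request per segment, per track.
dash_per_min = (60 / SEG) * TRACKS
print(dash_per_min)          # 20.0 requests/min to the CDN edge

# LL-HLS with discrete parts: a playlist refresh plus a part fetch
# per part, per track.
llhls_per_min = (60 / PART) * 2 * TRACKS
print(llhls_per_min)         # 720.0 requests/min at the same cadence
```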
J: ...cadence. There are ways to reduce this, and I'm going to get into those; one of them is using byte-range addressing with LL-HLS, which I think is very interesting, and I want to talk more about that. So — we're a CDN: we care about cache footprint. The more we can cache at the edge, the better the performance for the client. So let's imagine a low latency stream; we're looking at just the first four seconds of it.

J: If this is a low latency HLS stream, all the objects you see appearing on the left-hand side are objects that would appear in our cache in the first four seconds. There are quite a few of them, and I can overlay their sizes — because if we're interested in cache footprint, it's this video segment, the monolithic video segment, the non-parts, that's actually the largest one in the cache.
J: We have duplicate content, which is the video content broken up into parts; then we have the audio — smaller, because audio bitrate is smaller than video — the audio parts, and we have our text files around that. So this is sort of a visual representation of the space this takes up in the cache. Notice the stuff on the left is what a standard latency HLS client would be loading, and the stuff on the right is what a low latency client would be loading — and they're different.

J: So we have duplication of content, and therefore cache inefficiency, at our edge. Now, if we layer DASH on top of this, DASH adds additional objects. There are fewer objects that DASH is going to load, but it turns out their size is very similar to the standard latency client's; you can think of the DASH clients as just downloading the same stuff, but one segment ahead of the standard latency HLS client.

J: So now we have a worse situation: we have three times the footprint, and what we really want from an efficiency point of view is a common cache footprint. How can we drive these three solutions from the same binary object that we might cache at the edge? The secret here lies in a mode of low latency HLS called byte-range addressing. On the left I show a standard media playlist from low latency HLS; every part here has a discrete URL.
J: So I call this discrete part addressing. But there's an equivalent playlist in which every part is described as a byte range within the media segment of which it is a constituent part. These are equivalent in terms of the information they're describing, but the byte ranges are particularly interesting, because the player makes a request for the first part, or the first range — and here this is for a segment at the beginning...

J: So I know the start of the byte range, but I don't know where the part is necessarily going to end, because it hasn't been produced yet. There's a bunch of text here, but basically the end result is that the server needs to keep delivering the parts to you when you request the first part. So an open-ended response is going to burst each part as it becomes available, and that's a very nice consequence, because it's exactly what we want.
J: We want the parts to be sent to us as fast as possible, when they're ready. So, an animation of how this works together. I've got an encoder that's producing segments that are four seconds long. My discrete HLS client will request segment one, part one; the origin is going to block it until that part is ready — these are one-second parts — then it's going to burst it across the wire, which is this little burst.

J: At the same time, the player is asking for the next part, and it bursts it; we get a pattern that looks like that, pretty consistent, with each part being delivered at line speed and not at encode speed. Byte-range addressing is somewhat different: the client asks for segment one as a single request.

J: The origin will block exactly the same way, and at the precise time that it can release that part, it releases it into an aggregating response. So it releases the part, then there's a gap — no more data moves down that channel — and then it bursts the next part when that part's fully available, and the pattern looks like this. What's cool is that these aggregating transfers are exactly what a DASH client needs. So we can serve the HLS client and the DASH client with the same object and the same data flow on the CDN.
J: The startup request flow is pretty interesting. A client could make individual range requests — in fact, I think it would make seven of them to start up the stream — or it can make an open range request: hey, from zero to the end. It would get all these objects, so that single request returns all the parts in sequence, just as if they had been requested individually. This is a nice feature, except we run into a problem.

J: You have two options. You can wait until you've received the entire object and then give a 200 response code with a content length — in this case of 1000, because that's the content length, but you didn't know it when the client asked for it, so you're going to wait before sending the response. This turns out to be the default behavior of the CDN, at least for Akamai today. But the behavior we want from a streaming perspective is slightly different.
J: It's good that this is an IETF meeting, because there's an obscure RFC, 8673, to the rescue. It basically says: if you're a client and you want an aggregating response from an open-ended range request, you should send your last byte position using a magic number, and this number is a very big number — it happens to be 2^53 minus 1, the same as MAX_SAFE_INTEGER in a 64-bit system. The idea is you're saying: hey...
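In practice, the client-side half of RFC 8673 is just a Range header carrying that magic last-byte position. A sketch using only the Python standard library — the host and path are placeholders:

```python
import http.client

LAST_BYTE_POS = 2**53 - 1   # RFC 8673 magic value (MAX_SAFE_INTEGER)

conn = http.client.HTTPSConnection("origin.example.com")
conn.request("GET", "/live/segment0042.m4s",
             headers={"Range": f"bytes=0-{LAST_BYTE_POS}"})
resp = conn.getresponse()   # expect 206 and an aggregating body that the
                            # origin keeps extending as parts are produced
print(resp.status, resp.getheader("Content-Range"))
```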
J: Now, there are all sorts of arguments against using magic numbers; I get that. However, this is a very practical solution we can deploy at scale. In fact, Akamai has deployed this onto our edge network; we're using it for streaming; it's working very nicely, and Apple have also indicated they're happy to support this from their clients as well. So it's a very useful RFC that was written for an entirely different purpose, but turns out to solve this problem for low latency streaming.

J: So the steady-state case for an HLS client, then, is that the client might start up asking for a request, but now, basically every four seconds, it just makes one request for the object. It doesn't need to keep requesting the individual parts, because they're all flowing down as a consequence of that open range request against the segment.
J: So it's interesting that the HLS client only has to make one request per media segment duration. There's also the curious fact — you can win drinks at bars with this — that you can play a byte-range addressed HLS playlist without actually making any byte-range requests. I can explain that to you in a bar sometime, if you bet me. A quick chart here on request rate improvements: the details aren't too important, but depending on the ratio of part duration to segment duration, by using byte-range addressing you can reduce the overall request rate by 37 to 40 percent.

J: The asymptotic max would be 50, and this is useful: if there are a million clients talking to a CDN, having 370,000 fewer requests per second is meaningful. A quick note on bandwidth estimation. Normally we have the bits delivered at line speed — in other words, they're limited by the throughput of the connection. We take the bits divided by the time and we get an estimate of the throughput.
J: The solution is to calculate your throughput over the incrementing, or aggregating, portion of this incoming data flow. That's a little more complex on the player side, so players have to learn a new skill: how to estimate the throughput in the face of this continually aggregating response with intermediate idle periods. But it's possible, and players do it today.
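One way to picture that new skill: only count the time the connection was actually transferring, and drop the idle gaps where the origin was waiting on the encoder. A sketch, with the gap threshold as an assumed tuning knob:

```python
IDLE_GAP = 0.05  # assumed: inter-arrival gaps above 50 ms are encoder waits

def estimate_throughput(samples):
    """samples: list of (arrival_time_s, nbytes) per received data burst.

    Sum bits and elapsed time only across closely spaced arrivals, so the
    estimate reflects line speed rather than the encoder's 1 s/s pacing.
    """
    bits = active = 0.0
    last_t = None
    for t, n in samples:
        if last_t is not None and (t - last_t) <= IDLE_GAP:
            active += t - last_t
            bits += n * 8
        last_t = t
    return bits / active if active else 0.0

bursts = [(0.00, 0), (0.02, 50_000), (0.04, 50_000),  # line-speed burst
          (1.04, 0), (1.06, 50_000)]                  # idle gap, next burst
print(f"{estimate_throughput(bursts) / 1e6:.1f} Mbit/s")  # 20.0
```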
J: It's interoperable with standard HLS clients, low latency DASH clients, and standard latency DASH clients. It does require support of RFC 8673 at each of the components — origin, CDN, and client — to work effectively with mid-segment range requests. These are not the standard requests; they're pretty much edge cases, but they're edge cases that need to be accounted for.

J: Looks like I had my... oh, another summary on low latency streaming. Both DASH and HLS offer solutions that I think are scalable in the two-seconds-and-up range. Both rely on chunked encoding, and they're able to decouple your latency from your segment duration. They both rely on the transfer of aggregating objects. This complicates network delivery, because the content length is not known.

J: So if you have a multi-tiered distribution system, which is essentially a CDN, it can have a hard time knowing the difference between a source that has stalled, where no more data is coming, and a source that's simply waiting to send more data. So timeouts and retries become more complicated than they are when you know the content length — where you know when you either have or don't have the object.
J: RFC 8673 is necessary, and byte-range addressing mode is a cache-efficient solution for combining the two protocols. You may not want to use it because you only care about one protocol; that's up to you. Then I wanted to try to do a live demo. For that, let me stop — oh wait, no, I can just get out of here. Okay, sorry, you're going to see a bunch of screen rearrangement. So, to show you: I've got two web browsers here.

J: On the left, playing DASH — we'll just start here; let me just get it up and running, and I'm going to mute the audio. This is a stream whose origin is in London in the UK, and I'm in San Francisco, so we're playing across a good distance; we're using the Akamai CDN to take the content from London to me here in San Francisco.
J: I've set this player up with a target latency of 2.5 seconds, so you should see here in the chart that the player is honing in on the 2.5 second latency mark. The red line is showing my player buffer; my buffer can never exceed my latency. Now here, on the right — let me just reload this page — is a low latency HLS player coming from the same origin, so this is illustrating that use case I spoke to you about earlier, where we have the same objects coming from the origin and being cached at the CDN edge.

J: The low latency HLS chart here is a little different, because I've got a number of different objects. The segments being used in both of these solutions are eight seconds in duration, so the fact that I'm able to — let me just lower my latency here to 2.5 seconds — the fact that I'm able to achieve 2.5 seconds end-to-end speaks to the decoupling we're able to achieve between the segment duration and the overall latency. My segments here are still being downloaded in eight seconds.
J: That's because they're being downloaded at the speed they're being encoded at, which is one second per second. Notice all these little orange requests: these are all the media playlist updates. My chunks now are 500 milliseconds, so I've got 16 playlists that I update in between every media segment request. But it's working internationally, as you can see here, and I have pretty close sync between these two streams; they're playing independently of one another, and they're playing pretty stably across the CDN. I can go into more depth...

J: I don't think I have time, but since it's a technical audience, if we want to look at the network panel: you can start to see these media segments being produced. Notice each of them, as I mentioned, is eight seconds long, so we have a slow download; the total time is just under eight seconds. We will burst a little bit because of encoding and transfer time, but essentially the segments will all take eight seconds to come across on the wire.

J: Okay, and I'm happy to take questions.
A: That was really great — thanks a lot, Will — and we should definitely take questions. We are at risk of running over, but I suspect that people will gladly trade off time in the discussion of milestones for this. So go ahead, Glenn, you were first in queue.

C: Will, you are the bravest of all presenters I've seen this week at the IETF. My hat is off to you. I'm...
J: So, best practices should never be carved in stone; they should be written in disappearing ink, so that when they change you can ignore all the old ones and just move to the new ones. That's a real danger, literally, in carving anything. We do have standards, which is great: we have a clear LL-HLS spec; we have a DASH spec. What's missing is the application of these.

J: In other words: what are consistent caching rules for HLS? How should CDNs cache the various components, and for what duration? There's some allusion to this in the spec, but the spec talks more about the precise formatting of the content and less about the mechanics of its delivery. So that's where groups like the DASH Industry Forum, which has guidelines for how to do low latency DASH, come in — and I think the SVA, from what Glenn alluded to, is also looking at practical recommendations for these formats.
J: Those, I think, are super useful. They're useful because you can get consistency between CDNs in a multi-CDN environment, and you also get consistent behavior: players react in the same way. A big gap I'm seeing today in the industry is still player behavior in the face of problems. If you have a missing segment, one player will keep retrying that segment three times, then try another bitrate, and then try an alternate URL. Another player will retry it 52 times and then roll over and die.

J: There's really no consistency in failover operation in players, and therefore service operators don't have a consistent quality of service when they have a problem; each player goes and does something slightly different. So there's an issue. I think industry groups taking specs and then conditioning them with guidelines that are updated frequently — and I'm talking at least annually — makes a lot of sense in order to get robustness in these formats. And there are new advances, new additions, coming out every year in the features we can use.
A: Not hearing any, then I think we'll say thanks for now, Will.

A: Great, all right. Now, switching gears just a little bit, we'll go to some of our own working group work and discuss the state of the ops cons document, which I think Spencer is going to take us through. Please.
H: Yes, yes, please. For extra credit, Spectrum is having an outage in our service area, so I'm actually hotspotting off my cell phone.

B: There we go, all right. That looks alright.

H: Okay — I'm saying I have my slides up that I can see. Okay, now I've seen yours. Excellent, thank you. Okay, so let's go ahead and start: we're talking about status report updates. Next slide, please.

H: And I'm actually looking at these on my own screen, so I don't have to wait for Meetecho to paint the next one. The next slide is the outline: to talk a little bit about the status of this milestone and beg for forgiveness — well, I'm sure we can talk more about that at the end of the session. We have some issues in GitHub that have proposed text, and we should move forward to merge those in the next update.

H: Then we have some open issues, and I want to stop for comments and questions, and then I want to talk about a solicitation update. Next slide, please.
H: Okay, so we're behind on this. COVID-19 was a reasonable excuse, but that's so 2020. This is the working group's only adopted draft, so we should probably work on it. We talked at the interim about actually moving the milestone to July, but that probably really needs to be for real.

H: I should also mention, as we do the next slide, that this IETF is pretty early in the year compared to other spring IETFs, and the summer one, the second IETF, is pretty late. So this is a nice opportunity for another interim, which seemed to work well for us. Are you guys on slide four?
H: The one on low latency versus others — we actually do have a PR for that, which we've had lots of discussion on in GitHub, more discussion than we've had on any other issue or PR. It just needs to be updated and applied.

H: Let me skip down to number 31. We have an item in the doc on adding a section on ads, and I thought we had an action item for somebody to talk to Matt and see if they could establish contact with him, according to the minutes from our interim meeting. Did that happen?
P: Yeah, I spoke to Matt, and he's okay with whatever you want to do, so I can make edits if needed.

H: Okay, excellent, so we are good on that one, which is where...

H: ...and so it's probably worth looking at the PR for issue 24, if you could click on that for us. This was — we did a "wildly unpredictable" section after the COVID-19 unpleasantness last spring, and...
H
So
while
we
we
our
last
conversation
about
this,
was
that
the
isg
was
sorry,
the
iab
was
having
a
workshop
on
impact
of
covet
19
on
networks
and
that
actually
happened
and,
and
so
I've
I've
got
new
text
on
this.
H
B
H
The
one
for
24.
yeah,
I
thought
it's
33-
is
the
pr.
H
And
you're
you're
fine
you're,
fine,
you're
fine
and
there
should
be
a
there
should
be
a
there's
yeah.
There
should
be
a
linked
pr
on
this.
H: I'm actually trying to do a better job on that too. So, if you could just display the PR, that will basically show what I was proposing.

H: Yeah — so can you look at all the changes? I've got two commits.

H: Okay, excellent, fabulous. So basically, Sanjay did a presentation on this, and I was referring to the references that he was using in his presentation. I just deleted those and then added the one for the IAB workshop draft report.
H: So if you just scan down to "subsequently", at the bottom of the page — that would show... you know, basically I'm doing a summary here of things that seemed important to me.

H: My default on this would be to go ahead and merge this PR and submit an updated version of the draft that includes it, if nobody has thoughts here. Or, you know, I could wait a week and see if anybody has thoughts on the mailing list.
C: Hi Glenn — hey, Spencer. So, a question here, then: how does this intersect with the difference, during COVID, between the use of a lot of streaming content versus the video conferencing — you know, upstream/downstream — going on at the same time? Do we need to break that out as a distinct piece of it, or do you think that this covers it already?
H
I
I
so
I
think
you
know
so
the
the
the
workshop
report.
Definitely
so
the
the
and
I'll
just
say
this,
and
let
other
people
tell
me
I'm
wrong.
It
seems
like
to
me
that
a
lot
of
the
impact
was
having
a
lot
more
traffic
off
peak,
so
you
know
for
for
for
like
isp
networks
and
stuff
like
that.
H
So
and
that
was
theoretically
a
lot
of
people
doing
video
conferencing
and
vpns
from
home
during
the
day
where
they
might
have
been
doing
that
at
work
or
someplace
else
before,
but
they
they
like.
I
said
they
did
call
out
the
usage
pattern
of
of
applications
as
well.
H
So
I
mean
they
they
did
they
did
they
had
a
good.
They
had
a
good
workshop
report.
I
was
just
trying
to
minimize
the
amount
of
of
information
that
I
was
putting
in
here.
So
since
the
workshop
report
is
a
reference.
C: Okay, that seems legit. You know, it would be fun to be able to see how many people were watching movies off of Netflix while they were also video conferencing for work or school.

H: Cool. I will look at the minutes afterwards, which Jake is taking, and we can come back to this when we are doing an update. So, next — if I can get the next slide...
H: Please. So, especially on the first page, I would recommend people volunteer — and I really am looking for people to basically start a PR, just using the discussion that's already happened on the issue.

H: There are a couple of these on the next page that we really do need somebody to work on — on the issue itself — but the refinements to the TCP idle time discussion... this is, again... and if the chairs think it would be helpful, we could look at the issue and let people see the discussion that happened there.
O: So, Spencer, while reviewing your slides I went and found myself a couple of... I think a couple of these points — I forget precisely which ones — but I am volunteering to take on some of this.

H: Fabulous, fabulous. So can I just...

O: Specifically, the specifics are there on GitHub as far as what I've done now. But, yes, I encourage others as well to take a look into more of this.
H: Cool. Could I ask the people that come to the mic to turn off your video? That would help a lot, because Jake was breaking up for me — but I know where to find him after the meeting. So, the end-to-end encryption one, which is issue number four — conveniently, these are starting at the bottom. Is it easier if we start at the top and go down, or what's easier for you, Kyle?

H: If people are looking at the slides, they're in a certain...

A: ...order, yeah. And I think I have a question about how much of this we're going to resolve here, versus how much it's important here to raise awareness of these issues and get them resolved afterwards.
H
Right-
and
so
I
I
was
look,
my
theory
was
that
if
we
had
text
on
it,
we
could
we
could
resolve
those
issues
here
and
just
apply
and
just
apply
things,
but
these
are
the
issues
that
we
don't
have
text
on.
Yet
we
just
have
a
lot
of.
We
have
a
lot
of
discussion
which
a
lot
of
it
includes
suggested
text
in
the
in
the
in
the
issues.
Does
that
make
sense.
A
Largely,
although
I'm
I'm
a
little
bit
concerned
that,
apart
from
the
fact
that
I'm
concerned
we've
lost
kyle.
A
I
I
don't
know,
I
think
I
would
take
suggestions
from
other
people
who
are
trying
to
who
will
likely
be
offering
texts
as
how
you'd
like
to
process
your
way
through
this,
as
we
wait
for
kyle
to
hopefully
come
back
online,
or
else
I'm
gonna
have
to
figure
out
how
to
share
slides,
which
could
be
exciting.
H
I'm
doing
pretty
well
with
audio
in
and
out
of
about
30
kilobits.
Anything
beyond
that,
I'm
going
to
have
issues
with.
Oh,
it
looks
like
kyle's
coming
back
in
hope,
so
there
we
go.
A
So
I
we're
we're
we're.
We
are
where
we
were
when
you
left
us,
and
I
asked
people
if
they
would
like
to
express
how
they'd
like
to
process
through
this
I've
not
heard
anything
on
audio.
Although
I
observed
mike
english
comments
in
the
chat
about
that,
only
one
is
still
unassigned
mike.
Would
you
like
to
come
to
the.
Q
Yes,
so
sorry
I
missed
the
second
page
in
the
slide
deck
there.
So
there
are
a
couple
more
issues.
I
was
just
kind
of
trying
to
sift
through
because
I
know
jacob
said
he
had
self-assigned.
A
number
of
those
so
looks
like
14,
22,
6
and
32.
H
Let's see, these are the ones... these are at least, yeah, 14 and 26: we don't have a really full amount of text in the issue, so those actually need somebody to work on them.

Q
Okay. So it looks like 32 is still unassigned, and that could be a cool one: adaptive bitrate at low latency.

Q
Okay, so I think 32 was opened by... I see him on the call; I don't know if he wants to comment on that.

M
So it's worth mentioning, because players don't have a good solution for it right now. It's all over the place.
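As a hedged illustration of why that is hard, and not anything taken from issue 32: with low-latency chunked delivery, each chunk arrives roughly as fast as the encoder produces it, so the naive throughput estimate most players rely on saturates at the encoded bitrate instead of the available network rate. All numbers below are assumptions.

    # Illustrative sketch: naive throughput estimation breaks down under
    # low-latency chunked transfer, because the encoder (not the network)
    # paces the delivery of each chunk.

    def naive_throughput_kbps(bytes_received, seconds):
        """Classic ABR estimate: bytes over wall-clock download time."""
        return bytes_received * 8 / seconds / 1000

    # A 500 ms chunk of a 3 Mbps rendition is ~187.5 KB and trickles in
    # over ~0.5 s even on a 50 Mbps link, since it is produced live.
    chunk_bytes = int(3_000_000 / 8 * 0.5)
    print(naive_throughput_kbps(chunk_bytes, 0.5))  # ~3000, not ~50000

    # The player never sees headroom to switch up a rendition, which is
    # one reason low-latency ABR decisions are all over the place.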
H
If you could respond to that, Jake; Luke was breaking up for me because he was also doing video, sorry.

O
Yeah, so I guess it seems to me like the right action here is to reference the right references. I'm thinking actually some of Will's slides might be a good source; they're always...

O
Sorry, I said "ooh", right. I think Will also... I was actually kind of thinking about this when I asked my question of Will about where you would look, the right places to go for sort of best practices and what should be written down, and I think his answer was very informative and helpful, although complicated.
O
So I think the right thing to do here is to try to capture some of this through references to good sources, rather than trying to make it... you know, with maybe a little bit of text about, well, this is going to be complicated, so you should do research. Because I really have got to think that making concrete claims in this doc is going to be out of scope, right?

O
Sure, I will take that one too, unless... I don't know, like, can Ollie... do you want to do any work here on...
O
Okay, yeah, I don't mind trying to find the good references there and put in a paragraph about this. I'm not sure it's possible to do a good job here, but I can take that and try and do it. Again, you can assign it to me, sure.

H
Okay. And we also needed help with number 14, the standard metrics one, unless you've... no, that would not be assigned at this point.
O
I did actually self-assign that. I kind of remembered Aaron, maybe at one of the prior meetings, commenting that we probably have some material somewhere, and if I can just go look it up and find out where... Any recommendations you all have to send me on that I would find helpful, but I can go annoy people as best I can. Again, I'm not sure it's going to be good, but I will try to put a strawman together, at least.
H
Yeah, honestly, I will say I'm kind of expecting that we're going to go through some stuff where we say, you know, we need some more material here and we need some material taken out. But just getting an update of the document that the working group can work on would be great.

H
I believe yes, okay. And if somebody... you know, if nobody else picks number four, I think I said I could do that one, if nobody else signs up for it. But again, if I take a shot at the text and you end up sending comments on it, you might as well have written the text anyway.
H
Yeah, but, you know, ideally the working group covers the things that we don't understand. And normalizing the white space: we just waited for a while on that. That's...

O
Did we have any further updates on the ad stuff? I'm not sure about what's there... if I remember it, in the comments on this, it was...

H
I understood Mike right to say that he could do updates on that, if needed.
Q
Yep, yeah. So to clarify, Matt said do whatever you think is best, and I offered to help with that. So let me know what you think we should do with that PR; I'm happy to open another PR with, you know, largely the same content and whatever changes we want to make.

H
It sounds like we're bringing in stuff from the SVA already; were we talking to Mile High Video also? Is that right?
A
I'm not seeing him unmute, and we do need to move on, so I think that we probably need to...

C
One thing I'll say about Mile High Video: Mile High Video is a conference, versus a group that itself produces documents, like the Streaming Video Alliance. So I mean, we can talk to them, but I'm not sure that there's a Mile High Video entity to give feedback to directly, if you see what I mean.
A
Yeah, I appreciate your volunteering to organize it. Thank you. Moving right along... or rather, it's yours, Spencer. Yep, thanks. The next item on our list is the update on the, hang on, I have it right here: Media Operations Use Case for an Augmented Reality Application on Edge Computing Infrastructure.

A
We had scheduled 20 minutes for that and then 15 minutes for milestones, and apparently I'm completely incapable of math, so I made an agenda that's longer than our session. So, Renan, please do step up and take us through your work. If there's any way you can do it more quickly than the time allowed, I would certainly appreciate it.
B
Can you please drive the slides for me? Okay, will do, and if I can find them... here they are.

I
Please. We have added two new subsections to the draft. We have also made editorial changes as appropriate throughout the draft. I will talk about these changes in the next few slides. Next slide.
I
We got very helpful comments, suggestions and questions at the last working group meeting and have incorporated them as updates to the draft. The updates are along these two dimensions. Firstly, we clarify in the draft that it is the heat and battery drainage problems in AR devices that require offloading of computationally intensive tasks to the edge.
I
This requires tracking natural features and developing an annotated point-cloud-based model that is then stored in a database. To ensure that this database can be scaled up, techniques such as combining client-side simultaneous tracking and mapping with server-side localization are often used. Once the natural features are tracked, virtual objects are geometrically aligned with those features.
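A minimal sketch of that hybrid split, on assumptions of my own rather than anything the draft specifies: the device runs cheap frame-to-frame tracking, while an edge server occasionally localizes a keyframe against the annotated point-cloud database and returns an absolute pose that corrects the device's drift. Every name here is hypothetical.

    # Hypothetical sketch: client-side tracking, server-side localization.
    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple  # (x, y, z)
        rotation: tuple  # quaternion (w, x, y, z)

    class ClientTracker:
        """Runs on the AR device: cheap relative tracking per frame."""
        def __init__(self, initial: Pose):
            self.pose = initial

        def track(self, frame) -> Pose:
            # Match features against the previous frame and nudge the
            # pose (details omitted); the estimate drifts over time.
            return self.pose

        def apply_correction(self, absolute: Pose) -> None:
            # Replace the drifted estimate with the server's fix.
            self.pose = absolute

    def server_localize(keyframe, point_cloud_db) -> Pose:
        """Runs at the edge: global match against the stored model."""
        # Expensive matching against the annotated point cloud would go
        # here; a placeholder absolute pose is returned instead.
        return Pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))

Calling server_localize only every few dozen frames, or when tracking confidence drops, is what would keep the heat and battery cost on the device down.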
I
This is followed by resolving occlusion that can occur between virtual and real objects. The next step for the AR application is to apply photometric registration. This requires aligning the brightness and color between the virtual and real objects. Additionally, algorithms that calculate global illumination of both the virtual and real objects are executed.
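To make the photometric registration step concrete, one deliberately naive form of it is to match the virtual object's intensity statistics to the surrounding real scene. This is an illustrative assumption on my part, not the method the draft prescribes.

    # Illustrative only: align a virtual object's brightness and contrast
    # to the real scene by matching mean and std deviation per channel.
    import numpy as np

    def match_photometry(virtual, scene):
        """Scale and shift virtual pixel values toward the scene's stats."""
        out = virtual.astype(np.float64)
        for c in range(out.shape[-1]):  # per color channel
            v_mean, v_std = out[..., c].mean(), out[..., c].std()
            s_mean, s_std = scene[..., c].mean(), scene[..., c].std()
            out[..., c] = (out[..., c] - v_mean) * (s_std / (v_std + 1e-8)) + s_mean
        return np.clip(out, 0, 255).astype(np.uint8)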
I
This entails dealing with registration errors that may arise, ensuring that there is no visual interference, and finally maintaining temporal coherence by adapting to the movement of the user's eyes and head. Next slide, please. So, questions, comments and suggestions are invited from the working group to improve the draft, and does the working group think that the draft is ready for...
F
Just one comment. I liked the draft; I thought lots of the stuff you had was really good. One thing I think would be well worth talking about is the latency budget for this computation, particularly from the time of a head rotation to the time the video needs to be updated.

F
I mean, I see lots of numbers, but I think when that starts dropping below about 150 hertz, people start feeling sick. You can get different numbers from different researchers or whatever, but I think that aspect of it would be really useful information to have in here, because it really does constrain how close to the edge this off-board rendering has to happen.
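A quick back-of-the-envelope version of that constraint; the render and codec costs here are assumptions of mine, not figures from the discussion. At a 150 Hz update rate, the whole motion-to-display budget is a single frame interval, and whatever the pipeline does not use is all that remains for the round trip to the edge.

    # Back-of-the-envelope latency budget for off-board AR rendering.
    update_hz = 150
    frame_budget_ms = 1000 / update_hz  # ~6.7 ms end to end
    render_ms = 3.0                     # assumed remote render time
    codec_ms = 2.0                      # assumed encode plus decode
    network_budget_ms = frame_budget_ms - render_ms - codec_ms
    print(f"frame budget {frame_budget_ms:.1f} ms; "
          f"{network_budget_ms:.1f} ms left for the network round trip")

    # Under these assumptions only ~1.7 ms of RTT remains, which bounds
    # the fiber distance to on the order of 100 km: hence the edge.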
A
All right. So thank you for the update, and I guess at this point the key question is: what do people think about adopting this as a MOPS working group item?

H
Something started ringing just as I raised my hand. I think that this document is in fine shape for the working group to work on, so I would support adoption.

A
This is not going to be definitive; yeah, this is not going to be definitive, but I want to get a sense of where the people who are here are at.
A
Let's see. And I think, if I have succeeded in doing this, and there's very little probability of that, we now have a poll, a hand-raising opportunity, for "should we adopt the AR use case document as a working group item?", and people are already carrying it out. Thank you very much.

A
So if you are in favor of it, please raise your hand, and I will give it about another 10 seconds. Obviously, if you're not in favor of it, please do not raise your hand. After we have done this, I will ask if there are further comments.

A
Okay, so nine of the 28 participants have said that we should adopt it. Do we have any objections to taking that proposed approach to the mailing list for confirmation, or are there other things we should discuss at this point before we do that?
A
Okay, and I now have 10 raised hands out of 28. I'm not hearing any objections to that, so I think my takeaway is that there is support in this room for adopting it. We will take it to the mailing list for confirmation and take it from there. How do we do that, Kyle?

B
So Jake asked a question in the chat: do we traditionally ask how many have read it? Also, I've heard in other working groups the question asked: who is going to help work on this?
B
But
I
I
don't
know
that
we
need
to
that.
We
need
to
actually,
like
you
know,
run
a
poll
here,
but
but
I
I
would
be
interested
to
hear
from
people
who
have
you
know
who
would
who
find
this
interesting
enough,
that
they'd
be
willing
to
work
on
it
willing
to
help
out.
A
Yeah,
so
I've
asked
a
slightly
different
question
and,
and
I've
asked
about
be
willing
to
review
and
contribute
text,
because
I
think
it's
a
very
fair
point,
and
so,
if
you
are
willing
to
review
and
or
contribute
tax,
please
raise
a
hand.
If
you
are
not,
please
do
not,
and
the
good
news
is,
I
I
see
more
hands
going
up
than
there
are
authors
of
the
document.
Current
document
I'm
seeing
somebody
not
raising
their
hand.
A
Yes,
although
I
would
like
to
know
if
the
person
who
did
not
raise
their
hand
is
willing
to
say
there
is
no
list
of
names.
Sadly,
if
if
the
not
raising
a
hand
is
a
matter
of
don't
have
time
or
outside
of
area
of
interest,
that's
fine.
If
it's
an
objection,
please
please,
unlike
and
share
your
objection.
D
A
A
So
thank
you
very
much
renin
for
presenting
and
we
will
take
to
the
list
the
formally
take
to
the
list.
The
question
of
if
we
are
adopting
this
as
a
working
group
item
phrased
in
the
assumption
of
yes,
cool
cheers.
A
Great, thank you. Which takes us to the last item on our agenda, which is a discussion of our milestones, and Kyle, I think you have that.

A
Great, thanks. Cool, all right. So I think the next slide is the set of milestones.

A
Great. So these are the current charter milestones, some of which are even done (yay), and some of which have become overrun by events. We've discussed this a little bit at the last IETF meeting and notionally on the list. So, next slide please.
A
So I would, as a friendly amendment to this text, suggest that we align it better with the document title. I've highlighted in yellow the draft document on the SVA's reliance on IETF protocols, because that one...

A
We had a presentation, so Glenn and/or Sanjay, would you like to comment on whether you think there's a document here, or whether we should be pushing it off, or what?

C
It's just... I'll tell you, the forcing function is these discussions, which helps. It's just a very tough priority to make, given all the other things that are going on, but we'll get there.
A
Okay,
so
then
I
I
would
suggest
it
sounds
like
we
should
sort
of
push
the
sva
document
milestones
off
by
sort
of
one
meeting
cadence
so
make
the
make
that
a
july
milestone,
and
also
the
the
sympty
document
push
that
we
were
going
to
revisit
whether
that
one
still
made
sense
in
july.
I'm
I'm
thinking.
Maybe
we
should
be
revisiting
that
in
november,
if
that
makes.
A
To
me,
okay,
not
hearing
any
other
comments,
although
certainly
open
to
other
comments.
A
I'm
I
I
appreciate
that
we're
starting
to
get
milestones
that
reach
beyond
our
committed
life
span,
namely
that
in
november
of
this
year
the
isg
is
slated
to
review
whether
or
not
we're
rechartering
or
closing
the
mops
working
group.
I'm
not
entirely
sure
where
that's
going
to
sit
given
given
me
that
that
it's
been
a
crazy
three
quarters
of
of
the
mop's
working
group
lifetime
will
have
been
insane
eric.
Perhaps
you'd
like
to
comment.
R
Yeah, I mean, I largely agree with that. You know, it looks like we're doing stuff, things are happening, there is discussion, and there's been no real drama or, you know, people punching each other, so I think that it makes sense for it to carry on. There is always the risk that, you know, the IESG does something crazy because somebody decides they don't understand this sort of discussion type of thing, but I think the chances of that are really small.
A
Thank
you
for
that,
so
I
will
take
that
as
at
least
firm
enough
that
I
won't
worry
too
much
about
milestones
that
appear
after
that
point
in
in
in
the
charter
lifetime
and
we'll
just
take
it
as
it
comes.
So
what
I
propose
is
that
we
then
I'll
share
this
with
the
list.
A
One
last
time
give
it
like
a
two
week
discussion
period
and
then
actually
update
the
milestones,
and
it
will
appear,
as
shown
here
with
the
exception
that
the
sva
and
simply
documents
will
be
one
meeting
cycle
later
and
having
said
that,
I
know
that
spencer
alluded
to
the
fact
that
this
this
yeah,
the.
A
July timeframe for last-calling the document, the ops cons, is kind of looming, but I think it's worth keeping it in that timeframe, even if the document hasn't progressed quite as much as aspired. Spencer?

H
Yeah, the last interim we had was mostly a drafting session on that document, and it was very helpful. So, like I say, forcing functions force.
A
Okay, so the interim that you organized to move that document forward will be for July, and we'll last-call it in the July timeframe. Cool. Any other comments or questions?

A
All right. Well, with that, I would like to thank Jake for his note-taking efforts, and thank everybody for coming out to participate and engage in the meeting. Thanks to all our people who were willing to be agenda victims. Great.