From YouTube: IETF111-MOPS-20210727-1900
Description
MOPS meeting session at IETF111
2021/07/27 1900
https://datatracker.ietf.org/meeting/111/proceedings/
A: And knowing this area, there is a non-zero chance that the power goes out here. Hopefully that won't happen. If it does, well, we'll cross that bridge when we come to it.
A: Yes, okay, I guess it's three o'clock, so we can get started now. This is MOPS, Media Operations. If you are in the wrong place, you may have clicked on the wrong link, but if you are intending to be here, welcome. Leslie will be along shortly, I assume. The Note Well.
A: You should all be fairly familiar with this by now; if not, take the opportunity to review it. If you type "IETF note well" into Google, it will bring up something that looks very much like this, and you can peruse it at your leisure. The TL;DR is that any contributions that occur at an official IETF meeting or on IETF mailing lists and whatnot are considered IETF contributions.
A: So here's our agenda. Time for some agenda bashing, but we have several presentations today, I think five.
A: I've got your slides, don't worry. Believe me, I would have bugged you if there was something missing. Oh, you did? Are you saying that you weren't sure whether you asked for a slot or not?
C: I thought I was... I hadn't looked at the agenda, I hadn't finished reading it, and so I didn't see the second slot, and I thought ten minutes. Just a minor point of confusion. Sorry about that.
A: Anyway, we just did the agenda bashing, so I guess that's done.
A: We have done the Note Well. The next two things are, of course, our favorite part of any working group session, which is that we need a Jabber scribe and a minute taker.

A: I'm perfectly happy if multiple people contribute to the minutes. If, for instance, speakers or people who are otherwise participating in the meeting can take minutes when they're not otherwise distracted by either presenting or contributing, then as long as we have multiple people we'll make sure to cover everything important.
D: You and I, Kyle, had some discussion about what it entails to take minutes, and we acknowledged that having the video stored on YouTube really does take away the pressure of capturing every single word that somebody says. But it would be helpful, particularly because the automatically generated transcript in those videos is maybe not quite up to keeping track of technical jargon and whatnot, if we had hands who could help capture the high points, and in particular capture points around particular technology terminology.
A: Spencer suggested that he is willing to take minutes during Jake's slot, so thank you, Spencer; I think we're covered now. We just need a Jabber scribe, although Jabber ends up in the chat here, right? So it's just if somebody wants something announced at the mic. To be perfectly honest, I could probably take that, so I will anoint myself Jabber scribe.
D: Okay, all right, I'm trying to copy and paste the URL... here we are. I put the URL for the CodiMD stuff into the chat in case anyone else doesn't have it handy. And we are not doing milestones yet.
F: Yeah, I was able to join. Thanks for all your help getting it set up. Great.
D: An awful lot of interesting considerations, and I thought it'd be interesting to have Brian give us sort of a flyover of what some of the aspects of 5G are that are of particular interest when it comes to delivering video, and any lessons learned along the way. So with that, take it away, Brian.
F: Okay, thank you, Leslie. So yeah, as Leslie said, we've been working on this document at the SVA for probably the better part of a year. We've had clearly some very smart people working on it, from vendors through to the network operators themselves, through to the caching companies, and even the public cloud providers have contributed. So hopefully we've got something that's fairly good.
F: So really, if you ask people what the key drivers for streaming media are, in many cases the answer is always "more". So it's really more bandwidth: they need more and more capacity to deliver better-quality content. They need less latency, to actually make that experience a little bit more responsive, and they really need massive-scale access.
F: That goes from mobile devices through to even the connected car. If you look at some of the car companies, they are actually building out their own infrastructure to deliver entertainment and gaming, really as part of the package that you get, or can subscribe to, when you purchase a car from them. And so there are a number of things within 5G that really help here: there's all that additional spectrum that 5G brings, there's really enhanced radio connectivity, and then there's a whole 5G cloud and 5G video edge stack that can actually help contribute to improving the experience for users. And that's some of the stuff that we did look at within the document. Next slide.
F: Spectrum really was available to deliver these new services. And if you look at the increase in spectrum, or the increase in bandwidth that comes with spectrum...
F: It's actually quite interesting. I don't like putting numbers to a particular device or a particular session, but if you look in general, with LTE today you're probably getting about one gigabit per second delivered per square kilometer. When you look at the sub-1 GHz bandwidth, which is effectively the LTE spectrum, for 5G that increases by ten times, so it jumps to 10 gigabits per second. 5G centimeter wave, or sub-6 GHz, bandwidth increases that by ten again, so now you've got 100 gigabits per square kilometer to service those customers. And then when you go up to 5G millimeter wave, you're now looking at at least one terabit per second, potentially even more in some cases. It depends who's selling this stuff, but in some cases they're claiming up to 10 terabits per square kilometer of bandwidth.
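The tiered capacity figures above reduce to a chain of rough factor-of-ten multiplications. A tiny sketch of that arithmetic, using only the illustrative numbers quoted in the talk (real deployments vary widely):

```python
# Illustrative per-square-kilometer capacity figures from the talk,
# in gigabits per second; each spectrum tier is roughly a 10x jump.
lte_gbps_per_km2 = 1                     # LTE baseline: ~1 Gbit/s per km^2
sub_1ghz_5g = lte_gbps_per_km2 * 10      # 5G in sub-1 GHz (LTE-like) spectrum
sub_6ghz_5g = sub_1ghz_5g * 10           # 5G centimeter wave (sub-6 GHz)
mmwave_5g = sub_6ghz_5g * 10             # 5G millimeter wave (lower bound)

for name, gbps in [("LTE", lte_gbps_per_km2),
                   ("5G sub-1 GHz", sub_1ghz_5g),
                   ("5G sub-6 GHz", sub_6ghz_5g),
                   ("5G mmWave", mmwave_5g)]:
    print(f"{name}: ~{gbps} Gbit/s per km^2")

# mmWave at ~1 Tbit/s per km^2 is the "thousand times" LTE figure
print(mmwave_5g // lte_gbps_per_km2)  # -> 1000
```

The 10-terabit claim quoted at the end of the paragraph would be another factor of ten on top of this lower bound.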
F: That's really the capacity available for applications. And so when you begin to look at 5G, and at the millimeter-wave deployments that are going on, there's effectively a thousand times more capacity than you're seeing with the traditional LTE that we have at the moment. And again, millimeter wave, at least for me...

F: The entertainment side really seems to be the most exciting, because potentially you can get up to 10 gigabits per cell site (obviously shared). You can get extreme low latency, less than four milliseconds round-trip time on the network, and for things that are interactive, like XR and VR, that makes a huge difference in the user experience.
F: So additional spectrum really won't do this on its own; you need some enhanced radio connectivity to go along with it, and really, as part of the 5G radio network, a lot of the connectivity has been redesigned. I mentioned, pardon me, millimeter wave, and again, millimeter wave is really deployed as a series of small cells within a mesh, effectively every 250 meters in urban areas, and it really forms this dense network, or dense mesh.
F: That provides exceptional connectivity, and I always like to think of it as something that sort of bridges Wi-Fi with cellular services. From the construct of the mesh, it's more like a Wi-Fi network, even though it's delivered really as part of the radio network from the content, or communication, service providers.
F: MIMO, or multiple-input, multiple-output, on the antennas is another big thing. LTE already supports MIMO, but what they've done is really increase the number of antenna ports on these antennas, instead of supporting only about 8 to 12 antenna ports...
F: ...that's not achievable, because you can have so many devices and applications connecting to these networks over the same site that it makes it worthwhile, even on a television or a computer or whatever, to actually build in a 5G modem, or 5G connectivity, and let that device connect directly to the network itself. So it massively improves the number of devices that the 5G network can support.
F: Clearly, one of the drawbacks is things like cross-interference: you've got a lot of density with massive MIMO, and you can get crosstalk in all those channels. But they have implemented beamforming, similar to what you would get with the Wi-Fi network within your home, so the network now orchestrates packet delivery.
F: Finally, clearly today with LTE it's like using a walkie-talkie: you can either transmit or receive on that connection that you have with the cell site. With 5G it now becomes full duplex, so you can transmit and receive on the same frequency at the same time, and so it really optimizes the delivery, particularly of interactive content, where it's actually required that you can do both.
F: So the combination of spectrum plus all these new technologies actually gives you a lot more bandwidth and a lot more capabilities for delivering content services. Next slide. One of the other key things, and I've actually worked quite a lot on this at both Ericsson and Nokia, is this concept of a converged 5G edge. Nokia obviously has their own flavor of it; they've been the one really championing the ETSI standard on MEC, or multi-access edge compute.
F: The large public cloud providers are also providing solutions in this market. So AWS has a Wavelength infrastructure that's already deployed with a number of different operators. And effectively, with this converged edge, once they get this stuff deployed there are going to be thousands of these sites, and each one of these sites will have from one to five milliseconds of latency, which matters particularly for applications like XR and VR that are highly interactive.
F: It really optimizes the experience, and it really makes sense to begin placing these functions, these workloads, at the edge of the operator network, to provide a much better quality of experience for the user. Again, Amazon and Azure are already deploying in the aggregation network within the operators; you have multiple applications sitting at the required near-real-time performance, and again, there are thousands of sites currently being rolled out.
F: A key part of this workflow is that a lot of the key functions will actually migrate to the edge. Functions that require massive data throughput, that require low latency, that require data sovereignty, are all moving down to the edge of the network, again providing low latency, providing massive scale, letting a lot of devices actually connect and utilize those services sitting at the edge. And there are a lot of software-defined video functions that are migrating from the device up into the edge cloud.
F: A great example of that would be something like XR devices. Clearly they want to make these devices as small as they can, they want to make the batteries last as long as they can, they want to make them all-day wearable, they want to make them as cool as possible. So having a large headset and walking around with it is something that people are not going to do.
F: They want to minimize it and make it really look more like a pair of glasses. So effectively moving a lot of that functionality out of the device and into the edge really makes a lot of sense: having the edge render a lot of that video, having the edge make a lot of the decisions and perform a lot of the compute that actually goes with those workflows for delivering XR content. Then streaming the final result down to the user device, or the user wearable, where it's simply rendered, really begins to optimize those workflows. And I think when you look at this, particularly when you look at the rollout of XR, more and more of that functionality is actually being moved up into the edge.
F: To go along with that, we actually have a new 5G video stack that's being contemplated, being deployed and being delivered really now. If you look at Nokia, they actually have their own 5G video stack that they're pushing with a lot of their deployments, and really they've taken a lesson from the public cloud providers. If you look at the lower levels, you have platform as a service.
F: You obviously have ETSI MEC, and there are a number of private cloud and public cloud deployments that are actually happening to support these video functions. Above that, at the next layer, you have a set of platform services. These tend to be big-data services, around analytics or orchestration or inference; applications that are already being provided by people like Azure or AWS, or, for analytics...
F: ...open-source applications like Hadoop, but they're really being deployed as part of the platform service for this 5G video edge stack, and effectively being leveraged by all the applications that are sitting on top of that. So typically for analytics there will be different sets of Hadoop instances or databases that are actually collecting this; typically people will use the same instance, it'll be multi-tenant, and they'll be able to extract and analyze their own data as a part of that. Above...
F: ...that are really the applications, and if you look at the video applications, they're becoming more and more modular: to contribute video, to distribute video, even to do things like time shift or advertising for monetization.
F: The architecture is really changing in terms of how this stuff is being deployed. There are really no more large monoliths driving this; people are migrating more and more towards disaggregated applications, really looking at small capabilities that are actually orchestrated. And again, it's very flexible.
F: It's very future-proof, and they're actually beginning to leverage things like CI/CD and DevOps, and other things I would almost describe as culture changes, to really push a lot of this stuff out. Arguably Amazon's been doing this for some 15 years, but I think for the telcos, and really the CDN providers, this is really the first foray into this infrastructure. And I think, when you begin to look at it, the model is there.
F: It's actually been proved by a number of these large cloud providers, and I think ultimately it really gives the streaming vendors so much flexibility to deliver services, to customize and tailor services, to provide those services really at will, and to be able to scale them at will. Effectively it really enables this vision of the 5G video stack and the 5G cloud, and sort of ties them together.
F: So I think ultimately, when you see this rolled out, and there are some pieces of it rolled out today, it's going to be highly elastic, it's going to be highly scalable, it's going to be available at will. And particularly for the content owners, they're effectively going to pay for what they use, so they don't need to actually have dedicated boxes.
F: They don't need to ship them into an operator infrastructure, and they don't need to be paying for rack space, for cooling, for power on a 24/7 basis. They're actually going to take bits and pieces of this infrastructure and use it really as they require it.
F: Next slide. So there are two other important things to consider when you begin to look at these video applications. One is vertical scale: clearly there's no single right place in this edge-to-cloud stack to run a video application. It really depends on the functionality.
F: If you're looking at applications such as origin services or content creation, it really doesn't make a lot of sense to put those at the edge. So you're probably going to see those deployed more at the public edge, the likes of Lumen or Akamai, or even in the public cloud and the hyperscalers, so Amazon and Azure again. And so when these things are being deployed...
F: What we're actually seeing is these sort of cooperative, distributed functions throughout the network: functions that are doing highly personalized content sitting down at the edge, while up in the public cloud the providers are still doing a lot of the storage, a lot of the archive for content.
F: All of the origin servers are still sitting in there, and effectively these applications are really sharing their state and their analytics; they're sharing runtime information to make it a little bit easier to scale a little bit better across the stack. And then it really allows, certainly in times of peak workloads, scaling by provisioning up the stack.
F: So if you don't have enough resources sitting at the edge of your network, and potentially a service can actually absorb a little bit of latency, you can now begin to deploy those, or redeploy them, in the core network, or in the public edge, to provide massive scale. And so that's certainly one of the things that I think is worthwhile considering for video applications, and it's certainly, I think, one of the things that's going to scale globally across multiple operators.
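That vertical-scale placement rule, serve from the edge when the latency budget demands it, spill over to the core or public edge when the edge is full and the service can absorb more latency, can be sketched as a toy function. The tier names and latency figures below are illustrative assumptions, not from any real orchestrator:

```python
# Hypothetical round-trip latency each tier can offer, in milliseconds;
# the 4 ms edge figure echoes the sub-4 ms RTT quoted earlier in the talk.
TIER_LATENCY_MS = {"edge": 4, "core": 20, "public-cloud": 60}

def place(latency_budget_ms, edge_has_capacity=True):
    """Pick the most central (cheapest, most scalable) tier that still
    meets the workload's latency budget.

    If the edge is full but the workload can absorb more latency, it
    spills over to the core or the public edge, as described in the talk.
    """
    for tier in ("public-cloud", "core", "edge"):  # most to least central
        if TIER_LATENCY_MS[tier] <= latency_budget_ms:
            if tier == "edge" and not edge_has_capacity:
                continue  # no room at the edge; budget was not met elsewhere
            return tier
    return None  # budget unachievable anywhere

print(place(100))                          # bulk origin/archive work
print(place(4))                            # interactive XR/VR
print(place(25, edge_has_capacity=False))  # peak-load spillover to core
```

The real decision would also weigh data sovereignty and cost, but the shape of the trade-off is the same: latency pulls functions down the stack, scale pushes them up.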
F: The other thing to consider is horizontal scale for 5G, particularly when you're looking at mobile applications. I'm not going to go too deeply into this, because I've done another presentation on it and it sounds like opening a can of worms, but there are really a number of different scenarios that content owners have to consider.
F: Inter-edge, and again, if you look at Nokia and Ericsson, this is something that they're working on today, is where effectively you're moving from cell site to cell site, but you're also moving from edge pop to edge pop. And so one of the key things there is to be able to actually take the application, to take the session state that you have, to take the data...
F: ...that's actually archived with that session, and move it from one edge pop to another. And there are really a number of ways that people are looking at doing this. One of the concepts within MEC is that you take that whole object that the user is interacting with, complete: you have the runtime, you have the state and you have the data, and you move that whole thing as an object. If you look at functions like Lambda and serverless, effectively that data is archived.
F: You can actually just spin up that Lambda function on the other edge pop and import the data. There's a little bit of a hit there, because obviously it's a cold start, but effectively it's another way of doing it, and it's something that's actually being used today, if you look at serverless compute in a lot of the large cloud providers. And then the third one here is really this inter-edge, or a different version of inter-edge...
F: ...where, if I move from one edge pop to another, that edge pop may not necessarily be cloud-enabled; there may not be an application there that's available, or compute services that are available, to actually run my application on.

F: That session continues, and as a user I really don't experience much of a hit as I'm moving from edge to edge. So these are really some of the things, particularly around mobility, that need to be considered, and again, as I said, there are a number of different ways to do this.
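The "move the whole object" idea above, checkpoint runtime state plus data at one edge pop and rehydrate it at the next, can be sketched with plain serialization. The pop names and session fields here are hypothetical, and a real system pays the cold-start penalty Brian mentions:

```python
import json

# Hypothetical per-pop archives standing in for edge object storage.
archives = {"pop-east": {}, "pop-west": {}}

def checkpoint(pop, session_id, state):
    """Archive the complete session object (state + data) at a pop."""
    archives[pop][session_id] = json.dumps(state)

def rehydrate(src_pop, dst_pop, session_id):
    """Move the archived object to another pop and resume from it.

    In a serverless flavor of this, dst_pop would spin up a fresh
    function instance here (the cold start) and import the blob.
    """
    blob = archives[src_pop].pop(session_id)
    archives[dst_pop][session_id] = blob
    return json.loads(blob)  # the runtime restarts from this state

checkpoint("pop-east", "s1", {"position_s": 1412.5, "bitrate": "4k"})
resumed = rehydrate("pop-east", "pop-west", "s1")
print(resumed["position_s"])  # playback continues where it left off
```

What the telecom and cloud camps disagree on is mostly who triggers this move and how the state is packaged, not the checkpoint-and-rehydrate shape itself.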
F: I mean, clearly the large telecom vendors and the radio vendors have a slightly different approach than the cloud vendors, but I think ultimately what you're going to see is both coming together and providing a solution that leverages the best of both, I would say: being able to use serverless in conjunction with the core networks and the eNodeBs as a control plane and the mechanism to really drive that. Next slide.
F: The other interesting thing about 5G is that with LTE, and implementations before that, you effectively had appliances that you would go to a telecoms vendor and buy. That's changing quite a lot within 5G. The infrastructure itself, both for the network and for the edge cloud, is becoming actually very agile; it's becoming virtualized, and everything is now being driven through network function virtualization. And so there are really three key parts to deploying that: there's the actual infrastructure, or the NFVI, that actually sits below.
F: It's generally comprised of COTS servers, off-the-shelf storage and off-the-shelf switches, virtualized really as appliances, so that you can now take your platform services or your application functions and begin deploying them on top. It's driven by this management and orchestration layer that provides a degree of self-organization around that. So basically...
F: ...and I think I talk about this in another slide, but basically the whole premise behind 5G networks is that effectively it's going to be based on an NFV platform and it's going to be self-organizing, so you're going to have an orchestration layer that actually does all the lifecycle management. It's going to distribute functions, and it's going to scale those functions at the edge of the network or within the network.

F: So again, it's just going to be more like an Amazon instance, rather than what you've traditionally got from the application vendors. Next slide.
F: And again, I mentioned a little bit about networks being intelligent and self-organizing. If you look at a slice through these networks, you obviously have your orchestration layer that's going to organize the 5G core, the backhaul coordination and the edge cloud itself. Plus you have all the radio network coordination itself.
F: So for things like congestion control, that's really going to be organized; you're going to be able to do things like organize that on the fly, and be able to use new protocols, like PCC, to actually deliver that and to tailor it for the type of application that's being pushed out. And then clearly scale and resilience: being able to configure these networks really as required. And so, at a high level...
F: Looking at the plans: to deploy appliances, it really is a six-to-seven-month process. You have to buy it, install it, rack-and-stack it, you have to test it, and do that sort of thing. When you're looking at deploying applications or new services in the cloud, it effectively takes me about 11 seconds to spin up an instance within Amazon. And so from that perspective, things will be a lot faster, it'll be a lot more agile; time to market can be reduced substantially.
F: We've definitely seen a massive reduction, of around 80 percent, in getting new applications out into the field. And then, in terms of reduction in the time cycles to deliver applications, that can be reduced by about 90 percent, really because you're not updating servers, you're not spinning up new environments, you're really not deploying this stuff and testing it. A lot of it is already there, and it's simply just a question of using it. And really because of that...
F: ...a lot of these relationships that the operators have had with the traditional vendors are changing. I still think Ericsson and Nokia are probably going to be the only game in town to go to and buy your radio networks from, but when you look at the compute, and really the platform services that go along with that, it's definitely going to be more the hyperscalers and the open-source communities that are really going to provide that. Next...
F: ...slide. And then some of the other things to consider. I mentioned congestion control. Clearly, people typically today will deploy a Linux box, and they'll actually use TCP.
F: Typically CUBIC is part of that, and there's a bunch of issues, particularly when it comes to mobile. The share of the spectrum that's available to you changes a lot; you have these deep packet buffers that are problematic; and then you've got non-congestion loss that's inserted there, so if you run out of bandwidth, or data on your plan, your operator may actually be throttling you.
F: ...the application that you're delivering over the network, with different parameters around the congestion control. If you think of things like video conferencing and streaming, they need very different things: video conferencing has to be very low latency; with streaming, you need to deliver a massive amount of data to actually provide the best quality of experience. So these things can actually be configured, tailored and adjusted on the fly, and it fits really well into that 5G model of virtualization and self-optimization.
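On the Linux boxes just mentioned, the congestion-control algorithm is in fact selectable per socket via the `TCP_CONGESTION` option, which is one concrete hook for tailoring streaming and conferencing traffic differently. A minimal sketch (the option is Linux-only, and the named algorithm must be available in the kernel, so the helper falls back gracefully):

```python
import socket

def request_congestion_control(sock, algo="cubic"):
    """Ask the kernel to use a specific TCP congestion-control algorithm
    for this socket.

    Returns the algorithm name on success, or "default" where the option
    is unsupported (non-Linux) or the algorithm module is not loaded.
    """
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,
                        algo.encode())
        return algo
    except (AttributeError, OSError):
        return "default"

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(request_congestion_control(s, "cubic"))
s.close()
```

A streaming origin might request a throughput-oriented algorithm here while a conferencing backend requests a latency-oriented one, without either touching the system-wide default.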
F: New protocols: people are actually doing a lot of experimenting with QUIC, which is an IETF protocol; I think the RFC was actually just released back in May. It does things like optimize the connection setup; there's no head-of-line blocking, so effectively your video content, the content that's important to you, can actually bypass a lot of the other stuff that's happening on that connection; and you can actually get better transitions between cells and networks.
F: Effectively, each QUIC session has its own unique connection ID, and it almost brings state, I guess, to the protocol. So effectively it allows you to transition from network to network, from IP address to IP address, and it makes that very seamless. And then anycast is already being deployed within the 5G network, so you have multiple servers with the same IP address.
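The seamless migration described here works because a QUIC server looks sessions up by connection ID rather than by the client's address and port. The toy lookup table below illustrates only that indexing idea; it is not a QUIC implementation:

```python
# Sessions are keyed by connection ID, never by (ip, port).
sessions = {}

def on_packet(conn_id, client_addr, payload):
    """Find or create the session for this connection ID.

    When the client's address changes (new cell, Wi-Fi to 5G, ...),
    the same conn_id still maps to the same session state, so the
    transfer continues uninterrupted.
    """
    sess = sessions.setdefault(conn_id, {"bytes": 0, "addr": None})
    sess["addr"] = client_addr   # the network path may migrate freely
    sess["bytes"] += len(payload)
    return sess

on_packet("cid-42", ("10.0.0.7", 5000), b"hello")
moved = on_packet("cid-42", ("172.16.9.1", 6000), b"world")  # new address
print(moved["bytes"])  # -> 10 : one continuous session across the move
```

A TCP connection, by contrast, is identified by the 4-tuple itself, so the same address change would reset it.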
F: The BGP protocol routes you to the one that's most appropriate, based on latency, or load within the server, or even cost, and this effectively helps distribute that workload and distribute the sessions to the best place within the network, the place that can actually serve those sessions better. Next screen.
F: And then really just some of the use cases that are already being trialled on 5G. Clearly, enhanced OTT, they claim, is really the killer use case: 70 percent of the data within the next three or four years will actually be media that's being streamed, both to a mobile device and to the home with 5G fixed wireless access.
F: So again, it gives you that ability to actually broadcast content. And then network slicing really allows operators, or content owners, to buy a piece of the operator network, to take this virtual slice that effectively guarantees them a really good QoE when they're delivering their content to users. Cloud gaming is another one.
Gaming
is
a
lot
of
the
processing
and
rendering
is
actually
done
in
the
cloud
and
then
streamed,
and
it's
very
applicable
to
xr
and
vr
experiences
when
you
really
want
to
offload
a
lot
of
that
stuff
from
the
device,
and
so
it
really
makes
this
available
to
sort
of
cheap
low-end
devices
without
having
to
go
out
and
spend.
You
know
six
to
eight
thousand
dollars
on
on
a
dedicated
gaming
rig
connected
cars
is
another
interesting
example.
F: There are two or three of the large car companies in North America that are already building out their own CDNs that they're going to provide as a paid service when you actually purchase their car. In-venue experiences: being able to actually take XR and mixed reality and stream that to a user as they're sitting in their seat, so they can get updates and they can get metadata; they can actually see, say, what the batter's last six at-bats were against the particular pitcher.
F: So having all that additional metadata and additional information in the stadium is definitely a use case that people like Major League Baseball and the NBA are looking at. And then, finally, content contribution: you have things like news gathering and citizen journalism.
F: A lot of this is actually being uplinked using 5G, and there are actually two companies on the market providing a platform that will allow people who are in the middle of a protest, or whatever, to effectively go live and begin to stream their own content, which can be picked up either by traditional outlets like CNN, or by more of the new media outlets, where it's actually being streamed over YouTube and stuff like that. Next slide.
F: And finally, one of the things that we've been thinking about is: well, 5G challenges the status quo. And again, it depends who you're actually buying this equipment from, but obviously FeMBMS could potentially be a competitor for ATSC 3.0.
F: It provides 8K capability to deliver content, it supports all the standard streaming formats, it's generally codec- and packaging-agnostic, and it also provides things like free-to-air mobile reception on devices without a registered SIM. So on a television set, effectively, with a 5G receiver, I can actually begin consuming my content services over that television set if they're being delivered over the mobile network.
F: Personally, I think it'll be a little bit different. I think with these services there's a lot of synergy, and I think you're going to see a lot of hybrid cooperation. So ATSC 3.0 will probably be the primary channel that you consume your content on, but anything personalized or bespoke will actually be delivered over the 5G network. So it could be multi-views, different camera angles; it could be catch-up; it could be rewind.
F: It could be a way to hyper-monetize content that's being delivered, with very targeted advertising to your television. And I think there are a lot of utility functions as well; if you look at Nokia and Sinclair, they're really beginning to experiment with this stuff, so things like Title VI compliance. So effectively...
F: If I'm watching a television program and there's someone on the screen actually signing what the presenter is saying, it's very intrusive if you're not hard of hearing. But for someone in the family who is hard of hearing, they can simply just put on a set of XR glasses and they'll have that person standing right beside the television, signing what the person is saying. So I think you're going to see a lot of these side services, a lot of these utility services, that are actually going to be used to enhance ATSC 3.0. So from my own perspective, I don't ever see 5G replacing traditional broadcast. And, final slide.
F
As I said, we've been working on this technical brief for probably over a year now. It's really a way to provide an educational resource: to let people know the capabilities of 5G, how applicable it is to streaming, and some of the pitfalls if they do decide to go ahead with it. From our perspective, we're really looking at the technology.
F
There's no business case, nothing like that in there. So for a content owner or a communications service provider, it really does provide, I think, a very high-level overview: outlining the benefits, outlining the technologies they can potentially use, and giving them a place where they can start and begin kicking tires when they're considering delivering media over 5G.
F
And I think that's it.
D
Great, thank you very much, Brian. I think we have a few minutes left for any questions. I think it was a really interesting presentation, and I hope people got that perspective of not just the technology but what it might support. So, any questions for Brian?
C
Hi Brian, I think that was interesting.
C
Are there any provisions you've seen emerging during this research about how to prevent providers from picking congestion controllers that just make theirs faster at the expense of others?
F
Yeah, that's actually a very good question. Certainly some of the things we've been looking at as part of this are methodologies like PCC, performance-oriented congestion control. Unlike CUBIC, PCC is really highly configurable, so you can tailor it for a particular use case. There are really two parts to the PCC infrastructure.
F
One is basically a utility function that looks at the network: at loss, latency and so on. It creates what they call a utility score, and they can use that utility score to have a rate selection algorithm within the caches decide what rate to use. The nice thing about it, though, is you can actually have multiple utility functions, so it's not one-size-fits-all like CUBIC.
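As an illustration of the idea, a PCC-style utility score can be sketched in a few lines. This is a toy, not the actual PCC utility function from the literature; the weights and the rate-probing helper are assumptions made for the example:

```python
def utility_score(throughput_mbps, loss_rate, rtt_ms,
                  w_loss=10.0, w_rtt=0.05):
    """Toy PCC-style utility: reward throughput, penalize loss and latency.

    The weights are illustrative assumptions; a real deployment would tune
    (or entirely swap) the utility function per service class.
    """
    return (throughput_mbps
            - w_loss * (throughput_mbps * loss_rate)
            - w_rtt * rtt_ms)


def pick_rate(candidates_mbps, measure):
    """Probe candidate sending rates and keep the one with the best utility.

    `measure(rate)` returns (throughput_mbps, loss_rate, rtt_ms) as observed
    when sending at that rate.
    """
    return max(candidates_mbps, key=lambda r: utility_score(*measure(r)))
```

Swapping in a different `utility_score` per service class (low-latency conferencing vs. high-quality streaming) is what makes this style of rate control configurable in a way that a fixed algorithm like CUBIC is not.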
F
Basically, that utility function can be tailored in many different ways. You can have policy around things like video conferencing, where you want extremely low latency, and you can have policies that manage the streaming part, where you're looking at delivering very high quality video, perhaps at the expense of some latency. So it's very highly tailorable and highly configurable.
F
By implementing something like PCC, you get the ability to adjust the rate control on the fly, and you can adjust it on a per-service basis. And again, since it's being managed by the utility function, it knows about all the other services sitting on the network, and they can play really nicely together. From my perspective, it's something that's actually very, very interesting.
C
Sure, okay, so you're saying that the, I guess I don't know what you call this, the deeper edge cloud operators would be providing a sort of interface for real-time bidding for the desirable traits, or something, for the services they're running on top of it, or...
F
So effectively it really just replaces the TCP CUBIC that's there today, but it gives you a lot more flexibility when determining the services and determining policy around those services. The other thing, so...
F
Exactly. And the other thing, too, when you look at edge: edge means a lot of things to different people, and the last mile means a lot of things. So how you want to configure something that's sitting at the edge of the operator network, or really down at the cell site, may be something a little bit different from what an Akamai or a Limelight, paired with the operator network, might do.
C
All right, thanks. And then the one other one I had, if there's nobody else in front of me: you said it would be kind of like an AWS-instance sort of setup with the managed service deployment, as I understood it. And I was wondering: with the cloud providers, usually you're explicitly picking which data center you're going to run in, from a relatively small-ish pool, and it sounds like with these 5G deployments
C
you're going to have a lot more locations. Is there anything, again in the course of this research, that you've seen about how that will be interfaced, or is that just sort of up to the service operator to figure out?
F
No, there definitely are going to be a lot more data centers, deployed at various tiers within the operator network, and again both the ETSI MEC implementations and the service operators are really adopting the same approach.
F
They're going to attempt to run that function as close as possible, in the best possible location for the client to access it. So for streaming media, having it right at the edge of the network really makes sense, to get low latency and sustained data throughput and so on. But the ability is there, not to fail up, but to actually deploy up in the stack.
F
So again, if the edge is really busy and you can go up to the next tier while adding maybe a little bit of latency, they're actually going to deploy it up there. Or if you need global coverage, or broader coverage for a particular application, they can deploy that higher in the stack as well.
F
So when you're orchestrating this, the MANO layer really has a fair amount of intelligence to decide where to run this, based not only on the performance requirements but on location, and on things like how busy that edge is at the minute; maybe it's not possible to deploy it there.
D
All right, thank you. One question from me: I'm a little curious if you have a timeline for when the SVA 5G technical brief will be available.
F
It should be available within a couple of weeks. We're just going through one final edit at the moment, and as soon as that's done it's going to be published. There's a placeholder for it already on the website.
D
That's great. I'm sure the MOPS working group would love to have a pointer when it's available, so I'll put it in my diary to make sure to share that. Great, thank you very much, Brian.
C
Yeah, as note taker, I hope it's okay: I went and searched for a link to it. It looks like that'll be a stable link, but I will update the notes if that changes.
D
Awesome, all right, thank you much, and Kyle's getting us ready for...
C
Right. Hi, I'm Jake Holland. We've been working on the streaming operational considerations draft; go ahead to the next slide. I'll just be giving an update on what we've been doing.
C
So there are a few remaining open issues in the repo. There's actually one brand new one from, I think, two minutes before this meeting or so; that was a suggestion from Mike English, as part of the work we're doing on the advertisement section.
C
What we'd like to do: we think the advertisement section, its impact on streaming considerations, is actually pretty important to get in there. For the rest of them, we're kind of thinking we'll do what we can, maybe take something like two weeks, and then go to last call even if we end up dropping these. And I guess I would invite comment on whether you think these are right or not. The new one,
C
by the way, is regarding segment size and keyframe intervals, just a sort of in-passing suggestion that came with some of the text from the proposed advertisements section. It might be worth including, but I've tentatively slotted it right under the further reading in the non-blocking section. So I can give some updated slides for the records on that.
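For background on why segment size and keyframe intervals are coupled: a segment has to begin on a keyframe, so segment durations normally come out as whole multiples of the GOP length. A minimal sketch of that constraint (illustrative only, not text from the draft):

```python
def valid_segment_duration(segment_s, keyframe_interval_s):
    """A segment must start on a keyframe, so its duration should be a
    whole multiple (at least 1x) of the keyframe (GOP) interval."""
    ratio = segment_s / keyframe_interval_s
    return abs(ratio - round(ratio)) < 1e-9 and round(ratio) >= 1

# e.g. 6-second segments with a 2-second GOP line up; 5-second ones don't.
```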
C
And I guess, if we don't have anyone commenting on that at the moment, we can also include the next slide. There's one other issue that I've listed there, likewise an outcome of a recent workshop, that Ollie brought: this is about cloud-based encoding, packaging and origin workflows.
C
There's some really interesting ongoing work. We think it's very likely to be relevant to operational considerations for streaming, but it's kind of early days and we don't think it's solid enough to dig into very deeply. There might be something we could say, which I guess I would include maybe as a nice-to-have: just a pointer to the few resources that are out there, with a mention that it's worth researching. Or maybe we'll just sort of...
C
If there's not enough there, then we again could just go on without it. We don't think it's developed enough to really put a lot into it, and we probably don't think we should hold up the doc waiting for it. So yeah, that's kind of where our thinking is right now. What we'd like to do is, I guess, next slide.
C
That is: move to last call in two weeks, after one last chance to see if there are any issues we get inspired to clear out, and to finish up the ads section, and then just do the last call and hopefully have that go through cleanly.
C
I guess I'm interested whether anybody has been keeping up with the changes. There have been a lot of them recently, but we think we're almost wrapped up, and failing any input, I guess our plan is to just go ahead with that proposal.
D
Yeah, so I would particularly like to hear people's comments. I think you've articulated the list of known remaining issues in the document and your plan for dealing with them. Even reactions in terms of whether people think these are reasonable approaches or not would be helpful at this point, so it doesn't leave you feeling like you are talking to dead air.
B
General feedback, Jake, is: I think it's going in a really good direction, and I have been watching; when you guys do your commits my inbox blows up, so I can tell when you and Spencer are on the job and Ali Begen is doing stuff. But I think it's gone in a really good direction, and other than that I don't really have any changes to suggest, of course. You're doing well, thanks.
D
Well, you know, you earned it, yeah. All right, so we had talked about doing the last call for this document in this time frame, and it sounds like we are more or less on track for that, given your plan for handling it. So I think that, yeah...
D
I said "more or less" in the context of the milestone discussion that we can have later. I think that's in pretty solid shape, yeah. So I guess I would ask, sort of going once, going twice: are there any other comments from people? Thumbs up, thumbs down, can't wait to see it in print? Don't save your comments for when it's actually in last call and then give Jake grief.
A
I mean, you know, if we go back to this slide, right: this is a good illustration of when comments tend to come in during the life cycle of a document.
C
That's the reality. So that's all I got, and thank you.
D
All right. Renan, are you looking to make a comment on the document, or are you getting lined up to present?
G
We got very helpful comments, suggestions and questions at the last working group meeting, and have incorporated them as updates to the draft. The updates are in the form of a discussion of the allowable time budget beyond which the problem of motion sickness, caused by motion-to-photon delay, occurs. So let us take a look at these updates. Next slide, please.
G
This synchronization is necessary to avoid the motion sickness that results from a time lag between when the user moves their head and when the appropriate video scene is rendered. This time lag is often called motion-to-photon delay. Studies have shown that this delay can be at most 20 milliseconds, and preferably between 7 and 15 milliseconds.
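Those figures amount to a simple latency budget across the pipeline. The stage breakdown below is an invented example; only the 20 ms ceiling and the 7-15 ms preferred range come from the slide:

```python
MAX_MTP_MS = 20.0           # upper bound cited by the studies
PREFERRED_MS = (7.0, 15.0)  # preferred range

def motion_to_photon_ms(tracking_ms, network_ms, render_ms, display_ms):
    """Sum the pipeline stages between head motion and photons on screen.

    The four-stage split is an illustrative assumption, not from the draft.
    """
    return tracking_ms + network_ms + render_ms + display_ms

def within_budget(total_ms):
    return total_ms <= MAX_MTP_MS

# With edge offload targeting ~1 ms network latency, more of the budget is
# left for rendering: 2 + 1 + 9 + 6 = 18 ms, inside the 20 ms bound.
```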
G
One way is to use prediction techniques to mask latencies. Another way is to offload these computationally intensive tasks to edge devices. Emerging 5G standardization efforts by the 3GPP target an ultra-low latency of 0.1 to 1 milliseconds between an edge server and user equipment such as AR/VR devices.
D
So, by way of background, I will remind people, as Renan did, that when we discussed this document before and agreed to take it on as a working group document, it was with the understanding that there were people involved in the working group who were willing and interested to review and contribute. So I think this is a nicely set-up opportunity to give some particular input, although if you have feedback to give beyond the particular areas the authors have asked about, I'm sure that would be welcome as well.
D
There, I've given everybody enough air cover to think of their questions. Who would like to jump in with some input to the authors?
D
I'm mildly tempted to actually start naming people.
A
So I can't remember: did we take a hum last time on who would be willing to contribute, or, I mean, a virtual...
D
All right, so we were not quite... So the poll results were 10 to 0 for adoption, out of 28 participants. We took it to the list for confirmation, which we did. And then on "willing to review and contribute text" it was nine people to one; but we didn't capture names, sadly, except for one. So, Colin, what do you...
A
Yeah, so I will... I did not, so: I've read the draft. I have not submitted any comments, but I will take it upon myself to set myself a deadline because, as we know, deadlines are a good forcing function.
D
So should we say two weeks sounds like a reasonable deadline for the working group to chime in? I'm going to capture someone's name now. Some names now.
D
Chris, I'm putting your confession down as volunteering to review it. Spencer is in the...
H
Yeah, thank you. I was just asking if this document was in GitHub anywhere. I just did not see a reference to it in the document itself, but I wanted to ask. Thank you.
G
No, it's not on GitHub, but I could do that, yes.
D
I think it would be helpful at this point, if people are willing to actually sign up to review the document... I think it would be helpful to have it in GitHub, sort of as a good balance to the participation there. Kyle, I think you're our GitHub maestro, so can you...
H
Yeah, I was just going to say: we've probably gotten half the comments that we've received on the operational considerations document in GitHub. So that's a strategy that has worked well for us.
A
Yeah, I mean, so yeah: if you want to give me an action to work with the authors to get the draft onto GitHub, I'm happy to do that.
A
A pointer in the chat about it would be helpful, especially if it's in markdown. I don't know what tools you're using to author it, but it might be worth looking at Martin Thomson's internet-draft template. That does lots of goodness if you start with a markdown document, but we could talk about that offline.
D
Okay, great. Thank you, Renan, and thank you in advance to all the volunteers.
D
I look forward to making more progress there. Thank you.
A
Yeah, make sure that it's using the right microphone.
I
Better? Okay, okay, all right. Hi, my name is Lijun Dong. I'm going to present this new draft on the use case of packet significance difference with media scalability.
I
This is a new draft with two other co-authors, Kiran and Richard. Before we go to the slides, the real content, I want to give a small introduction to this draft. With the dominance of video traffic on the internet, selectively dropping packets from competing media streams could become a complementary mechanism when dealing with network congestion.
I
It is worth investigating which packets in the outgoing queue should be dropped to minimize the impairment to the end user's quality of experience for video streaming. Due to the various scalability designs in modern video codecs, it is not hard to observe that significance differences may exist among video streaming packets.
I
Let's take a look at these two figures from the reports. Recent studies have shown that IP video traffic will be 67% of all consumer internet traffic by 2021, up from 51% in 2016, and we can see that it will be increasingly likely that multiple streaming flows will share a bottleneck link, if one exists, which could inevitably cause network congestion.
I
Let's take a closer look at modern video codecs. A visual scene in a video is represented in digital form by sampling the real scene spatially and temporally. Correspondingly, a modern video codec would incorporate three types of scalability: namely temporal scalability, spatial scalability and quality scalability.
I
Temporal scalability refers to the scalability designed to allow the frame rate of the video stream to be varied, using inter-layer prediction, and spatial scalability represents the spatial resolution variations or differences with respect to the original image frame. Quality scalability is also commonly referred to as fidelity scalability; each spatial layer could have many quality layers. For example, SVC, scalable video coding, divides a single video stream into multiple representations and layers: a base layer plus enhancement layers.
I
Specific to dealing with network congestion for video streaming: by leveraging the various types of scalability in modern video codecs, the video content is made available at a variety of different bit rates, and rate control and video adaptation methods have been proposed to minimize the possibility of network congestion.
I
Today's adaptive streaming technologies are almost exclusively based on HTTP, with an adaptive bitrate algorithm, ABR, on the client side that performs the key function of deciding which bitrate segments to download, based on the current state of the network. Go to the next slide, please.
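A client-side ABR decision of the kind described can be sketched minimally. A throughput-based rule like this is just one common heuristic, not the algorithm of any particular player, and the 0.8 safety factor is an assumption:

```python
def choose_bitrate(available_kbps, measured_throughput_kbps, safety=0.8):
    """Pick the highest rendition whose bitrate fits under a safety
    fraction of the recently measured network throughput."""
    budget = measured_throughput_kbps * safety
    fitting = [r for r in sorted(available_kbps) if r <= budget]
    return fitting[-1] if fitting else min(available_kbps)

# e.g. with a ladder of [500, 1500, 3000, 6000] kbps and 4000 kbps measured,
# the budget is 3200 kbps, so the 3000 kbps rendition is chosen.
```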
I
We acknowledge the benefits offered by various congestion control and congestion avoidance mechanisms, but we would like to point out that the feedback and rate adaptation might not be quick enough to cope with the dropping of packets on the wire. And we know DiffServ has been proposed for managing resources, such as bandwidth and queuing buffers, on a per-hop basis between different classes of traffic, so internet traffic may be separated into different classes with differentiated priorities.
I
This allows differentiated treatment for latency-sensitive traffic like video streaming. However, with video traffic dominating internet flows, the media streaming applications within the same class still compete for network resources. So imagine that 67%, or even more in the future, of internet traffic is video traffic, no matter which class it is classified into.
I
If the transport layer protocol is TCP, then after a timeout, or after duplicate acknowledgements are received at the sender, the sender may retry sending the dropped packet, up to the maximum number of retransmissions. The retransmission of packets wastes network resources, reduces the overall throughput of the connection and, of course, also causes longer latency for the packet delivery, which would affect the user's quality of experience. Go to the next slide.
I
So, as I mentioned earlier, there are various scalabilities designed or implemented in modern media codecs, and we can see that some bits of the encoded media stream are more important than others: bits belonging to the base layer usually are more significant to the decoder than bits belonging to the enhancement layers. For example, I-frames hold complete picture data and are frequently referenced by the...
I
So I-frames are the most essential in a media stream, within their group of pictures, and have the most effect on perceived video quality. As for P-frames: a P-frame stands for predicted frame; it allows the macroblocks to be compressed using temporal prediction in addition to spatial prediction. Video scenes with a low level of movement are less sensitive to both B-frame and P-frame packet loss, while a lost keyframe can impact the remaining part of the GOP, or group of pictures.
I
A lost B-frame has only local effects in slowly moving content, or content with a large static background, meaning that if you lose a B-frame, the decoder can just ignore it. So we can probably say that the I-frame is the most significant frame in its group of pictures, while a P-frame, or a B-frame, may be less important than the keyframe, particularly in slowly moving content.
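One way to make the I > P > B ordering concrete is a per-packet significance score. The numeric values below are arbitrary; they only encode the relative importance just described:

```python
# Illustrative significance scores: higher means "drop last".
FRAME_SIGNIFICANCE = {"I": 3, "P": 2, "B": 1}

def drop_order(packets):
    """Sort packets so the least significant (B-frames) come first in the
    drop order and keyframes come last.

    `packets` is a list of (sequence_number, frame_type) tuples, where
    frame_type follows the I/P/B distinction above.
    """
    return sorted(packets, key=lambda p: FRAME_SIGNIFICANCE[p[1]])
```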
I
As another example, macroblocks can be identified that represent objects in a region of interest. Region of interest means that you are mostly interested in certain objects in the video,
I
like the football player, or some movie star you like. This is called the ROI, or region of interest. So those macroblocks identified as representing objects in the ROI are likely more important to the end users than any other macroblocks in the non-ROI regions, so the packets carrying ROI macroblocks in the media stream need a higher priority to be retained, compared to other packets carrying non-ROI macroblocks.
I
So let's go back to the end user's perspective: what would the end users prefer, from the perspective of the end user's experience and the user's expectation?
I
...something that is above the tolerance threshold, rather than getting nothing at all for a few seconds. Or a user may be particularly interested in a certain group of blocks belonging to interesting objects in the video content, the ROI, as it is named, so it is necessary to prevent the ROI blocks from being lost during transmission. Here, I don't know if you can see the resolution difference; the less dark picture on the left side
I
is the original picture, which is transferred through the network. Given the choice of getting nothing versus degraded quality in resolution, maybe the end user would prefer receiving something in a lower resolution instead of getting nothing; or receiving the ROI blocks with some of the background missing is fine to the end user, instead of getting nothing at all. Okay, go to the next slide, please.
I
The network could selectively drop packets in a differentiated manner according to such information. This could avoid retransmission or delay of those packets with higher significance, could reduce the latency experienced by end users, and could maintain the continuous streaming of the media.
I
This is achieved at the cost of dropping those lower-significance packets. So in order for a network to be able to treat packets of media streams in a differentiated manner, and at a finer granularity than DiffServ, the application should reveal some information to the network to enable this selective packet dropping: for example, the receiving end user's preference on the media quality; some labeling of the packets, or of the parts of the packets, that correspond to the receiver's interesting objects, the ROI; or some characteristics of the media content
I
contained in the packets, for example the frame type, whether it is an I-frame, B-frame or keyframe, or some movement level, whether it is a static background or a highly moving picture. This information would help the network decide which packets may be of higher significance than others, and then that selective packet dropping could happen in the network.
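Putting those pieces together, a queue that consults significance labels when it has to shed load might look like the sketch below. The labeling scheme is hypothetical, since the draft deliberately leaves open how this metadata would be carried:

```python
import heapq

class SignificanceQueue:
    """Outgoing queue that, when full, evicts the lowest-significance
    packet instead of tail-dropping the newest arrival."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # entries: (significance, arrival_seq, payload)
        self._seq = 0

    def enqueue(self, payload, significance):
        self._seq += 1
        heapq.heappush(self._heap, (significance, self._seq, payload))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)   # drop the least significant packet

    def packets(self):
        """Return retained payloads in arrival order."""
        return [p for _, _, p in sorted(self._heap, key=lambda e: e[1])]
```

Under congestion this sheds B-frame or non-ROI packets first, keeping keyframes and ROI data flowing instead of dropping whichever packet happened to arrive last.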
A
I have someone in the queue. Carol is in the queue, so I recognize Carol.
J
Thanks. I have a question about the macroblocks. I'm not aware that they're exposed anywhere, like to the container, or even to the application layer.
I
No, let's relate it to the codecs: the application could provide some metadata, or some information about the blocks contained in the packet payload. So it's not that the macroblocks themselves will be seen in the network; just some information about the packets containing the macroblocks could be revealed to the network, which could help it decide whether this packet is more significant than the other packets in the output queue, right, yeah.
J
All right, I got that, but in order for someone to tell, like, "hey, this packet contains this macroblock", the codec actually needs to expose that information somehow, right? So, like, whatever we use today for video delivery, most of the time it's HTTP, and some DASH or progressive containers and so forth. There is no such information in there, so that part seems to be missing.
I
Yeah, that's correct. That's why we think there could be some way to incorporate this information at the application layer, from the source, which has this knowledge, and then it would be visible at the network layer as well, I think.
J
Yeah, I have another observation. You mentioned in the presentation that information at the packet level would be useful, so we can selectively drop it at routers and so on, right? I wonder if we can actually leverage what we have today in transport. QUIC, for example, seems to already provide capabilities like this: it has datagrams, so you can, for example, put P and B frames on datagrams and I-frames on a QUIC stream, and then you basically get reliable I-frame delivery guarantees and everything else is going to be best-effort.
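The stream/datagram split suggested here could be sketched as follows; `send_stream` and `send_datagram` are hypothetical method names standing in for whatever a real QUIC library exposes (streams vs. the unreliable DATAGRAM extension):

```python
def send_frame(conn, frame_type, data):
    """Route media frames by significance: keyframes go over a reliable
    QUIC stream, predicted frames over lossy datagrams.

    `conn.send_stream` and `conn.send_datagram` are assumed names, not
    the API of any specific QUIC implementation.
    """
    if frame_type == "I":
        conn.send_stream(data)      # retransmitted until acknowledged
    else:                           # "P" or "B"
        conn.send_datagram(data)    # best-effort, never retransmitted
```

The appeal of this design is that loss of a datagram-borne P or B frame degrades quality only locally, while the stream guarantees that every GOP eventually gets its keyframe.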
I
Yeah, that's a good implementation detail. I only want to bring the concept to your attention: if the network layer could have this information, whether it is implemented in QUIC, with some options there, or in HTTP, that's okay, or even at the network layer; any implementation,
I
whichever is used here, could be discussed later. But I want to bring to your attention that some of this information from the application layer, from the source, could actually help the network deal with network congestion.
I
We do not think it's an alternative way of dealing with network congestion, but it is a complementary way to help with fighting network congestion.
C
No, I think, yeah, okay, here we go.
C
All right, it gave me an error on that, so it was retrying. All right, great, thanks. Yeah, I guess: is there a proposed API for this, or a paper, or a draft, or something?
C
Possibly. I guess you mentioned DiffServ; I'm not sure which use cases you had in mind, but in cases where DiffServ is appropriate, I think you already can...
C
...can set... So if you're sending a UDP stream of some sort, or RTP or whatever, and you have access to the macroblock information, I think you can set DiffServ differently for the different packets that you're sending. The APIs to do this would be just a sockopt, I believe. So I'm not sure whether that will meet your needs or not; certainly for streaming to end users that's going to have some trouble, but...
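Marking individual datagrams really is just a sockopt, as noted. A minimal sketch for a UDP sender, on platforms where `IP_TOS` is settable (AF41 is a standard DSCP often used for video; whether routers honor the marking is a separate question):

```python
import socket

AF41 = 34  # DSCP codepoint AF41, commonly used for interactive video

def send_with_dscp(sock, data, addr, dscp):
    """Set the DSCP bits (the upper six bits of the TOS byte) before each
    send, so individual datagrams of one flow can carry different markings."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    sock.sendto(data, addr)
```

Because the option is per-socket rather than per-packet, setting it before every send is how you get differentiated markings within a single flow, which is the "different classes in the same flow" case raised next.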
I
We can have a little more detailed discussion with you, I think. Yeah, that could be one way of implementing this. If you think there is a way of differentiating different packets in different flows within the same class, then maybe that's a way of implementing it as well.
C
I meant more like different classes in the same flow.
H
Yeah, I think we're either talking about something that is outside of QUIC, which DiffServ would be, if that would work, or something that gets added to a QUIC public header,
H
if I'm understanding this correctly. And my point is: unless you're terminating a QUIC connection someplace in the network and regenerating it on the other side of that, and that's actually a conversation that we could have, but I think there's a real fundamental conversation to have about how this is going to work. Because if you are trying to do this based on QUIC packets, anything that's running over
H
HTTP... there's a lot of stuff in a QUIC packet that you can't read unless you're terminating it, it seems to me. But I'm sure we'll talk.
I
Yeah, thanks for the comments. Yes, I want to have this kind of conversation going, because I haven't looked into the details of which upper-layer protocols to use. For example, you mentioned QUIC, and whether there are any limitations for this concept to be implemented on top of QUIC as the transport layer protocol. So I think that's something we can discuss more.
I
The only intention of this document, or presentation, is to bring to your attention that maybe there is at least something we can leverage, from the perspective of the end user's preference. If the application could provide more information about the packets, or the packet payload, then the network could actually fight congestion better than it does right now, and then maybe there is no severe packet dropping, only some slight packet dropping.
I
So that's something I want to bring to your attention, and then we can have more discussion on the solution side. Yes.
D
Yeah, I think that's a really important point, and I think there have been some efforts in the past to do more of this sort of application, network and transport information-awareness sharing. But, as you point out, so much of the traffic on the internet today is video that we really do have to figure out a better way of handling this.
D
So I think it's interesting stuff, and I think the biggest challenge really is to get people who are aware of how applications work together with people who are aware of how networks work, and figure out how to get them talking without violating the various...
I
Very true. Actually, it needs help from the codec side, you know, to...
I
Knowing what is included in the packet payload, and how the codecs work, could help the network understand what is going on in the packet. So I think that definitely requires two types of expertise. Maybe a lot is lacking in the IETF about the codecs, but I hope we can bring some of the liaisons, or maybe some experts here, just to help with the codec side.
D
Great. Spencer, you're not done, you're still in the queue... oh, Spencer's out of the queue, okay. Great, thank you very much, and please do share onward materials with the MOPS working group; we'd be happy to hear thoughts and questions as they evolve.
D
So, as you mentioned, you have a paper that you might be able to share: if you can join the MOPS working group mailing list and perhaps share information there, hopefully we can get more discussion and pointers to appropriate things to follow up on in the IETF context.
J
Hello, can you guys hear me?
A
J
You can? Awesome. All right, hi everyone, my name is Kirill Pugin. I'm going to talk about the draft that we published a couple of weeks ago; it's about RUSH, a reliable (unreliable) streaming protocol.
J
All right. First, I would like to cover a little bit of the motivation for why we decided to build something new. For live video streaming specifically, there are a lot of different use cases and different requirements: in some cases latency is important, in some cases quality is more important. For example, live streaming of a soccer match may be okay with 10-30 seconds of latency.
J
However, interactive streams like game streaming benefit from really low latency; five seconds is not really low, but it can be good enough. We also wanted more extensibility: support for new audio/video codecs, some sort of RPC-like mechanism so we can implement client-server interactions, and multi-track support, including captions. Next slide, please.
J
A lot of our use cases are mobile devices, and people on mobile migrate from cell to Wi-Fi networks and between different cell towers; they also drive through tunnels, so in general network conditions change over time. That's one part of availability. The other part of availability is that, in order to ingest live video at scale, there is a lot of pressure on the server infrastructure, so in many cases servers need a way to go down for maintenance, and we need to do that in a graceful manner.
J
Quality is super important as well, and having better signals from the network layer to drive audio/video bitrate selection and match it to network conditions is super important. That's one of the reasons we decided to utilize a new transport protocol, QUIC, for this work. Next slide, please.
J
Obviously a bunch of different protocols exist today. WebRTC has a lot of flexibility, but its primary focus is on the peer-to-peer video-calling experience, so in my mind it's a little more latency-focused, which makes it a bit more difficult to do trade-offs, and it's also pretty complex. RTMP is the one we had running, and it was not very flexible: no new codec support, and some implementations don't even try supporting QUIC connections.
J
Other types of protocols, like the HTTP-based ones such as DASH or CMAF ingest, do not allow per-frame control, which makes it hard to control latency. Next slide, please.
J
It was built as a replacement for RTMP, with the goal of providing support for new audio/video codecs, extensibility, and multi-track support. In addition, it gives the application the option to control data-delivery guarantees by utilizing QUIC streams. As the previous presentation discussed, I-frames are important while P and B frames may be less important; this actually allows us to provide different delivery guarantees based on frame type.
J
Next slide, please. At its core it's a frame-based protocol: client and server exchange frames, or messages (we can call them messages as well), and each frame has a common header which consists of length, ID, and type. Next slide, please. The length determines the size of the whole frame. The ID is a frame sequence number, so each frame has a sequence number, and every new frame must have a sequence ID greater than the previous one. The type just differentiates the different kinds of data.
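The common header just described (length, ID, type) can be sketched as a simple encode/decode pair. This is only an illustrative sketch, not the draft's actual wire format: the field widths (64-bit length and ID, 8-bit type) and network byte order are assumptions made here for the example.

```python
import struct

# Illustrative header layout (field widths are assumptions, not the draft's):
# length (u64, size of the whole frame), id (u64, sequence number), type (u8).
HEADER = struct.Struct("!QQB")

def encode_frame(frame_id: int, frame_type: int, payload: bytes) -> bytes:
    # Length covers the header plus the payload, i.e. the whole frame.
    length = HEADER.size + len(payload)
    return HEADER.pack(length, frame_id, frame_type) + payload

def decode_frame(buf: bytes):
    # Read the fixed header, then slice the payload using the length field.
    length, frame_id, frame_type = HEADER.unpack_from(buf)
    return frame_id, frame_type, buf[HEADER.size:length]
```

A real receiver would also enforce the rule stated above: every new frame's sequence ID must be greater than the previous one.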
J
Next slide, please. We define seven frame types, but obviously they can be extended: a connect frame, a connect acknowledgement, video frame, audio frame, error, end-of-video, and a go-away frame. Next slide, please. RUSH actually defines two modes of operation: normal mode and multi-stream mode.
J
In normal mode we use one bidirectional QUIC stream to send and receive data. Using that one stream guarantees reliable, in-order delivery, so the application can rely on the transport layer to retransmit lost packets; the performance and behavior are very similar to just using a TCP connection. Next slide, please.
J
In normal mode, the client sends a connect frame on the bidirectional QUIC stream and then continues sending audio and video data on the same QUIC stream. We only support one video being sent on one connection. The server, upon receiving the connect frame, replies with a connect acknowledgement. The connect/connect-acknowledgement exchange is a mechanism to negotiate the features and versions of the protocol, so if the server doesn't support something it can simply reject. Because we use this one single QUIC stream in this mode, frames arrive one after another, so the server doesn't need to worry about reordering data or making sure no data is lost; the transport layer guarantees that. Once the client is done streaming, it sends an end-of-video frame and closes the QUIC connection. Pretty simple. Next slide, please.
J
One of the problems in this mode is that if one of the packets gets lost (let's say it belongs to the V2 frame in this diagram), then all the frames sent after it will not be available to the server, not visible to the server. They may still arrive at the transport layer, but QUIC will not expose them until V2 is actually retransmitted and fully received, so the server cannot do anything. This is a variation of head-of-line blocking, and it can affect latency, introduce jitter, and so on. Next slide, please. Multi-stream mode essentially tries to address head-of-line blocking, and also gives the application much more control over delivery guarantees.
J
In multi-stream mode, every new frame, audio or video, is sent on a new bidirectional QUIC stream. Since QUIC streams are independent of each other, this allows the server to receive data as it arrives and not wait for retransmissions of lost packets on unrelated streams. Next slide, please.
J
In this mode the client still creates a QUIC connection, creates a bidirectional stream on that existing connection, sends a connect frame, and then creates a new stream for every audio/video frame.
J
Because, as I said, we use different streams, frames arrive out of order, so the server needs to be able to restore that order, and it can use the frame IDs for that, as well as to detect whether some frames are missing. The client can stop retransmission by essentially resetting the corresponding stream; that way it can provide different delivery guarantees depending on the frame type. So, for example, I-frames we can always retransmit, while P and B frames we may not need.
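The server-side reordering described here (restoring frame order from per-stream arrivals using frame IDs, and using gaps to detect missing frames) can be sketched roughly as follows. The class and method names are ours, not from the draft, and consecutive integer frame IDs are an assumption for the example.

```python
class ReorderBuffer:
    """Restore frame order from out-of-order per-stream arrivals using frame IDs.

    Illustrative sketch: frame IDs are assumed to be consecutive integers.
    """

    def __init__(self, first_id: int = 1):
        self.next_id = first_id
        self.pending = {}  # frame_id -> payload, arrived ahead of order

    def on_frame(self, frame_id: int, payload: bytes):
        """Accept a frame from any stream; return the frames now deliverable in order."""
        self.pending[frame_id] = payload
        ready = []
        while self.next_id in self.pending:
            ready.append((self.next_id, self.pending.pop(self.next_id)))
            self.next_id += 1
        return ready

    def missing(self):
        """IDs between the next expected and the highest seen: gaps, possibly lost or reset."""
        if not self.pending:
            return []
        return [i for i in range(self.next_id, max(self.pending)) if i not in self.pending]
```

A gap reported by `missing()` could mean either a frame still in flight or a stream the sender reset, so a receiver would combine this with stream-level signals before giving up on a frame.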
J
So, I talked a little bit about reliability and being able to reconnect. Reconnection can be triggered by the client or by the server. In the client's case it's usually when the connection got lost, either because we drove through a tunnel or for some other reason. In this case the client opens a new connection, closes the previous one, continues with the normal connect flow, and continues sending data on the new QUIC connection.
J
In the case of a server-initiated reconnect, the server sends a go-away frame, and the client may keep sending frames on the current connection even after the server sends go-away. That's usually useful, and somewhat required, to drain the existing GOP (group of pictures); otherwise that data is going to be lost. The client then establishes a new connection, follows the normal connect flow, and continues sending data on the new QUIC connection.
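The go-away drain behavior described here can be sketched as client-side state: keep the old connection alive to finish the current GOP, open the replacement in parallel, and cut over at the next GOP boundary. All names here are ours, and the connection object is a stand-in, not a real QUIC API.

```python
class IngestClient:
    """Sketch of client-side go-away handling: drain the current GOP, then switch."""

    def __init__(self, connect):
        self._connect = connect      # callable returning a new (stand-in) connection
        self.conn = connect()        # current connection used for sending frames
        self.draining = False

    def on_goaway(self):
        """Server asked us to move; keep the old connection to drain the current GOP."""
        self.draining = True
        self.next_conn = self._connect()  # open the replacement (and send CONNECT there)

    def send_frame(self, frame, is_gop_start: bool):
        if self.draining and is_gop_start:
            # A new GOP begins: cut over to the new connection, drop the old one.
            self.conn.close()
            self.conn, self.draining = self.next_conn, False
        self.conn.send(frame)
```

Cutting over only at a GOP boundary is what keeps the switch graceful: the old server receives a complete GOP, and the new server starts from a decodable point.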
J
It's up to the server whether to close the connection immediately after sending go-away. Usually it's preferable that the server doesn't close the connection right away and allows the client to finish sending the current GOP. Next slide. I think that's probably it; questions? And can we move to the next slide as well: we have a side meeting on Friday if anyone is interested in joining, and we'll be happy to answer additional questions there.
D
Great, thank you very much, Kirill. We have a queue. James?
D
All right, if James is being shy, how about...
K
J
Oh, for RUSH? Yeah, there is a... there is no...
K
J
Yeah, there is a draft that we published a couple of weeks ago. I think it should be in the meeting materials; if it's not there, I can send a link.
K
J
So congestion control is basically at the transport layer, right? Because we use QUIC, we use whatever QUIC provides. However, the beauty of QUIC is that you actually can change the congestion control, because it's all in the application layer. So in our deployments we used BBR, and we tested Cubic and we tested Copa as congestion controls.
J
There was a presentation at the congestion control working group at the Singapore meeting, a year or two ago, that provides a lot of information comparing the performance of those three. Right now, I believe, we use BBR and Copa as the congestion control within QUIC.
J
So QUIC itself is a UDP-based protocol, but it does assume there is congestion control. RUSH on its own does not specify congestion control; congestion control is implemented at the transport layer, at the QUIC layer, and it just happens that we have multiple implementations. And by the way, we also did another test, disabling QUIC's congestion control at the transport layer and implementing it at the application layer using an RMCAT-style scheme.
C
I think I did see this draft. It seemed like it had a lot of video-specific stuff in it, codecs and references to audio versus video, and these were QUIC extensions, if I remember right. Have you thought about making it more generic? Again, here I'll point to SRT as an example.
C
Yes, there are things that... so the one feature that keeps sticking in my head is the way you can say it's reliable as long as it's before a deadline; if it's after the deadline, then you drop it instead, so you don't do the head-of-line blocking. Are there features like that that you could embed into QUIC in a sort of transport-feature sense, rather than doing it as "this is video, this is audio" and treating it accordingly?
J
So let me answer the first question, about making it more generic. The audio/video specifics are there, but at a high level the whole thing is pretty generic: your frame can be anything, so you can send a text file inside a frame, and it's up to you as the application to specify delivery guarantees on that frame. You can set a timeout, basically deadline delivery.
J
So if it's not delivered in time, you just stop retransmitting. To answer the question about QUIC: yeah, I would love that. I think QUIC went in a little bit of a different direction with the introduction of QUIC datagrams, where essentially there are no guarantees: they're going to get dropped, and that's it.
J
Our implementation does not use QUIC datagrams, because it was developed and deployed before QUIC datagrams were even introduced or proposed as a draft. It could be built on them, but in my mind datagrams would just be more work for the application, because you still need to control retransmissions and how they interact with congestion control, which is something QUIC streams seem to provide out of the box.
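The deadline idea discussed in this exchange (keep retransmitting a frame's stream only while the frame is still useful, then reset the stream instead) can be sketched as a sender-side policy. The per-frame-type deadlines below are invented for illustration, and the function name is ours, not from RUSH or SRT.

```python
import time

# Assumed per-type deadlines in seconds (None = always reliable). These values
# are illustrative, not from any spec: I-frames are retransmitted indefinitely,
# P and B frames are abandoned once they are too late to be useful.
DEADLINE_SEC = {"I": None, "P": 0.5, "B": 0.2}

def should_reset_stream(frame_type: str, sent_at: float, now: float = None) -> bool:
    """True if the sender should reset the frame's stream instead of retransmitting."""
    now = time.monotonic() if now is None else now
    deadline = DEADLINE_SEC.get(frame_type)
    if deadline is None:
        return False  # keyframes stay reliable, never abandoned
    return (now - sent_at) > deadline
```

In RUSH's multi-stream mode this decision maps naturally onto resetting the frame's QUIC stream, which stops further retransmission at the transport layer without affecting other streams.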
C
Okay, so was there a priority of audio or video, or some such? Was that the...
D
Well, yeah, it might be better to take it offline; we're actually over time now. I asked Kyle to put the milestones slide up because that was supposed to be our last agenda item; we're not going to get to it. So I'm going to say: last two questions, in the order they were queued. Maxine, you can come back, thank you, but we will take the milestones discussion to the list, and we'll do these last two questions in order, as long as the Meetecho doesn't disappear on us. So, should we...
L
Two questions, actually. You mentioned new codec support; I'm just wondering in what respect RUSH is going to support new codecs. The second is: for live streaming, what kind of application-layer protocol are you mainly referring to? I believe it's going to be HTTP.
J
So live streaming in this case was ingestion. But there is also a desire, the thinking on our side, eventually, to have the ingestion and delivery protocols be similar, actually be the same. We're not there yet, so ingestion was the first attempt.
L
For the live streaming part, I was wondering if HTTP is the only one you are referring to, or whether you have another one, like RTP, in mind as well.
J
Unreliable delivery on top of HTTP, that's something that at least I think would be interesting. HTTP by itself is not bad, especially with HTTP/3, which uses QUIC: there are a lot of useful things in the existing infrastructure that has already been deployed in so many places.
M
First of all, thanks for the great presentation. The topic of the Secure Reliable Transport (SRT) protocol was raised during the discussion, so from the SRT side I would like to say that we are also evaluating QUIC datagrams, and SRT over QUIC datagrams for streaming, and the main benefit of datagrams over reliable QUIC...
M
...as we see it, is the ability to control and manage the latency, because even if you want something to be delivered reliably, you still have a certain latency, and if your I-frame gets delivered, for example, one second late, then your whole stream is blocked for one second. That's something we will also present at the Friday side meeting on video ingest over QUIC, so if someone is interested, please join; we'll have some discussion there. Thank you.
E
J
Yeah, I think I can address that. I don't think datagrams and QUIC streams are necessarily different in terms of latency; at the end of the day, QUIC underneath works on top of UDP.
J
However, it's a question of what the best default mode is: no retransmissions, or retransmissions up to some certain point. And, as I said, in my mind streams are a little bit easier to work with, but again I haven't explored datagrams much yet, so I'm happy to discuss on Friday. Yeah.
D
Great, all right. Thank you, Kirill; thank you all for the questions; thank you all for saving us from a milestones discussion; and again thanks to our meeting-minutes takers. And yeah, thanks everybody for coming out and making it a lively session.