From YouTube: IETF114-MOPS-20220729-1400
Description
MOPS meeting session at IETF114
2022/07/29 1400
https://datatracker.ietf.org/meeting/114/proceedings/
A
Well, it is, it is 10:01, so we could get started, or we could wait a few minutes.
E
Are we starting, then? All right, well, we're starting then, yeah. Can you back up to the earlier slides, please? So, thanks, everybody who bravely made it to the room on Friday morning, appreciate the turnout, and thanks to everybody online as well, in your own home time zone.
E
How do I really feel? Thanks, and yes, we are still wearing masks. So please keep your mask on, and keep your mask on at the mic, and, as somebody else said in another session, unless you're standing on the pink X over here, please keep your mask on. Right, I think we know the drill. Next slide.
E
Yeah, if you have all of the resources. Here's the agenda for today. I think we have a volunteer note taker in the shape of Chris Lemmons. Thank you, but don't let that stop anybody else from getting into the shared document and helping out, yeah, in particular during Chris's presentation, but also to help make sure that we capture names and details and whatnot. Right, any bashes to the agenda?
E
And just a word about this agenda item. The working group last called this document a while ago. It's been through IETF last call and is now continuing to get comments from the IESG. So this is an update on what has happened since you last saw it.
H
Excellent, Jake is ready, so, okay. So that does imply something about, you know, how things are going forward. Next slide, please. One moment. Okay.
H
Cool. So, just in case you've forgotten where we've been: since dash 10 went to IETF last call, we had 21 GitHub issues created, most were from area review team reviewers, and very soon after that we were on an IESG ballot, like the next telechat, and we got two yes ballots, and 10 no-objection ballots, which is not surprising, and one discuss ballot, which we can talk about in more detail. That created nine GitHub issues, basically one per AD with comments or discuss positions, and so, actually, most of the comments that we've been getting have been from...
H
The
review
teams
comments
that
we
were
still
working
on
when
we
hit
the
isg
total
chat
and
a
number
of
the
ballots
on
the
ISD
tells
that
were
please
address
the
comments
from
my
area.
Team
reviewer,
so
no
surprise
there.
So
we
got
Dash
11
submitted
the
last
day,
well,
basically,
as
of
the
internet
draft
cutoff.
So
that's
basically,
where
we've
gotten
on
GitHub
comment
resolution
we
have
a
few
more.
A
small
number
of
issues
that
have
been
addressed
in
GitHub
dash.
H
Dash 12 has not been submitted yet because, you know, we submitted dash 11 at the beginning of the week, and we have two issues that remain open, just from lack of time to work on them. I don't think either one of them is particularly big, and the editors' plan is to address the last two issues and submit dash 12.
H
Fine, you know, yeah, go ahead, please, yeah. So, this is a high-level description of what changed between dash 10 and dash 11, and if you bring up the draft in the Datatracker and look at the diff from dash 10 to dash 11, you'll see this. The three things that I think are worth mentioning are highlighted here, either in what started out as yellow, or in purple text.
H
So, we synchronized the abstract and introduction sections, because our intention was to keep these synchronized, but we did have a couple of changes that were made in one place and not the other, so that got significantly better.
H
We added a lot of definitions and scoping text, and, honestly, a significant number of the comments that we got from review teams and things like that were "I don't understand if this is in scope" or "what the definition of this is", and part of that is because they were asking about specific use cases, and so we were trying to make that clearer.
H
And
you
know,
one
of
the
other
things
is
that
we
did
have
some
text
that
seemed
helpful
to
say
about
RTP,
but
that
took
us
a
little
bit
out
of
the
general
purpose,
streaming,
video
and
media
that
we
were
talking
about
in
most
of
the
document,
so
that
we
added
some
notes
about
video
bit
rates
and
explaining
what
was
going
on
there
or
maybe
a
little
bit
better.
In
the
section
that
talked
about
video
bit
rates,
we
had
some
questions.
H
We
had
some
comments
about
the
definition
of
Baseline
for
people
who
are
monitoring,
because
our
advice
was
to
say
notice
when
things
change
from
what
you
expect
to
see
and
try
to
make
it
clear
that
we're
not
thinking.
Oh,
this
is
constantly
you
know
constant
bitrate,
video
or
anything
like
that.
You
know
so
you
know
we're
expecting
fluctuations,
but
if
there's
three
times
as
much
traffic
there
today,
as
you
thought
there
was
yesterday,
that
might
be
worth
knowing
noticing
we
cut
a
lot
of
the
details
about
unpredictable
usage
profiles.
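The baseline advice being described here can be sketched as a toy check; the three-times factor comes from the example in this discussion, while the window and the traffic volumes are invented numbers for illustration only:

```python
# Toy "notice when things change from what you expect" check: compare
# today's traffic volume against a trailing baseline and flag large
# departures in either direction. Thresholds and data are made up.

def baseline_alert(history_gb, today_gb, factor=3.0):
    """Return True if today's volume departs from the baseline by `factor`."""
    if not history_gb:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(history_gb) / len(history_gb)  # simple trailing mean
    return today_gb > factor * baseline or today_gb < baseline / factor

week = [410, 395, 420, 405, 415, 400, 398]  # daily GB, ordinary fluctuation
print(baseline_alert(week, 430))   # normal variation -> False
print(baseline_alert(week, 1250))  # roughly 3x the baseline -> True
```

A real monitoring system would use a smarter baseline (seasonality, percentiles), but the shape of the check is the same.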
H
To be honest, this was something that got added early in the draft and grew as things went along. We started out, I started out, talking about issues with sudden growth of peer-to-peer networking in certain access networks, and things like that.
H
I should say here, and I should probably say several times during the discussion, that the draft is better for these comments. We added some background about adaptive bitrate streaming, which, if you're not coming at this from the streaming media community, you may not have. Like I say, we're just trying to describe, at a high level, what adaptive bitrate streaming is, before we start talking about its effects.
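For readers without that streaming background, the core mechanism can be sketched in a few lines; the bitrate ladder and headroom factor below are hypothetical illustration values, not from the draft:

```python
# Minimal adaptive-bitrate sketch: per segment, a client measures recent
# throughput and picks the highest encoding ("rung") that fits. Real
# players add buffer models, smoothing, and abandonment; this is the core.

LADDER_KBPS = [300, 750, 1500, 3000, 6000]  # hypothetical encoding ladder

def pick_rung(measured_kbps, ladder=LADDER_KBPS, headroom=0.8):
    """Choose the highest rung at or below headroom * measured throughput."""
    budget = measured_kbps * headroom
    candidates = [r for r in ladder if r <= budget]
    return candidates[-1] if candidates else ladder[0]  # floor at lowest rung

for throughput in (5000, 2000, 250):  # network conditions vary per segment
    print(throughput, "kbps measured ->", pick_rung(throughput), "kbps rung")
```

This per-segment rate switching is what produces the fluctuating traffic patterns the draft discusses.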
H
We
added
competing
goals,
text
for
personalized
ad
insertion-
and
this
is
this-
was
in
response
to
Romans-
discuss
the
only
discussed
belt
that
we
had
and
this
the
topic
of
personalized
ad
insertion
came
up
during
the
sector
review,
which
was
actually
submitted.
H
After
the
isg
telechat,
where
this
was
valid
and
that's
fine
so
like
I
say
we
were
working
on
that
and
we'll
I
think
I
well
I'll
wait
to
give
an
update
on
that
until
we
get
to
the
last
slide,
I
think
I,
basically
re
so
I
basically
rewrote
section
six,
which
was
talking
about
the
effect
of
transport
protocols
and
I
added
the
idea
of
media
transport
protocols
at
the
beginning.
At
the
beginning
of
section
six,
we
can
talk
about
that,
but
it
seemed
like.
H
It
seemed
like
a
helpful
thing
to
me,
because
we
had
protocols
that
might
be
right
underneath
the
media
or
it
might
be,
underneath
another
protocol.
That
was
right
underneath
media
all
the
way
down
so
trying
to
be
clear
about
what
the
first
protocol
was.
That
was
right,
underneath
media
and
a
protocol
stack,
and
everything
else
is
what
that's
trying
to
get
at.
H
We
provided
a
number
of
clarifications
about
media
encryption,
about
tunneling
and
about
vpns
and
about
several
other
things
and
I
think
that's
section
seven
and
we
made
so
so
so
so
many
additions
in
the
acknowledgment
session.
The
next
slide
is,
would
would
show
you
what
that
looked
like.
H
Please pay special attention to the changes that are in color on slide three. They are the ones that probably have the most likelihood of surprising people in the working group. And then the plan is to address the last two issues, submit dash 12, and ask Roman if we have addressed his discuss comments. This is, yeah, Jake, you want to say a word about this?
K
Sure, Jake Holland. Yeah, I cornered Roman on his way out of the plenary yesterday, or the day before, I guess. He said that, you know, he's a very busy guy and has had trouble wrapping this up. He got a little hung up because he is trying to propose text; there's some subtlety he wasn't able to cover.
K
He
had
lost
contacts
with
some
of
his
other
responsibilities
at
the
moment,
but
he's
working
on
some
texts
that
he
wants
to
suggest
for
for
finishing
up
his
discuss
comments
so,
but
he
said
he
thinks
it's
addressable
and
and
is
trying
to
get
there.
He
said:
Eric's
been
poking
him
every
few.
H
Could we give a short round of applause for Eric, cornering Roman?
H
And
one
of
the
advantages
of
coming
to
the
ietf
in
person
is
that
you
can
actually
Corner
area
directors
between
the
stage
on
West
United
and
the
restrooms
and
bars.
E
It doesn't look like the caffeine has hit the bloodstream yet. I did make a note to send to the mailing list, to follow up your particular points about checking the document, especially the highlighted items in this slide deck.
H
Actually, so, Kyle is presenting, just like I said, the diff between dash 10 and dash 11, and if you want to just kind of fly through that at a relatively slow rate, it gives people ideas about what changed and what didn't. And, like I said, you can just kind of slide through, but there's a lot of text that did not change.
H
It's
not
what
he's
showing
right
now,
yeah
see,
there's
a
lot
of
stuff
that
didn't
change
and
there's
there's
some
say:
there's
a
lot
of
stuff
that
was
deleted,
especially
the
stuff
in
section
six
and
the
stuff
in.
Is
that
section
three
or
four
on
unpredictable
usage
profiles.
A
lot
of
deletions
in
both
of
those
so
like
I,
said:
there's,
there's
a
significant
number
of
words
that
changed.
H
Reviewers
that
I
think
there
is
I,
think
there's
one
isg
member,
that
I
did
not
have
a
GitHub
id4
and
everybody
else
is
tagged
in
the
conversation.
So
it's
not
like
Jake
and
Ali
and
I
went
off
in
a
corner
and
just
started
typing,
but
so
this
is
actually
had
eyes
on
it.
Just
maybe
not
the
working
groups
Eric.
G
Yeah, Eric Vyncke, as an AD. So, thank you for all the changes. As you have seen on the diff, yeah, they are not small changes. I've read through them; it's mostly editorial clarification, so it doesn't really deserve another last call, because you changed nothing substantial technically, no extensions. But I would really encourage this working group to review the document while we still can change it, to revert some changes. Sure.
H
Sure, yeah, and, like I said, we've still got two issues to resolve that are in GitHub.
H
Jake has started reminding me that, at this point, with the document having left the working group, we should be trying to respond to comments rather than trying to continue to evolve the document. He has also encouraged me to behave, and I'm doing my best to do that as well.
H
And if we are bikeshedding word text in the document, that may be a fine place to stop.
H
Right, yeah, thank you, and thank you, everyone in the working group who commented, and thank you to all the reviewers who have helped to improve this document. Great.
I
Hello, can you hear me?

E
Yes, we can.

I
Thanks, Leslie. Hi, everyone, my name is Renan Krishna, and I will be presenting an update to our draft. This is joint work with Akbar Rahman. Next slide, please.
I
Many thanks to Sanjay Mishra for the feedback on the mailing list yesterday, in particular asking for clarifications in the abstract and introduction. In the abstract, we have added a couple of lines. The first line specifies that we would want to discuss the expected behavior of XR applications in terms of the workload that a network operator can expect in a use case such as ours.
I
The updates in the abstract are further elaborated in the introduction. Sorry, Leslie, the previous slide, slide four. Yeah, so the updates in the abstract are further elaborated in the introduction. We point out that the XR traffic's workload on the operator's network will have parameters that are heavy-tailed, making it hard to predict the resources that should be operationalized.
I
Examples
of
such
parameters
include
the
amount
of
data
carried
in
the
connection.
Connection
duration
burst
length
idle
time
Etc.
Secondly,
we
point
out
that
service
requirements
of
Exon
applications
will
have
qoe
factors
such
as
the
need
to
avoid
motion
sickness
that
will
be
unique
to
them
next
slide.
Please.
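One way to see why heavy-tailed parameters make provisioning hard is to compare the mean with a high percentile of a heavy-tailed distribution; the Pareto shape parameter below is arbitrary, chosen only for illustration, not taken from the draft:

```python
# Illustration of a heavy-tailed workload parameter: with a Pareto-like
# distribution, provisioning for the mean badly under-serves the tail.
import random

random.seed(7)
alpha = 1.5  # arbitrary shape; smaller alpha means a heavier tail
samples = sorted(random.paretovariate(alpha) for _ in range(100_000))

mean = sum(samples) / len(samples)
p99 = samples[int(0.99 * len(samples))]  # 99th percentile
print(f"mean ~ {mean:.1f}, 99th percentile ~ {p99:.1f}")
# The p99-to-mean ratio is large, unlike for light-tailed workloads,
# so capacity sized to the average is routinely overrun.
```

The same comparison for a light-tailed distribution (say, exponential) gives a much smaller ratio, which is the operational point being made.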
I
So
Section
5
is
where
we
propose
to
elaborate
on
the
abstract
and
introduction.
If
the
working
group
agrees,
section
5.1
will
elaborate
on
the
problem
of
what
the
network
operator
can
expect
in
terms
of
arvr
or
XR
traffic
workload
to
be
able
to
make
informed
decisions
on
resource
deployment.
I
So, picking up the discussion from IETF 113, we are trying to identify issues in XR media delivery, in an operational sense, that would be within the scope of the MOPS working group, and it would be great to get some pointers from the working group.
I
So,
for
example,
the
first
issue
is:
is
offloading
to
the
edge
the
only
solution
to
deal
with
the
problems
of
heat
dissipation
of
the
XR
devices
and
The
Limited
battery
power,
as
they
run
computationally
intensive
tasks.
In
our
use
case
now,
our
draft
focuses
on
leveraging
the
edge
devices,
but
we
welcome
inputs
from
the
working
group
that
discuss
other
pertinent
operational
ways
that
could
be
Solutions.
I
Secondly,
for
the
edge
Computing
solution
for
our
use
case.
What
kinds
of
underlying
technology
do
we
envisage,
for
example,
5G
or
Wi-Fi?
How
will
the
XR
media
delivery
scale
if
we
increase
the
number
of
users
how
to
ensure
that
the
edge
servers
are
adequately
provisioned,
for
example?
How
many
servers,
what
topology
to
connect
them
where
to
place
them?
What
capacity
to
assign
to
the
links
Etc?
I
How
to
migrate
users
XR
media
State
as
they
move
how
to
operationally
respond
to
changes
in
bandwidth
creation
and
destruction
of
logical
associations
between
software
components.
Etc.
I
This,
of
course,
is
not
an
exhaustive
list,
but
we
wish
to
prioritize
the
most
important
issues
that
the
working
group
fields
are
appropriate
for
inclusion
in
the
draft,
and
it
would
be
great
to
have
everyone's
feedback
please.
So,
with
the
chairs
permission,
we'd
like
to
open
the
floor
for
a
discussion.
E
Yes, please. I think I'll give people the opportunity to get in the queue, and I'd appreciate it if people would use the Meetecho queue.
E
We
had
some
good
discussion
around
this
document
at
the
last
ietf
and
then
we
sort
of
foundered
in
terms
of
follow-up
on
the
list,
so
I'd
like
to
make
sure
that
we
that
we
take
things
in
real
time
so
go
ahead.
Please.
M
Are
you
two
two
quick
things
on
the
one
on?
On
the
one
hand,
I
would
have
I
was
I,
was
reading
through
the
dot
and
I
would
have
loved
to
see
some
numbers
we
all
know
on
ranges
or
typical
characteristics
of
of
AR
traffic.
We
all
know
that
there's
many
different
flavors
and
resolutions
and
whatnot,
but
I
think
it
would
be
helpful
to
be
more
specific
in
these
cases,
because
that
ultimately
also
governs
what
you
can
do.
B
So, Cullen Jennings, I'm with Cisco, and I don't normally talk about Cisco products, but I'll talk about one just briefly here, in the context of providing you some numbers to try and answer that question. So, it's a product called Webex Hologram.
B
It
records
me
as
a
or
records
a
user
with
a
set
of
light
field
cameras
as
a
light
field,
transmits
it
up
to
the
cloud
transmits
it
down
to
an
AR
headset,
and
then
you
see
the
person
sitting
across
the
table
from
you
as
an
AR
hologram
and
it's
a
it's
a
very
photorealistic
system.
If
you
want
to
see
videos
of
it,
you
know
Google
WebEx
hologram,
it's
it's
shipping
to
customers
now.
B
So
in
the
context
of
that
there
are,
you
know,
I
have
specific
numbers
of
what
of
what
we're
doing
on
that.
We
can
share
some
of
those
now
I'll
share
some
of
those
in
a
minute.
But
there
are
you
know:
we've
we've
watched
and
followed
what
a
lot
of
different
vendors
are
doing
in
this
space
and
I
think
this
draft
would
benefit
by
talking
about
a
little
like
I
I.
Think
that
now
the
industry's
evolved
enough
that
you
can
start
talking
about
more
details
on
how
this
works.
B
So
there
are
three
major
ways
that
we
see
this
type
of
a
XR
based
images
and
data
coming
across
the
network
and
different
vendors
have
taken
very
different
approaches
to
it.
I
would
say
a
bunch
of
vendors
have
gone
down.
The
texture
mapped
polygon
approach.
So
what
they're
sending
is
you
know
a
bunch
of
polygons
in
the
texture
maps
for
them
and
the
texture
maps
are
updating
with
the
video
stream
and
the
polygons
are
updating
with
some
mesh
data
and
that's
a
that's.
That's
one
representation.
We
see
widely
used.
B
We
see
a
representation
widely
used
as
Point
clouds,
where
you're,
basically
sending
a
bunch
of
teeny
points,
a
huge
cloud
of
them.
Each
point
has
a
color.
You
can
think
of
it
as
a
little
teeny
colored
sphere
when
you
render
it
and
it
results
up.
That
has
a
different
data
characteristic,
so
both
of
those
have
very
different
bandwidth
and
data
cursors,
and
then
the
third
one
that
we
use
that's
very
popular,
magically
Cisco.
Some
others
is
this
light
field
approach.
B
Where,
with
the
light
field,
you
can
think
of
it
as
almost
sort
of
You,
Know,
five-dimensional
video
or
something
like
that,
but
it
is
we're
trying
to
send
the
color
of
every
ray
of
light
going
through
every
point
in
the
room
in
every
different
direction,
and
you
might
be
like
that's,
and
so
it's
that's
the
field
is
this
five-dimensional
field,
and
you
compress
it
very
much
like
you
compress
video
everybody
takes
an
existing
video
standard
and
extends
it
to
do
the
multi-dimensional
slicing
approach.
B
So
the
so
I
think
the
interesting
thing
on
these
numbers
is
so
since
we're
talking
using
the
light
Fields
I'm
going
to
talk
about
numbers
in
that
space,
you
might
think
that
that's
a
huge
amount
of
data,
this
five-dimensional
space-
it
actually
has
an
incredible
amount
of
redundancy
in
it,
so
it
compresses
incredibly
well
we're
seeing
it
use
about
10x
more
than
the
standard
video.
B
So
as
a
very
specific
number
of
what
our
system
is
using
today,
we're
using
about
30
megabits
per
second
Upstream
of
recording
the
light
field
of
a
whole,
the
whole
image
of
the
scene
and
setting
it
up.
That's
a
very
specific
metric
now
on
the
downstream.
You
don't
need
to
send
anywhere
near
as
much
data
that
you're
sending
to
the
headset,
because
you
can
you.
B
The
headset
has
a
very
localized
point
of
view,
and
you
only
need
a
small
portion
of
the
light
field
to
make
to
cover
everywhere
that
the
person
can
move
that
you
need
to
display
in
any
reasonable
amount
of
time.
Of
course,
you've
got
to
be
able
to
keep
updating
where
the
user's
head
is,
so
you
can
change
where
that
is,
but
that
reduced
light
field
it
where
we're
seeing
it
at
about
2x.
B
What
the
typical
video
of
a
relative
thing
is
so
there
we're
using
about
six
megabits
per
second
is,
is
what
we
use
on
a
download
there.
Now
on
the
latencies
of
these
things,
I
think
the
draft
gets
really
a
lot
of
the
latencies
and
power,
and
you
know
those
types
of
things
very
right.
However.
I
think
that
it
sort
of
occasionally
confuses
the
issue
of
what
the
update
rate
is
like
hey
the
screen's,
going
to
update
every
15
milliseconds
or
with
we
need
that
much
time
to
update
the
screen.
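The relative figures quoted here (roughly 10x standard video for the full light field upstream, and about 2x for the view-localized downstream) fit together in a quick back-of-the-envelope check; the 3 Mbps "typical video" reference point is an assumed figure for illustration, not something stated in the session:

```python
# Back-of-the-envelope check of the light-field bandwidth ratios quoted
# above. The 3 Mbps "typical video" reference point is an assumption.
TYPICAL_VIDEO_MBPS = 3.0   # assumed reference point for standard video
FULL_FIELD_FACTOR = 10     # full light field, upstream (as quoted, ~10x)
LOCALIZED_FACTOR = 2       # view-localized field, downstream (as quoted, ~2x)

upstream = TYPICAL_VIDEO_MBPS * FULL_FIELD_FACTOR    # whole-scene capture
downstream = TYPICAL_VIDEO_MBPS * LOCALIZED_FACTOR   # per-headset delivery
print(f"upstream ~ {upstream:.0f} Mbps, downstream ~ {downstream:.0f} Mbps")
```

With that assumed baseline, the multipliers reproduce the 30 Mbps upstream and 6 Mbps downstream figures given in the discussion.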
B
The system that we're doing has, you know, end-to-end latencies that are comparable to video conferencing systems, because it's very similar: it goes, you know, up to a cloud running on AWS, and then back down to an end user. It will run today over, you know, we'll run it over LTE. Actually, you don't really need 5G to make this work; that's our experience, anyway. Now, some of the latencies, the latency about the motion sickness and everything, that is very real.
B
You
have
to
keep
that
well
under
you
know
the
sort
of
12
milliseconds
that
you
you
expressed
in
there,
but
that's
why
people
are
doing
all
the
compute
on
the
head
and
the
compute
and
battery
power
was
a
real
issue
to
start
with,
but
as
each
evolution
of
headset
and
processors
come
out,
it's
way
less
of
a
problem
on
any
of
the
the
latest
generation
of
headsets.
So
if
you
look
at
a
magically
two
headset,
it
is
just
it's
a
phenomenally
better
situation
than
it
was
now.
B
Of
course
everybody
wants
to
make
the
headset
smaller
and
lighter
and
less
battery,
so
I'm
not
saying
that
that
isn't
a
constant
problem,
it
always
is,
but
they
won't
be
sort
of
changing
up
so
I
hope
that
helps
with
numbers
a
little
bit
I
and
I'm
glad
to
discuss
any
more
with
those
with
anybody,
who's
interested
one
bit
of
detail
in
the
draft
that
seemed
wrong
to
me
and
I.
B
Sorry,
I
didn't
send
an
email
on
this,
but
it
mentions
this
zero
point
that
that
three,
that
the
five
some
of
the
three
gpp
specs
are
are
it's
reliability,
one
and
reliability.
Two
references
say
that
they're
going
to
do
like
five
nines
over
0.5
milliseconds
I
mean
that's
only
150
meters
at
the
speed
of
light.
I
find
that
very
hard
to
believe
and
when
I
went
and
read
those
specs
I
couldn't
find
any
reference
to
those.
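As a quick sanity check on the propagation arithmetic behind that point, the distance light covers within a latency budget is straightforward to compute (free-space speed of light assumed; signals in fiber travel roughly a third slower):

```python
# Distance reachable within a latency budget at the speed of light,
# as a sanity check on tight reliability/latency claims.
C_M_PER_S = 299_792_458  # speed of light in vacuum, meters per second

def reach_km(budget_ms, velocity_m_per_s=C_M_PER_S):
    """One-way distance, in km, coverable within budget_ms milliseconds."""
    return velocity_m_per_s * (budget_ms / 1000.0) / 1000.0

print(f"{reach_km(0.5):.0f} km one-way in 0.5 ms (vacuum)")
print(f"{reach_km(0.5, C_M_PER_S * 2 / 3):.0f} km one-way in 0.5 ms (fiber)")
```

Processing, queuing, and round trips shrink that budget further, which is the substance of the skepticism being voiced.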
B
So
I
suspect
that
those
may
have
been
there
at
one
point
in
time,
but
the
latest
version
of
the
specs
may
have
moved
them
out.
So
I
think
that
we
should
I
I.
Think
there's
a
lot
of
things
in
this
draft
that
come
from
things
5G
might
have
hoped
could
be
true,
but
not
things
that
they
were
promising
and
the
draft
might
confuse
them
a
little
bit.
We
might
just
need
to
like
really
scan
through
all
the
numbers
in
the
draft
and
make
sure
they
actually
check
with
the
references.
I
Thank you, great points. Regarding the 3GPP reference, I'll check the numbers again, and you've made fantastic points about the various devices that are available. One question I had was: have these devices been tested in outdoor environments, where, let's say, there are clouds and the light is changing, and those kinds of scenarios?
B
They
so
they
have
not
been
so
I
was
speaking
only
for
the
work
I've
done
is
and
not
been
tested
in
in
super
outdoor
environments,
but
it
has
been
done
in
classic
video
con,
like
in
in
office
rooms
with
like
Windows
All
Along
The,
Far
Side,
where
you
have
changing
Cloud
lighting
and
lots
of
stuff
like
that,
you
can
have
very
flexible,
you
know
very
diverse
lighting
and
it
still
works
quite
well,
but
that's
an
advantage
of
the
light
field
approach.
B
I
think
as
you
move
to
the
the
polygon
and
the
texture,
mapping
approaches
it's
really
all
about
your
ability
and
look
I've
done
a
lot
of
work
on
both
of
those
we've
tried
all
three
of
these
brooches
before
we
went
forward
whether
we
did
I
I
think
that
you
end
up
with
how
good
your
depth
map
recovery
is
and
obviously
depth,
map
recovery,
I've
seen
people.
B
Obviously
people
are
doing
good
depth
map
recovery
on
outdoor
scenes,
but
it's
a
lot
harder
than
doing
on
a
controlled
lighting
type
environment,
so
I
think
I
I
think
it's
one
of
those
still.
You
know
your
mileage
varies
widely
and
you
get
better
depth
and
apps
with
more
controlled
lighting.
If
you
go
down
that
approach,
the
light
field
approach,
you're
just
capturing
the
color
of
the
light
in
the
room-
and
it's
like
you
know
it's
it's
less
dependent
on
clouds
or
fans,
spinning
lights
or
those
types
of
things
in
some
other
systems.
E
Great, thanks, Cullen, and, I guess, one observation is that, probably, Renan, you're going to have to go through the recording of this session if you didn't catch all of that in real time, because I get the sense that there is an awful lot of depth of comment in what Cullen had to say. One specific question to Cullen: are you suggesting that the document tease out the separate types of XR, I mean, the polygons versus raster versus streaming?
B
Certainly. You know, the vast majority... I think that talking about that makes it easier to understand why you're seeing such different numbers from different people, and it has to do with those techniques.
H
So
I'm
getting
up
here
to
agree
with
everything
Cohen
said,
but
I
I
did
want
to
to
thank
the
authors
for
their
continued
sense
of
humor
hanging
with
this
draft
I
think
Leslie
was
exactly
right
that
we
had.
You
know
the
the.
What
we're
trying
to
describe
is
getting
to
closer
to
the
maturity
level
that
we
actually
could
describe
it,
and
so
I
think
that
I
think
that
this
is
a
good
time
for
us
to
push
a
bit
further.
H
I
would
say
you
know,
you'd
notice,
that
in
the
opscom
draft,
the
one
of
the
one
of
the
bigger
changes
that
was
there
was
our
description
of
video
bit
rights
and,
and
things
like
you
know,
and
character,
characterizing.
Those
I
think
that
that's
a
really
good
model
to
include
in
the
in
the
in
your
draft
as
well
and
I'm.
Sorry.
H
Okay
and
the
the
last
thing,
the
last
thing
I
would
say:
the
there
is
nothing
that
will
help
people
contribute
numbers
as
much
as
putting
in
one
set
of
numbers
and
letting
and
letting
Leslie
and
Kyle
shop.
Those
numbers
around
and
say
is
this:
what
is
this,
what
you're
seeing
and
if
that
you
know,
if
you've
got
that
in
the
draft,
it
really
makes
it
easier
for
the
chairs
to
make
progress.
In
my
opinion,
thank
you.
E
Great, thanks. So, yeah, Kyle has taken himself out of the queue, and he's also locked the queue, because we're targeting terminating this section around quarter two, but go ahead, Mo.
L
...you know, the most popular use cases, and then derived some requirements and conclusions from that, and I don't think it really is very representative. What Cullen just described is very, very different; the way that the draft describes the rendering of scenes and images is very, very different from the system that Cullen described, which I'm familiar with, and even that system has morphed many times, and will morph many more times, so I don't think the draft could accurately capture all those permutations.
L
And
if
you
look
at
the
things
that
really
matter,
you
should
look
at
what
matters
to
the
local
devices
should
look
at
what
what
the
trends
are
on
the
local
device
and
how
that's
dictating
Network
requirements
and
the
the
typical
Trends
are
increased
frame
rates,
you're
getting
refresh
rates
of
around
120
hertz
on
all
new
devices.
So
that's
eight
millisecond
refresh
and
it's
unlikely
that
a
lot
of
edge
cases
are
going
to
allow
eight
millisecond
round
trip
with
processing
involved
in
everything.
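The frame-budget arithmetic in that point is easy to make concrete; the refresh rates below are common headset values used for illustration:

```python
# Refresh rate -> per-frame time budget. At 120 Hz there are roughly
# 8.3 ms per frame, which any remote round trip plus processing would
# have to fit inside to update every frame.
def frame_budget_ms(refresh_hz):
    """Time available per displayed frame, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 90, 120):  # common display refresh rates
    print(f"{hz} Hz -> {frame_budget_ms(hz):.1f} ms per frame")
```

This is the arithmetic behind the skepticism that edge offload can sit inside every frame at high refresh rates.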
A
Are you basically arguing that there's sort of a minimum viable amount of computing required locally, you know, in order to make XR viable against motion sickness, etc.?
E
Okay, thank you very much, Renan, and so I'd ask you also to note that Sanjay Mishra did send some comments on the mailing list recently. So, I think that gives plenty to follow up on. Thanks, everyone.
E
Okay, so then, next up, we are switching over to related IETF work, and we have a report from the hackathon, which I believe Jake is going to do.
K
So, we were working on... so, we have a multicast QUIC draft. We presented about that in the QUIC working group; hopefully we'll get some discussion there. We were working on an implementation at the hackathon; that is still in progress. I think we're making good progress. We believe it to be possible, and we've learned some good things that have gone into the updates to the draft that we've made so far. We've been looking toward, thanks to Luke...
K
Think
he's
here
about
the
warp
server
he's
posted
I.
Think
that
would
be
a
really
good
place
for
us
to
to
go
with
that,
because
it's
the
the
quick
multicaster
after
relies
on
server
initiated
data
and
warp
uses
the
server-initiated
data
in
such
a
way
that
we
think
we
might
be
able
to
apply
it
and
that
will
be
way
better
than
writing
our
own
player.
K
Also,
so
we're
we're
hoping
to
have
you
know
some
some
useful
demo
running
at
some
point:
we've
been
working
on
it
in
the
w3c
multicast
community.
K
That meets... well, the working team for that has been meeting weekly, if anybody wants to join in, but we also meet monthly, just online, so you can check that out, anybody that wants to get involved in that. And is there anything else... I would say it's not done yet. I wish it was further along, of course, but it's coming along, and we're hopeful that this is actually a plausible thing to do.
K
...at that point, or something, rather than trying to get the basic functionality.
K
But, yeah, so we got the draft up in maybe May or so, I think, and then we've been, like, hammering out, you know, bits and pieces of that, kind of as time permits. I think, for none of us is this our main project, but we would all like to see it work, so we've been hacking on it when we can.
K
It's
not
primarily
a
you
know,
a
media
thing
necessarily,
but
I
mean
that's
the
use
case
that
I
have
mostly
so
so
that's
what
I'm
I'm
trying
to
hit
a
demo
that
will
that
will
be
able
to
do
that
right.
E
Thank
you,
Jake
all
right,
other
ietf
work
going
on.
There
is
a
media
over
quick
buff
this
week,
which
I'm
sure
many
of
you
are
also
at
but
Ted
Hardy
is
here
to
give
us
a
brief
overview
of
how
that
went
down.
Ted.
C
Howdy
Folks
thanks
very
much
for
the
time
today,
so
the
this
was
actually
the
second
buff.
There
was
a
previous
off
that
discussed
the
idea
of
forming
this
and
the
on
the
mailing
list
has
been
around
for
a
while.
So
as
a
second
boss,
that
was
very
much
focused
on
the
the
chartering
aspect
of
the
discussion.
So
almost
all
of
the
time
was
taken
going
through
the
charter
and
getting
various
improvements
made
into
it.
C
I've
just
put
a
link
to
the
the
GitHub
repo
where
the
charter
is
with
the
current
state
of
the
charter.
One
of
the
changes
suggested
in
the
room
and
adopted
in
the
room
was
to
make
sure
that
moq
did
a
liaison
with
mops,
and
so
that
and
a
couple
of
other
kind
of
key
changes
went
in
as
a
result
of
the
interventions
in
the
room.
C
We
then
went
through
the
buff
questions
and
there
was,
generally
speaking,
quite
strong
agreement
that
forming
a
working
group
based
on
the
the
recently
changed.
Charter
was
a
good
idea.
C
There
are
still
a
couple
of
issues
that
were
unresolved
in
the
discussion
and
that
will
need
to
be
resolved
as
it
goes
through
the
iasg
process,
but
I
think
we're,
anticipating
that
the
iesg
will
will
take
it
to
the
broader
Community
for
commentary
relatively
soon
and
I
do
see
quite
a
few
people
in
the
room
there
that
were
also
in
that
room
if
they
want
to
to
give
their
own
Impressions
or
if
they
want
to
make
Corrections.
O
Hey
hey
on
here,
I'm
clicking
buttons
and
like
hey
so
I'm
going
up
here
for
live
video
for
like
at
one
moment
for
the
entire
ITF
this
week,
so
hello,
everybody,
Ted,
hey,
I'm,
I'm
glad
we
got
that
connection
established
between
Mock
and
mops
and
I
think
that's
a
really
good
thing
sort
of
any
thoughts
on
how
we
make
that
sort
of
move
Beyond
like
into
reality
versus
like
hey.
O
Let's
put
this
in
the
in
a
charter
any
thoughts
about
how
operationally
we
might
make
that
really
working
well,
because
a
big
part
of
mops,
as
you
know,
is
to
bring
in
Industry
engagement,
one
of
the
things
I
see
in
the
mock
work.
That
worries
me
a
little
bit
is
that
there's
some
industry
engagement
but
I
like
to
see
more
industry
engagements,
especially
from
professional
media,
delivery
and
creation,
and
so
how
do
we
do
it
even
better?
Any
thoughts.
C
I
think
it's
it's
always
a
little
bit
easier.
One.
So
group
has
been
formed
to
convince
people
that
it's
worth
their
time
well,
something's
still
in
the
mailing
list
stage.
I
think
it's,
it's
always
a
little
bit
harder
to
say:
hey.
C
You
should
be
watching
this
as
it
as
it
comes
into
Focus,
so
I'm
hopeful
that
we
will
get
some
increased
engagement
simply
from
having
a
charter
approved
and
people
knowing
that
we're
working
on
the
protocol,
mechanics
and
state
machine,
and
all
of
that
I
do
agree
with
you
that
a
lot
of
what
we
have
right
now
are
people
who
are
very
familiar
with
the
networking
side
and
maybe
not
as
familiar
with
the
operation
side
or
with
the
aspects
of
the
media
creation
for
those
those
parts
of
the
charter
which
are
focused
on
on
streaming.
C
So
I
do
hope
that
some
of
the
folks
who
do
have
those
connections
can
help
them
bring
a
friend
so
to
speak
and
and
go
there.
I
also
anticipate
that
now
that
we
we've
established
that
there
needs
to
be
a
connection
here,
that
a
regular
Cadence
of
making
sure
that
mops
is
invited
to
any
moq
interims
or
knows
about
kind
of
the
Milestones
that
we're
hitting
that
are,
maybe
not
the
big
level
of
Milestones
of
you
know
sending
something
to
the
iesg.
C
but, oh, by the way, there is an update to this document; if you're not on the list, you might want to take a squint at it, as it does change some things that may be of interest. That's really the usual mechanics that you get in the IETF when there's a fair amount of shared membership. And I'm not really thinking at this point
C
of anything that would make a big push out into the broader industry. But if you think that's valuable and have ideas of how to do that, please let the area directors know at this point, because it's still in their hands.
C
Great, thanks very much. I'm glad to hear that. And I guess —
C
one of the things we mentioned in the room, just because we didn't get to any technical presentations or conversation, is that we may in fact try to hold some sort of interim between now and IETF 115, just to make sure that people who have been working on the technical aspects of MoQ for a while get a chance to share their thoughts. And, of course, we'll make sure that when that gets discussed, MOPS is involved and understands when it's going to happen, and the logistics.
C
A lot of it depends on how fast the IESG goes through the rest of the processing and what the comments are from the broader community. So if the IESG and the broader community say, yeah, this all looks good, and we're ready to hold something that could be hybrid in, let's say, September, then I think hybrid would be the right way to go, because there's obviously a lot of energy you get when people can share their ideas in real time in a space.
C
The current reality is that any such meeting will obviously also have a remote component; we're just not at a point, either for travel — for a variety of reasons — or for COVID, where we could rule that out. But I think if it takes a little bit longer — you know, if August turns into kind of a time when very little goes on, and the IESG processing and so on happens mostly in September —
C
then we'd definitely be looking at remote, just to make sure we'd get the advantage of the quicker logistics. So that turns into time-zone math more than any other kind of logistics, and that's never easy, but it's a little bit easier than adding time-zone math to, you know, finding a hotel and making venue arrangements and doing all of that other stuff.
O
If I can put one idea out there: in order to help connect with industry, one of the things that might be good is to be sensitive to industry events. We say September — I'll point out that IBC happens in September. It's a very big industry event, and basically people disappear for about a week and a half during it. So if there was overlap, the ability to get industry engagement might be very difficult during that window. Just FYI.
O
Sometimes — I mean, it starts just after Labor Day, so I think it's like September 10th or 11th, because Labor Day is late this year, and then it runs over the weekend, typically into the next week. People do typically run things back-to-back on either side; it depends upon the year, other events that are taking place, and sometimes holidays.
C
Okay. And obviously it would be very quick work to try to get something pulled together for an interim right around Labor Day; if it was September, it's more likely to be later. But if you wouldn't mind sending the details, and cc'ing Allen, I will make sure that the area directors get a hold of those.
E
Great, thank you. I think it's time to move along on the queue. Spencer?
H
Spencer Dawkins. I just wanted to do a couple of things. One was to say very clearly at a microphone that I thought the MoQ BoF went very well, and I appreciated very much Ted and Allen and the MoQ community for that. The other thing I would mention — I was just following up for Glenn, as he's thinking about how to talk through the
H
pathway to other operator forums and things like that — is to say that the charter, the last time I looked at it, does have a requirements draft listed there, and at some point that might be very interesting for the operator community to take a look at, at the appropriate time.
C
That's what Spencer was looking at, so that's definitely in the charter now. I would be a bit surprised to see the IESG cut it in their analysis of the charter, but we'll have to see — I've been surprised so many times over the past few years, I just don't rule it out anymore.
B
Cullen here. I just wanted to respond with a little bit more data to Glenn. I mean, it would definitely be great to see — I think there are some people, and I want to talk about them in a second, that are operating large video networks who are very involved with the MoQ stuff.
B
They already have good solutions to most of their problems, so — but still, it'd be good to hear more from that sort of classic group. But, I mean, there are three proposals being proposed into that working group right now. One is from WebEx, which is doing a billion minutes a day — which is a drop in the bucket compared to what Meta is doing.
B
Meta — I couldn't find recent numbers for them, but several years ago they were doing six billion minutes a day, so I'm sure they're doing a crap-load more than that now. And that's like a drop in the bucket compared to Twitch, which is the third proposal in the group; they've got something like 2.6 million viewers live on it right now, as we speak. So I think there's a fair amount of operational experience involved in that working group at some level, and I just wanted to pass that on. Thanks. Yeah.
E
So, I don't want to put words in Glenn's mouth or anything, but I think that part of the problem is that some of the people not necessarily represented there from the industry are not the ones operating the big networks you mentioned, but are the ones who are shifting data around on IP networks for other reasons. And the problem the IETF has is that they don't have their own solution here — and we should be very afraid if they roll their own. So it's as much that we should have them — go ahead, Cullen.
E
So the point I'm trying to make is: not everybody who uses IP protocols is actually involved in the IETF — shocker; that's been true for 20 years. But part of the point we raised when we formed MOPS, and in everything leading up to it, was the fact that there is a lot of video being shipped over IP, whether or not it's in the traditionally internet-oriented companies. And so what we're saying now is that some of those companies still shipping video over IP should make use of more IETF protocols.
E
MoQ might be creating one of them, so it would be better if it was informed by their use cases, rather than being yet another solution that works very well for those who are already involved in the internet, while we have more crap out there that — sorry, I have to remind myself to breathe.
E
We have more crap out there that abuses the Internet protocol stack as we know it. So in some ways it's more in our interest to make sure that we can loop more industries into using the internet well, than to say anything against what's going on in MoQ. Maybe that's clear. Yeah.
B
Right, okay. So that's why I mentioned I'd love to see somebody like the BBC — I mean, they're an example of that, right? For sure they clearly do it. But I work with them on a regular basis, and I think the thing is, they have a class of solutions that are very applicable to their stuff, and it's a very standardized area — it's just not standardized at the IETF. And I'm not sure that what we're doing in MoQ aligns at all with their needs. So —
O
Pushing buttons — so, you know, Cullen, the vision I'm trying to put out here is that, while there are use cases today — and I think you're right, this doesn't satisfy some media use cases, absolutely —
O
on the other hand, media companies have really branched out into a lot of new areas, and even if they're not in that business today, it's one of their storefront items. And I think there may be useful connections that could be made here. Ultimately — I'll be very honest here — I have two goals. I want to get them into the IETF, because that means more participation, and that means better standards. And I also want them using IETF stuff more, because that means fewer
O
in-house solutions and more standards, which is good for everybody. And I see this as the early stages — since it is early stages; I realize you've done a lot of work on it, but it is still early stages from the IETF's perspective. It's a great opportunity to sort of wrangle them in, make those connections, and get them engaged. It won't all happen overnight, and we won't get everybody in the room all at once. But, you know, it's a grind, right? So anyway, I hope —
C
The biggest thing is: bring a friend, right? So if there are people you see who need to be in that room, make sure that you encourage them, make sure we know that we need to encourage them — and we're definitely willing to do that. Thanks.
P
Okay. So, first off, all of these drafts are the combined effort of the SVA and the CTA — the Streaming Video Alliance and the Consumer Technology Association, specifically the Web Application Video Ecosystem (WAVE) group — working together to produce these standards.
P
The goal here is to carry data that CDNs can use operationally to make decisions about how they are going to handle the media more intelligently. It's split into two headers: the CMSD static header, which is data about the media object itself, and the dynamic header, which is data about the connection that is transferring the object. Next slide.
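The static/dynamic split described above can be pictured with a rough example. The header names follow the CMSD work, but the specific keys and values below are invented for illustration, not taken from the specification:

```http
HTTP/1.1 200 OK
Content-Type: video/mp4
CMSD-Static: ot=v,d=4000,br=14000
CMSD-Dynamic: "ExampleCDN-edge42";etp=96;rtt=8
```

Here the static header describes the media object (object type, duration, bitrate), while the dynamic header describes the connection currently serving it (estimated throughput, round-trip time).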
P
I am not going to go into all of the keys, but I did put them up on the screen for you. These are the attributes of the object and the origin's response. Next slide — same story: these are the key attributes that are about the specific response, and all of these are added together, like Proxy-Status and Cache-Status, into an agglutinative header.
P
All right, moving on to the Common Access Token. This is very similar to the URI Signing token work out of the CDNI working group. This token is focused on streaming-media uses, very much like some of the work the CDNI group was doing. The goal here, though, is to have one token that covers all of the existing industry use cases — adoption is key.
P
Okay, again, I'm not going to go through all of these, but these are the claims that it has. We start off with the core claims that you would expect in any CWT — yes, the token is based on the CWT format, and it usually goes in the URI. Then we have a bunch of general claims that spell out exactly what attributes of the request the token is valid for, so that you can limit it to specific sorts of URIs.
P
Composition claims — this is somewhat new to CWTs in general and was created for the CAT. We have two kinds there: the and, or, and nor Boolean-logic claims do exactly what they say on the tin,
P
and they will have lists of claims inside them so that you can do the logic on that. We also have an encrypted claim, because sometimes you don't want certain kinds of information stored easily readable in the URI, and other information needs to be secret from the holder of the URI. So we have an encrypted claim as well; the critical claim is there to facilitate the encrypted claim.
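The and/or/nor logic over nested claim lists can be sketched in a few lines. This is a toy model only — claims here are plain callables and composition claims are single-key dicts, which is an assumption for illustration, not the encoding the draft defines:

```python
def evaluate(claim, request):
    """Recursively evaluate a CAT-style composition claim tree.

    A claim is either a leaf predicate (a callable taking the request)
    or a single-key dict {"and"|"or"|"nor": [child claims]}.
    """
    if callable(claim):
        return claim(request)
    (op, children), = claim.items()
    results = [evaluate(c, request) for c in children]
    if op == "and":
        return all(results)
    if op == "or":
        return any(results)
    if op == "nor":
        return not any(results)
    raise ValueError(f"unknown composition operator: {op}")
```

A verifier walks the tree once per request, so deeply nested claims cost more to check — which is exactly why the implementation limits discussed next exist.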
P
Since we have composition claims, we have to have implementation limits. Right now we've got that set at about four levels of nesting and 50 list elements. We have some more maximums that are — well, technically these are defined as implementation minimums: you have to support at least this much. But we know that, operationally, that means a maximum. Next slide. Action claims — these are interesting in that they are claims that do not limit the validity of the token, but instead affect the response of the verifier.
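A verifier enforcing the nesting and list-size limits mentioned above might do something like the following sketch; the exact semantics of the limits (what counts as a level, what counts as a list) are an assumption here:

```python
def check_limits(claim, max_depth=4, max_list=50, _depth=1):
    """Reject composition claims that exceed the illustrative limits:
    at most `max_depth` levels of nesting, `max_list` elements per list."""
    if not isinstance(claim, dict):
        return True  # leaf claim, always fine
    for children in claim.values():
        if len(children) > max_list:
            return False
        # at the maximum depth, children may no longer be composition claims
        if _depth >= max_depth and any(isinstance(c, dict) for c in children):
            return False
        for child in children:
            if not check_limits(child, max_depth, max_list, _depth + 1):
                return False
    return True
```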
P
The renewal claim is very similar to the renewal that you'd see in the CDNI working group token, and the token can be renewed as either a cookie or a header. This lets you specify the parameters on the cookie, in case you want your cookie to be something like SameSite=Strict, which is great. The control response is still being worked on, but the idea is that we want the issuer of the token to be able to set a location, or a particular status code, to be returned when the token doesn't verify — so it puts them in the driver's seat. Next slide.
P
We have the push, pull, and export model. Push covers tracing — specifically for streaming media — all the way from the camera lens to the origin, keeping track of the entities that process the media along the way, the latency each introduces, and that sort of thing. Export is about taking it off of the line from the camera to the viewer and sending the logs or traces to a third entity; we are looking heavily at the OpenTelemetry specification for that, because it does that very well, and we want to use existing standards wherever possible. The pull process is in-band tracing for HTTP requests and responses, and we're actually expecting that
P
a number of these will use agglutinative headers. These are headers like Via, Cache-Status, and Proxy-Status, and the idea is that each entity that processes one of these requests will add its name — whatever it decides its name is — along with the parameters that are appropriate to that particular header.
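The append-as-you-go behavior of such list-valued headers can be shown with a minimal sketch; the hop names are invented for the example:

```python
def append_hop(headers, field, entry):
    """Append this hop's entry to a list-valued (agglutinative) response
    header such as Cache-Status or Proxy-Status, keeping what earlier
    hops already wrote."""
    existing = headers.get(field)
    headers[field] = f"{existing}, {entry}" if existing else entry
    return headers

# each entity on the path appends itself in order:
h = {}
append_hop(h, "Cache-Status", "OriginShield; hit")
append_hop(h, "Cache-Status", "ExampleCDN; fwd=miss; stored")
```

Reading the final header left to right then recovers the chain of entities that handled the response, which is what makes cross-hop tracing possible.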
P
Because we're getting a fairly reasonable number of agglutinative headers, one of the things we're looking at is encouraging entities to use the same name for themselves, whatever it is, when they put values in more than one of these headers at a time. That'll help when the data gets exported or needs to be cross-compared, for whatever reason. Next slide. I think I may have a few minutes left for questions.
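As a toy illustration of the push-model tracing described earlier — recording each processing entity and the latency it introduces from camera to origin — one might wrap each stage like this; a real deployment would emit OpenTelemetry spans instead, and the stage names here are invented:

```python
import time

def traced(stage, trace_log):
    """Wrap one processing entity (encoder, packager, origin, ...) so the
    trace records who handled the media and the latency they introduced."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            trace_log.append((stage, time.monotonic() - start))
            return result
        return inner
    return wrap
```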
A
You want to voice what you said at the mic, possibly?
D
Sure. For the Common Access Tokens — hopefully you're not allowing those to be locked to IP addresses. That's one thing: for those sorts of access tokens, locking them to IP addresses causes no end of operational pain.
P
You're right — and there is a lot of text in there saying that IP addresses are almost useless for identifying people or systems, and you should never use them. And yet, the goal is adoption, and it is effectively an industry requirement.
P
The encrypted claims are a lot more difficult to use, and I'm frankly kind of hoping that no one ever needs to use them, but they do exist, and there is a use case for them — as little as I like it.
D
Similar things happen with things like privacy proxies. So it may be worth looking at — and this might even be something for MOPS to look at — whether there is some way to be much more clear to industry, saying: no, stop doing this; this is a problem. Because I've also seen the case where some industry groups will say, oh well, our standard is that you need to lock tokens to IP addresses. I think having some standards groups come out and say "no, don't do that" — something you could also point people at — might be better than just putting it in blog posts telling them not to do it.
P
Yeah, yeah — and I agree. The goal with this particular token is not to enforce anyone's particular business model, even if that business model has very significant operational problems.
P
We've tried probably three or four times to remove the ability to lock tokens to IP addresses from the drafts, and there are just too many people who have said "we won't implement this," or "we will implement some variation of this regardless of what the specification says." The CTA is very much like the IETF in that we don't have a police force.
A
So — not having looked very closely at what the industry is doing in this area — has there been an attempt to sort of reveal and document the reasons why industry players want to use IP addresses in their claims, and what better alternatives there are for achieving the same goals?
P
So, there are a couple of really simple use cases that IP addresses just make really easy. A very standard one that keeps coming up again and again is: I just want to let anyone in who comes in on a 10-dot address, right? I trust those addresses; I don't care. And I get it — that's how their things are set up — while I don't think that's necessarily the most robust way to do things.
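A minimal sketch of that kind of IP-bound check — assuming, for illustration, that the claim carries a list of trusted networks like the 10.0.0.0/8 case above — also makes the fragility visible: a dual-stack client arriving over IPv6 simply stops matching an IPv4 claim.

```python
import ipaddress

def inside_claimed_networks(client_ip, claimed_networks):
    """Illustrative check of an IP-bound token claim: is the client
    inside one of the claimed ranges?  Mismatched address families
    (e.g. an IPv6 client against an IPv4 claim) never match."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net)
               for net in claimed_networks
               if ipaddress.ip_network(net).version == addr.version)
```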
A
Yeah, I was more suggesting the "okay, this is a bad idea, and here are some alternatives." I think it's the second part that — I don't know what the status of that is, not just in your draft, but also in whatever documentation the industry has overall in this area. I just don't know if it's something people have spent a lot of time on, given that, hey, we've got IP addresses, we could just use those in our claims. Well —
P
I mean, operationally, as was pointed out, you mostly can't: dual-stack devices, devices changing their IP addresses, devices that have IPv6 addresses — all of that is a significant operational concern. Entities that deploy this and expect IP-address locking to work on the bare internet are going to disappoint their customers.
P
And if you want more information about CDNI or about the Common Access Token, there will be a slightly more in-depth presentation in the CDNI working group.
J
I'd like to share with all of you one initiative that we are running in Telefónica, with the objective of trying to make the delivery of content network-aware. We are doing an exercise of integrating ALTO in such a way that we can expose simple network information that can feed the CDN logic, and with that improve the performance. I present on behalf of my colleagues, Patricia and Francisco.
J
As you can see, this is joint work between the transport group and the video group, and I will provide the details. So, next slide, please. Thank you. A little bit of background about Telefónica, because it is important when thinking about the next steps: Telefónica is a group of different companies — different operational businesses — present in 15 countries, mostly Latin America and Europe, plus a Tier 1 international carrier.
J
So we provide multiple services, as you can imagine — residential, mobile, and so on — but we also have content that we distribute. We have content produced by ourselves, under several brands like Movistar Plus+ and Movistar Play, but we also deliver third-party content. We do this through an in-house CDN developed by the video group; I will refer to it as the Telefónica CDN, or TCDN, and this TCDN is deployed worldwide.
J
The delivery mechanism is based on HTTP adaptive streaming, and it is used for serving either internal or external customers — we deliver the content over the top, so customers external to Telefónica can receive this content as well. Talking a little bit about the CDN: the logic the CDN uses for deciding which streamer is most convenient for a given customer — essentially the request-
J
routing logic — takes a number of inputs into account: the streamer health status, the load level, the availability of the content, cache-hit-ratio maximization, and so on. But also — and importantly, this is the topic of interest for this presentation — the network topology. By now, this network topology is fed into the CDN logic manually. So somehow there is a group of people taking into consideration
J
where the prefixes of the different customers are located among the existing points of presence of the Telefónica networks, and this is fed, as I told you, manually. And this is the kind of problem that we would like to solve — it was the initial rationale for this activity, in fact: how we could maintain these maps automatically, having them provided in an automatic way.
J
The reason for going in this direction is that the different prefixes of the network can change — there are considerations like accesses migrating, for instance, from DSL access to fiber — and this implies that sometimes IP addresses are reused between different points of presence. Also, there could be events in the network that mean the previous selection, based on this manual feeding, is no longer optimal, and so on. So this is —
J
So the rationale for making the TCDN transport-network-aware is, again, to optimize the delivery. At the end, what we want is a more efficient way of delivering the traffic to the customer, wherever that customer is. With this static view — with this manual feeding of the topologies — we have situations that cannot be controlled: network outages, network congestion, the migration of prefixes, and so on.
J
So this requires a periodic update of these network maps, and what we pursue, essentially, is to have this information timely and in an automatic way — and this is why we leverage ALTO for doing that. Next slide, please.
J
So, what does the network look like? Because it's important to understand what is, let's say, the playground for all this work — the Telefónica network. Well, you saw before that we have many networks in many countries, so there is not one common pattern, but this reflects, more or less, the convention we follow when designing the Telefónica networks.
J
So we divide the network into a number of hierarchical levels, and you can see there is a kind of canonical topology that we could have in a mid-sized country. We have hierarchical level 1, which represents the interconnection level for connecting to other transit providers — either the Tier 1 from Telefónica, or other transit providers, or OTT players, or whoever.
J
Then we have hierarchical level 2, a meshed part of the network that essentially takes the traffic coming from the boundaries — from the peering and the transit — and distributes it to the different regions. Then the topology covering the different regions is hierarchical level 3, which is essentially a kind of dual-homed setup in some regions — depending on the size of the region,
J
for sure it could be more than two. Then we have the local level: hierarchical level 4, which represents basically the central offices, and then hierarchical level 5, or even 6, which could represent the more remote areas or the cell-site routers connecting the different customers. So the customers are, essentially, as of today, connected at the local level — at hierarchical level 4, 5, or even 6. As you can imagine, not all the countries have the same structure.
J
Let's move to the next slide. Yes — we saw before what a canonical architecture would be; this is an excerpt representing the deployment in the operation in Spain, and here we just show a few nodes from hierarchical levels 1, 2, and 3. Hierarchical level 1 is represented by the circles, level 2 by the squares, and level 3 by the triangles. So these represent the upper part.
J
From this part down, we have the central offices, the cell-site routers, the remote areas, and so on. The idea is: we have many streamers deployed in the network, so how can we improve — how can we maintain — the optimal delivery of the video even as we suffer
J
these changes in the network — again, movement of the prefixes across the different central offices, and changes in the network itself, because there could be link outages or node outages — so, trying to find a mechanism to optimize this delivery in real time, let's say. Next slide, please!
J
What ALTO gives us, through what are called PIDs, is a way to group the prefixes of the customers in the central offices — the ranges, the /24s, /20s, or whatever. What we've done is create PIDs for customers, identifying the customers' prefixes, and then we also generate PIDs associated with the streamers, in such a way that, through these maps, we can have a timely view of the cost and the reachability between the different PIDs in the network — essentially, how the streamers can be reached, or how the streamers can reach the different customers, and the cost associated with that. We will detail it a little bit later, but this cost is simple: right now we are playing with the number of hops.
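Mapping a client address into a PID is essentially a longest-prefix match. A minimal sketch, with invented PID names and prefixes:

```python
import ipaddress

def pid_for(client_ip, network_map):
    """Longest-prefix match of a client address into an ALTO-style PID.
    `network_map` maps PID name -> list of CIDR prefixes (illustrative)."""
    addr = ipaddress.ip_address(client_ip)
    best_pid, best_len = None, -1
    for pid, prefixes in network_map.items():
        for prefix in prefixes:
            net = ipaddress.ip_network(prefix)
            if addr in net and net.prefixlen > best_len:
                best_pid, best_len = pid, net.prefixlen
    return best_pid
```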
J
It could be richer information at some point in time — we are also working in that direction. So what we do with these maps is determine, at every moment, the most convenient streamer to deliver the content,
J
according to the request that is coming from the customer. You can see a very simple example that we used in the lab, in the first steps of this work, and you can see the basic idea reflected: the PIDs of the streamers can be correlated with the PIDs of the customers, and with that we perform the best selection. Let's move to the next slide, please. So we represent the network map and the cost map.
J
In the network map, as I told you before, it's essentially the grouping of the different prefixes into the PIDs — this is related to the example before, so you can check it later offline. And then we have the cost map, which shows the reachability between the different PIDs.
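Put together, a much-simplified version of these two maps and the selection step might look like this; the PID names, prefixes, and costs are invented for the example:

```python
# An ALTO-style view, simplified: the network map groups prefixes into
# PIDs, the cost map gives the hop count between PIDs.
network_map = {
    "PID-streamer-A": ["192.0.2.0/28"],
    "PID-streamer-B": ["198.51.100.0/28"],
    "PID-co-1": ["10.10.0.0/16"],
    "PID-co-2": ["10.20.0.0/16"],
}

cost_map = {
    "PID-streamer-A": {"PID-co-1": 2, "PID-co-2": 4},
    "PID-streamer-B": {"PID-co-1": 5, "PID-co-2": 1},
}

def best_streamer(client_pid, streamers=("PID-streamer-A", "PID-streamer-B")):
    """Request-routing step: pick the lowest-cost streamer for the client's PID."""
    return min(streamers, key=lambda s: cost_map[s][client_pid])
```

When the cost map is refreshed automatically from the live topology, the selection follows the network without any manual re-feeding of the maps.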
J
So, with BGP we can advertise the prefixes associated with the end users in the central offices as they are being configured. At the end, the prefixes are associated with the BNG, which is the element handling the PPP session. The BNG is in these central offices, and BGP is integrated with the router, which gives the possibility of advertising these prefixes.
J
That is, on the one hand, the advertisement of the prefixes associated with the customers and the streamers. Then, on the other hand, we have the topology itself; for that we use BGP-LS, and with BGP-LS we can have the view of the nodes, and of the links interconnecting the different nodes, inside the network. So the final result is that we can build this cost map, which essentially represents, let's say, the reachability between streamers and users.
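Deriving a hop-count cost from the BGP-LS node/link view reduces to a shortest-path search. A minimal sketch over an invented link list:

```python
from collections import deque

def hop_count(links, src, dst):
    """Hop count between two nodes over an undirected link list, as could
    be derived from a BGP-LS topology feed (node names are invented)."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable, e.g. after an outage
```

Swapping the uniform hop cost for real IGP link metrics, as described later for the pre-production tests, only changes the edge weights, not the overall approach.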
J
You can take a look later on. We started essentially by testing the feasibility of this mechanism for having an up-to-date view of the network — introducing outages, introducing new prefixes, and so on — in such a way that we verified the feasibility of the exercise. Then we moved to a pre-production lab; I will provide more details in the next slide.
J
We moved to a pre-production lab where we started to play with real configurations in the network — real situations that we could find in the network, but in a more controlled environment: a reduced number of devices, but with the configuration complexity that we would face once we move to production. And we are now at the point of moving into production; we are working on the deployment of ALTO in the real network.
J
We will start with Telefónica Spain, moving later on to other companies in the group. The point now is that we are trying to fit ALTO into the specific rules, processes, and procedures that a production level implies, in terms of securitization and hardening and all that stuff — dealing with the different IP addresses for connectivity between the different elements, and so on. I will now provide more details on all these engineering steps. So next slide, please. Thank you.
J
Here I provide a little bit of detail about the engineering path that we followed and the issues we faced in this process. In the technology lab test, as I said, we started with the ALTO module that is present in OpenDaylight. This module was initially working with LLDP, so our first task was to integrate it with the BGP module of OpenDaylight. In this technology lab test we started playing with a mono-vendor router scenario.
J
These routers were virtualized, so we were running them on our own servers and generating the issues I mentioned before: outages of the nodes, outages of the links, creating new prefixes, and so on. We played with a very simplistic network, with just one single IGP on top — OSPF,
J
in that case — a single autonomous system, and simple metrics based on hop count. Essentially, the metric that we associated with the IGP was just one, in such a way that we could determine the hop count; that was what you saw before in the cost map I presented. Also in this setup, some of the routers themselves were acting as route reflectors — so a very simplistic setup, with ALTO as well being virtualized on the server that I mentioned.
J
From this, we moved to the pre-production network test. Now things started to change: we migrated to XAV as an open-source BGP speaker for interacting with the real route reflectors in this pre-production environment. In this migration exercise we found some issues with BGP-LS; we raised a number of tickets — the details are there if you want to check. These issues were basically related to parsing the BGP-LS information in order to be able to build the cost map —
J
okay, the relationship between the PIDs that I mentioned before. This pre-production network environment is a multi-vendor router scenario, and the router vendors present in both the pre-production and production networks are different from the ones in the lab test; in the end, we have tested three different vendors in this setup. This multi-vendor scenario is the real one, because the production network is supported by different vendors. And these routers in the pre-production environment are physical routers.
J
We also deployed a dedicated ALTO server there — no longer virtualized in the same environment, just a dedicated server — and then we started with this network, which generated a number of issues, in the sense that it is a complex MPLS network: we don't have IP continuity end-to-end across the different hierarchical levels that you saw at the beginning.
J
Also, we started playing with more sophisticated metrics; sophisticated in the sense that we're not just assigning a value of one for the IGP metric, but playing with the real IGP metrics as we could find them in the production network. So we have richer, let's say, cost maps, richer values in the cost map: not only the hops, but actually playing with the IGP metrics there.
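The cost maps being described follow the ALTO shape from RFC 7285: a JSON map from source PID to destination PID to a numeric cost, where the cost can carry real IGP metrics rather than a flat value of one per hop. A minimal sketch, with invented PID names and metric values, not Telefónica's actual data:

```python
# Illustrative ALTO cost-map response (RFC 7285 shape); the PIDs and the
# IGP-derived cost values are invented for this sketch.
cost_map_response = {
    "meta": {
        "cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}
    },
    "cost-map": {
        # source PID -> destination PID -> cost; here the values stand in
        # for summed IGP metrics instead of a hop count of 1 per link
        "pid-streamer-1": {"pid-users-a": 20, "pid-users-b": 55},
        "pid-streamer-2": {"pid-users-a": 70, "pid-users-b": 15},
    },
}

def routing_cost(src, dst):
    """Look up the routing cost between two PIDs in the cost map."""
    return cost_map_response["cost-map"][src][dst]
```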
J
Finally, another point that was different from the lab test was the connection to different route reflectors: I mean, a couple of route reflectors for BGP information, and another couple of route reflectors for BGP-LS. This was because it somehow mimics what we have in the production network. And finally, as I mentioned before, we are in the phase of integrating all of this in the production network.
J
So we are in the very hard phase of the administrative issues: adapting to the production procedures and the hardening environment, as I told you before, for the hardware for sure, but also for the software, because almost all the pieces are coming from open source, so we need to follow the internal rules of the engineering teams in the operational business units.
J
One situation that we have now in the real environment is that we don't have a broad activation of BGP-LS there, so we will start with some small region of the network that has already activated BGP-LS, and from that we will start growing. I mean, this is even a good thing for us, in such a way that with that we will be able to test.
J
This will also be a learning process: how to deal with that. We don't expect any interference with that, because in the end we are connecting to the route reflectors, so we are doing nothing in the network; we are simply taking the information from the route reflectors. But okay, there is always uncertainty; something could happen, right?
J
Next slide, please. So, just for finishing, as next steps we will start with the integration in the Telefónica Spain network, for later moving into the other countries of Telefónica where we have the TCDN deployed. Probably the next step will be Brazil; this is something that we need to decide. And again, Brazil is a very, very large challenge, in the sense that it is a very big network, and also the topology is not as canonical as we saw before.
J
So it's a very good challenge that we will start to face soon, hopefully. Apart from that, we also need to complete the full development of the TCDN logic. The idea is that the TCDN logic consumes the information that is provided by ALTO, so we also need to work on that line to, let's say, refine the way in which we are interacting today with ALTO, so iterating better in the TCDN logic in the end.
J
But by now it's just a kind of proof of concept in this sense. And yeah, importantly, we expect to characterize ALTO performance in a real environment in terms of scalability and processing. We may require many instances of ALTO, and this is also something that we will see once we start playing with ALTO in the real environment, because we will be handling thousands of routers; it's a very huge network in the end. One other line of work:
J
So by now we are working with the topology in order to do this, enhance delivery and so on, trying to guarantee the quality of experience. But we already foresee the next steps in the direction of enriching the cost map by introducing performance metrics, for instance: understanding what the congestion levels are, maybe understanding what latency we have between the different PIDs, so enriching the decision.
J
In the end, we will start with the hop counts, or IGP metrics, because that is the basic stuff, but we do foresee more advanced metrics, etc., with which we could enrich even the decision in the CDN. Also, we are even considering including access network information, so as to have some idea of what the technology associated with a specific prefix on the customer side could be, so that we could have some insight into whether the customer is accessing by fiber or DSL or a mobile technology or whatever.
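One way the TCDN logic could consume such costs, starting from hop counts or IGP metrics as described, is to pick the streamer PID with the lowest cost toward a user's PID. A hedged sketch under that assumption, with hypothetical names rather than the actual TCDN implementation:

```python
def best_streamer(cost_map, user_pid):
    """Pick the streamer PID with the lowest cost toward user_pid.

    cost_map: {streamer_pid: {user_pid: cost}}, as exposed by ALTO.
    """
    candidates = [src for src in cost_map if user_pid in cost_map[src]]
    return min(candidates, key=lambda src: cost_map[src][user_pid])

# Invented example costs (e.g. sums of IGP metrics between PIDs).
costs = {
    "pid-streamer-1": {"pid-users-a": 20, "pid-users-b": 55},
    "pid-streamer-2": {"pid-users-a": 70, "pid-users-b": 15},
}
```

Richer metrics (latency, congestion) would simply change the cost values being compared, not the selection logic.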
J
So, next slide, this is the very final one. As a conclusion, the objective of all of this is to make the CDN network-aware: to have the possibility of exposing network information to applications that could leverage that network information to perform better, and TCDN is a clear case in this respect. And yeah, apart from this integration work that we are doing, we are foreseeing the next step, which will be to enrich the decisions by incorporating these metrics.
J
That adds additional complexity that we hope to report on at some point in time. And also here, for the media operational community as well, we would like to ask for feedback from MOPS. It will be very welcome to understand if this is useful, if this is a good direction to consider, to get your views in this respect. So you can approach us directly.
E
Thank you very much. So, Rajiv, you've been very patient; would you like to ask your question?
F
Hi, so this is actually going back to the slide where you were talking about calculating the cost maps.
F
Yes, so I see what you've kind of done here is, you know, you basically have a couple of TCDN nodes, with the PIDs ending in five and six, and a number of customers, right, which varies a lot. Now, I see that you...
F
So I see you're calculating the cost not only of the routes from the customer endpoint to the TCDN node; you're also calculating the cost from customer to customers, to the customer umid03, for example.
J
Yeah, well, this would essentially be, let's say, the standard output of ALTO, so this is the full cost map. The idea is that, for us, at least for the integration with the CDN, it's not interesting to have the reachability between customers, so this would be filtered: the next step will be, on top of this cost map, to filter and only retain the reachability between the streamers and the different users. So for now we will not use the user-to-user information.
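The filtering step just described can be sketched as keeping only the rows of the full PID-to-PID cost map whose source is a streamer PID, dropping the user-to-user entries. All names here are hypothetical, not the actual TCDN code:

```python
def filter_streamer_rows(cost_map, streamer_pids):
    """Retain only streamer-sourced rows; drop user-to-user reachability."""
    return {src: dict(dsts) for src, dsts in cost_map.items()
            if src in streamer_pids}

# Full cost map as ALTO would return it, including a user-to-user row.
full_map = {
    "pid-streamer-1": {"pid-users-a": 20, "pid-users-b": 55},
    "pid-users-a": {"pid-streamer-1": 20, "pid-users-b": 35},
}
filtered = filter_streamer_rows(full_map, {"pid-streamer-1"})
```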
F
Here, in the slide exactly previous to this, can you move back one slide, where you're plotting the PIDs: I'm assuming you are looking at PIDs not at the level of individual users, but at the level of individual user segments, so each /21 rather than each user. I'm just going to take that assumption as given today.
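The granularity being asked about corresponds to the ALTO network map, where each PID aggregates a set of customer prefixes (for example /21s) rather than individual users. A minimal sketch with invented PIDs and documentation prefixes:

```python
import ipaddress

# Hypothetical ALTO network map: each PID groups customer prefixes,
# so costs apply per user segment, not per individual user.
network_map = {
    "pid-users-a": ["192.0.2.0/25", "198.51.100.0/25"],
    "pid-users-b": ["203.0.113.0/25"],
}

def pid_for_address(addr):
    """Return the PID whose prefixes contain addr, or None if unmapped."""
    ip = ipaddress.ip_address(addr)
    for pid, prefixes in network_map.items():
        if any(ip in ipaddress.ip_network(p) for p in prefixes):
            return pid
    return None
```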
F
Okay, I will be back in a moment, hopefully with better audio. Okay.
A
I'm sorry, so we can take Eric's question now.
J
Well, by now this is, hello? Well, I think it's an on-demand way of retrieving the map. So, I mean, the CDN will ask. No, I mean, we have not specified a frequency for these updates; probably, I mean, this is not yet considered at all, but we don't expect a continuous update of the map. Take into account that now, with the manual procedure, we update the maps every six months. No, no.
J
Don't care so much; something like that, maybe weekly, I mean. Probably the more frequent changes, or the changes that could be more representative for real-time feedback, would be network outages or so, which would impact some part of the network. But even in that case, you could retrieve an update in case of some outage. So these would be discrete events over time, not every minute, not every hour.
E
Thanks. Rajiv, would you like to try again?
R
The first question is: are there, you know, similar efforts in place right now, even if they are more tied to specific cloud vendors and stuff like that, to make this kind of internal telco network topology consumable as a service for people who are looking at doing some sort of service delivery using, you know, edge nodes in 5G and MEC environments?
R
So do you have plans to eventually make this, or at least a subset of this information, available outside of your integration with TCDN, and, you know, make it consumable by other potential people who are hosting on the MEC?
J
Yes, as we do foresee it, ALTO in general is to play the role of a network exposure function for whatever customer, which could be an internal customer or an external customer, with these exposure capabilities: to expose topology, but also other additional capabilities that we are proposing in the ALTO working group. So the general answer to your question will be yes, we are going in that direction, yeah.
N
Hi, this is Sanjay Mishra. I think this is a very good presentation and a very useful one. One question I have is: so, I get the association of PID and the IP subnet. Now, how important is that relationship with the CDN streamers? Because I assume that content is replicated on all the CDN streamers, right, so that information pretty much remains static?
J
Yes, this logic, for instance, would perform the filtering that I commented on before in the earlier question. This is simply how we expose the information, but yeah, the logic on the CDN side will need to handle a lot more things: filtering, and then taking into account where the content is, and so on. So that development is also in progress.
E
Anyone? Anyone? If not, I think we will declare victory for today. Thank you all very much for coming out, and thank you again, Chris, for taking such wonderful notes. And we will see you all on the mailing list. Thank you.