From YouTube: IETF106-MOPS-20191121-1000
Description
MOPS meeting session at IETF106
2019/11/21 1000
https://datatracker.ietf.org/meeting/106/proceedings/
A
Right, good morning, everybody. Thanks for making it out to this chilly working group room for the inaugural MOPS working group session. I think you're all familiar with this, I know it well, and you should note it well: all the usual conditions for participating in the IETF apply. A couple of bits of administrivia.
A
Assuming there are no bashes to the agenda: we have blue sheets that are about ready to circulate, are circulating, Glenn is about to circulate them. I understand that several people have a conflict with the TSVWG working group session that's on at the same time, so I expect there will be people coming in and going out for different agenda items in either working group, and we've noted that we should make sure not to collide with that working group going forward.
A
Thank you, Roni. So, okay, great, thank you very much. We can move on. Are there any bashes to the agenda? It's been posted for a week or so.
A
Seeing none, moving on. Yeah, so this is our inaugural working group meeting. We have met several times before, in multiple contexts and different guises, so I just wanted to remind people of what our charter says: there's a lot of soliciting of input here, and regular updates. So this is the general area of focus for this working group, and what we're going to talk about this morning has a lot to do with updates from various places. (I'm actually supposed to stand on this pink thing, by the way.)
A
One of the things that I want to do with this session, because it is a working group and not just a conference session, is start to work out our mode of operation. So I'm hoping that we will actually have some interactive discussion around some of the things that are presented, in terms of, you know, asking questions about work going on elsewhere as it's being presented. We have one discussion, an awareness-raising, with regards to an issue with streaming and QUIC.
A
We can have a discussion after that presentation about, you know, is there further follow-on, what do we think about it, and so on and so forth. So, generally speaking, all that is to say that I'm hoping this will be an interactive session, notwithstanding the fact that it is Thursday morning after a very long week. Thankfully, it's not Friday morning after a very long week. Okay, all right. So, unless there are any questions so far...
A
Yeah, this is just a further note on my earlier point about what we're here to do. Our outputs are meant to be some documents, the presentations that get posted in the proceedings for all to see, and also potentially some dispatch-like activities where we might refer people to other working groups. That last aspect, I think, will be particularly useful when we have people who, you know, join our efforts from other organizations, who may not know where to go in the IETF, or may not know where to take their work within the IETF.
B
So, the doc status: I think it went onto the milestones list when we first started the working group, or something, but I don't think it's been adopted by the working group, and I'm not sure whether that counts as consensus or not. I guess probably not. Anyway, we probably should discuss that and make a decision.
B
A few refinements, and a few people seem to indicate that it's not totally clear that the doc is super useful. But I'm not sure whether that means that we should refine it until it's more useful, or that the general concept is not necessarily worth writing down at all. So a little discussion on that would be lovely.
B
Yeah. I haven't actually touched it since I was supposed to at the last meeting, but there was some feedback from Olli, so I have a few to-dos to put in there. And I think the suggestion from last time, that this is not a taxonomy, it's operational considerations for media streaming, indicates changing the name as we perhaps adopt it and put it forward. And then there's, I would say, the kind of most... there is some...
B
A lot of the feedback I got is that there is one really good insight that it would be great to get in there somehow, which is the sort of mismatch between the capacity and the demand for, like, large-scale events, and, you know, presenting it in a way with good archival value. These are the slides that I had last time; I think I've shown them a few times.
B
So the best idea I have on how to capture this as, you know, an insight that applies across time is maybe the trendline, with the accumulated annualized growth rate or something. I'm not sure that's good. I would love suggestions on how to do it better, and just other suggestions on what should or should not be in the doc would be very welcome.
B
The example multiplication is to say: if you have this many millions of viewers, it means this much bandwidth. It doesn't really make the point, you know, in as punchy a way as the mismatch does, right. It just presents the raw number: oh, if you have half a million users, you need, you know, 600 gigabits, which may or may not be easy.
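To make that multiplication concrete, here is a minimal sketch; the per-viewer bitrate of 1.2 Mb/s is an assumed value, chosen only because it reproduces the "half a million users needs about 600 gigabits" figure quoted above.

```python
# Back-of-envelope aggregate bandwidth for a large-scale streaming event.
# The per-viewer bitrate is an assumption for illustration, not a measurement.

def aggregate_bandwidth_gbps(viewers: int, per_viewer_mbps: float) -> float:
    """Aggregate egress bandwidth needed if every viewer streams concurrently."""
    return viewers * per_viewer_mbps / 1000.0  # Mb/s -> Gb/s

# 500,000 concurrent viewers at ~1.2 Mb/s each gives the ~600 Gb/s cited above.
print(aggregate_bandwidth_gbps(500_000, 1.2))  # 600.0
```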
B
You know, in two years' time, or today. It's just the question of, like, there is a mapping, but making the point that there is a mismatch, and that the mismatch has been sort of growing and persistent and problematic, is the part where I feel like it's weak and could use improved presentation, in a way that still carries some punch, right, with recognition that, moving forward, networks change. That's, like, the part that I'm struggling to really communicate in the document. So this is the best idea.
E
Just thinking out loud: maybe it would help to kind of dive into where you think the constraints are. You know, these are big numbers, but the Internet has grown over time, and CDN capacities have grown over time. Is it, you know, is it the last mile? Probably not; I mean, at least where I live, everybody's getting a hundred megabit and more pretty soon.
F
Chiming in from over here: maybe even break it up into two separate charts, one showing sort of the bandwidth requirements for file-based or highly cacheable content, and then a second one that's sort of more live-oriented. Because then, particularly for the live one, I think an additional parameter to try and start capturing is latency as well. Because, as we evaluate, you know, other approaches: with highly cacheable content, latency isn't a big deal; if you're watching a movie and it takes five seconds, or 10 or 20 seconds, from initiation to your seeing it, that's fine.
F
Trying to measure that. And the other aspect I'd suggest: there's this other component here, which is that you get adaptive bit rates coming into the equation, and so I'm not sure how to capture it. But I think it's important to recognize that these rates are variable even during a session, and there's some ladder there. And so maybe introduce a standardized ladder for a lot of this content, and then do the charting based upon those, you know, because a lot of it typically has these.
B
That sounds good; that's a fascinating idea, thank you. Yeah, I do have some text in there about ABR that speaks a little bit to these points; I can't remember it off the top of my head, but yeah. I will go over it and see if that addresses any of your comments. Thank you.
A
Okay, so I'm hearing engagement, at least on this particular point, and it sounds like it would take a bit of document real estate to tease apart some of the possible approaches to answering it. So that sounds to me like there is interest in this document going forward. And to your point about whether or not it's officially a working group document: I don't know if I want to try a hum this early in the morning, but perhaps I should.
A
The document is indeed in kind of a strange state, because it was adopted as one of the milestones as we went through the chartering process, but on the other hand, we didn't have a formal working group discussion about whether or not we wanted to adopt it. So I guess at this point I will ask for a hum.
A
Are we in favor of adopting Jake's document and carrying on updates, to carry out this discussion over the next, you know, months, sorry, 18 months, whatever it takes, as he tries to tease apart and come up with good articulations of these things? So: in favor of adopting the document, please hum now. ... Against adopting the document as a working group document, please hum now. ... All right. Thank you all, and thank you, Jake.
G
Falk, Akamai; not speaking for my employer, but for myself. So, I confess I have not read the document, but the discussion caused me to think that, you know, CDN-based media delivery is a pretty mature area. There are industry-standard key performance metrics, and I think to the extent that we can, we should avoid reinventing those in a slightly different way, which will probably cause more confusion than it solves.
A
Point well taken, although I will also observe that maybe that's exactly the right answer, and I'm certainly all for not reinventing something that is well shaken out elsewhere, as long as, at the end of the day, we wind up with something that will communicate to the IETF community something about the scale and scope of, you know, the problems to be addressed.
G
I wasn't trying to imply otherwise. I just think, when we're doing something that already exists somewhere else, we should probably just pull it in by reference. ... I have a comment from the Jabber room that I wanted to pass on. Chris Lemmons says he's read the document; it's got room for improvement, but it looks like it's going in a direction that will be useful. Excellent, Chris. Oh, sorry, Scott Davis also said he approves, and that he is pretty much aligned with Chris. So, two more thumbs up. Excellent.
I
Okay, so the way I want to run the slides is just to give you a quick word or two on what the Streaming Video Alliance is and the different working groups within the Streaming Video Alliance, then kind of jump into the SVA Labs initiative, as seen from one of the working groups' implementation of it, and then some update on what we have done within the IETF. All right. So, two things here: one is that streaming...
I
...as well. And on the next slide, I just want to give you a warning: there are a lot of words here. I guess the objective was to see how many words can fit onto a slide, and I think that got done, but obviously you can't read anything there. So there are basically a lot of working groups within the Streaming Video Alliance, and just to call out maybe one or two or three of, you know, what the working groups are focusing on, let's take the example of the live streaming working group.
I
They focus on issues like quality, latency and scalability. The measurement group focuses on QoE. The networking and transport working group focuses on streaming at scale. The privacy and protection group focuses on piracy and content protection, theft, user privacy, etc. So there are various degrees of work that go on within each of these working groups, and they end up producing, as I said, various levels of documentation; best practices is one common thing.
I
Some specifications, and also guidelines based on the experience in the industry. And in addition now, true to the IETF mantra of running code, the Alliance also works on specifications and APIs. It is actually doing a lot of work now to turn those best practices, and the specifications specifically, into APIs, and they're doing that by establishing an open-source platform. The Alliance calls this the SVA Labs initiative, and I think the following slides give you a little bit more of an in-depth view of what that is.
I
Open caching is, I don't know if it's a fancy name, but it really is the same as the concept within CDNI, where you have an upstream CDN, which can be a commercial CDN or even a content provider, and you have a downstream CDN, which can be an ISP. So it really is using the CDNI RFCs, in terms of how an upstream CDN can delegate content down to the downstream CDN in cases where it decides that that's the best path to do so.
I
Open caching is really leveraging the CDNI RFCs, and the concept is very, very similar. And you have really three main pieces within the open caching architecture: you have the control functions that are within the upstream CDN, you have the corresponding control functions within the downstream CDN, and then you have the open caching nodes, which are nothing but a really distributed caching, or CDN, environment within the downstream ISP's network.
I
What you see down below there are the nodes that are distributed through the network footprint, so it distributes the content easily, and the APIs basically connect all the dotted lines. So what we're doing is identifying all the key functions, laying out the APIs, and then we want to open-source those and make them available to the industry for collaboration.
I
In addition to that, the working group has also set up a testbed, where we are testing features such as client stickiness. What we have seen is that some of the media players don't always stick to the last redirected source, so the behavior is not very consistent. We've had some conversations previously in the IETF on this, but I don't think we had enough data that we could bring back to pinpoint exactly where the problem is: whether it's the standards, or whether it's just the implementation of those standards, where the inconsistency is.
I
So what we're hoping to do is run through our testbed, generate enough data, learn from it, and see if there are any issues that point towards the standards. If so, then we can bring it back to the IETF. Okay. And just to kind of give an idea of what open caching has really done so far, using the CDNI RFCs:
I
There are really three major areas, I would say, where we can publish the APIs. One we would call the service provisioning APIs, and these would go in both directions: coming in from the upstream CDN to the downstream, and also in some cases from the downstream CDN functions down to the OCN footprint; and capacity usually going up from the downstream CDN to the upstream CDN, providing the footprint information, so that the localization can happen easily. And then the content management APIs.
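As an illustration of the footprint-and-capacity advertisement just described, here is a minimal sketch of the kind of JSON document a downstream CDN might send upstream. The endpoint path, identifier, and capacity field are invented for this example; they are not the SVA or CDNI API definitions, though the footprint types mirror the ones CDNI uses.

```python
# Hypothetical shape of a downstream CDN advertising its footprint and
# capacity to an upstream CDN. Field names and the endpoint are assumptions.
import json

footprint_advertisement = {
    "downstream-cdn": "example-isp-cache",  # invented identifier
    "footprints": [
        {"type": "ipv4cidr", "value": ["192.0.2.0/24", "198.51.100.0/24"]},
        {"type": "countrycode", "value": ["de"]},
    ],
    "capacity": {"egress-gbps": 40},  # advertised serving capacity (assumed)
}

# An upstream CDN might receive this via something like:
# POST /ocn/v1/footprint (hypothetical endpoint) with this body:
print(json.dumps(footprint_advertisement, indent=2))
```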
I
These focus on coming in from the upstream CDN to the downstream CDN, specifically for managing the content down to the OCN (open caching node) level. We also have APIs which we have not standardized yet, either because companies felt that some of this was just not quite there yet, or, in some cases, because these are very specific to features and functions.
I
Logging is one such API; basically, billing- and performance-related APIs that we have. And then the capacity insight, which would be something that the downstream CDN can advertise to the upstream CDN, so that they can figure out how to allocate content in the future. So these are not really the main API functions that we have, but the intent is to make these available through the open API too, in the hope that we will get a wider set of contributors.
I
Also, these would be really RESTful APIs, and we plan to support multiple languages. So, just to kind of bring all of that together: the intent is that, I think, we have done enough work in terms of vetting the technologies, and now that we have enough information to publish APIs, our hope is that this will increase adoption and also encourage collaboration from the industry. And as we publish these APIs, one of the core benefits we also look at is that we can bring the feedback back into the IETF, and we have done that.
A
Great, thank you, Sanjay. I guess one starting question I have: with regards to, well, you've outlined a lot of the work that's related to the CDNI work.
A
No, I realize that you're working with the CDNI work. Where I'm trying to get at is that this sort of provided an overview of how industry is using the existing standards, and it strikes me that, you know, that presumably would be interesting directly to the CDNI working group. Or maybe they're focused on the work that they're getting done, and don't have time for these industry updates. I mean, I'm trying to get a sense of whether or not some...
F
Hello. Glenn Deen, Comcast NBCUniversal, wearing a different hat right now. So, I was on the program committee for the recent SMPTE 2019 technical conference. It was a very interesting experience, and what I want to do is provide some feedback, some sharing of things I observed at that event that I think are relevant to the IETF, and hopefully it sparks some interest and work here, and some lessons.
F
So it's really professional creation of content, professional distribution of content, and that whole thing. One of the things they've been very engaged on for the last few years is a change from an old transport protocol called SDI, which is what all the equipment in a studio historically was connected with, to things that are IP-based, which are very much using IETF-based technologies and protocols. So one of the things I want to bring back is my observations of where that is right now. So, that's clearly still ongoing.
F
There are still a lot of studios out there that have got to replace a lot of equipment, that are still migrating to 2110 and going onto IP networks. But a couple of things that really came out of the conference, that people found interesting to talk about and really engaged on at the microphones, were the challenges of building IP networks for media.
F
There were several people that gave talks on network architectures that were designed entirely to allow for extremely high-bandwidth, low-latency connections between editing stations, you know, edit bays, cameras and recording devices, and storage bays and clouds. A few of the things, with my other hat on, made me cringe when I saw them. Because, you know, some people said: well, you don't want to build multi-tiered, hierarchical networks in your shop; you want to build one big, giant, flat network where everybody's on the same segment, so you can get high bandwidth.
F
It made me cringe. Rolling that into the IETF: there may be some guidance work here; we might want to draft some materials on best practices for how to build such networks, and I realize that crosses a little bit into the operations area, where we have expertise. So, for the future work of this group, that might be an area we want to sort of tap into and play with. The second one was something which kind of surprised me.
F
Even though I do live in this world, I didn't realize how much focus people really still had on time. There were numerous papers that talked about time, and this has been brought on because the old protocols did carry time information around with them, and all the devices were able to synchronize and be, you know, engaged in synchronizing their actions, but also when they blend content together in pieces; that all worked really well. Under IP...
F
Of course, if you're familiar with it, there are other mechanisms; they didn't lock into only PTP. There are other ones they are proposing and discussing, because they have a lot of scenarios where they're offline. One of the scenarios that was brought up was: if you're trying to source from GPS information, and your GPS signals all go down, or your antennas are covered in snow, you have a bit of a problem. How do you deal with that? That's some new work that they were trying to get other parties to take up.
H
It would be good also for us to know how the data, how the media, is transported, in order to figure out what we can do here and in other working groups to help with this timing. It was mostly that they were sending the information from different sources, and they want to combine them in the production area, and that's where the timing comes in; what they were getting from the camera was the GPS time. Was that it? Yes, and...
F
To add on that: one of the concerns that many people started raising was worries over security and integrity of time. Because, if you imagine, a lot of stuff is getting, you know, produced and streamed out live; if you start tinkering with the time, you can cause a lot of problems, make screens go dark, and cost people a lot of money. Or, if you want to be really nefarious, you know, make the time go bad and then start inserting your own material that's out of sync with it, and cause a lot of sort of social activism.
F
The one thing I got really excited about, when I was sitting in this conference, was the number of people that started mentioning IPv6. Historically, the, you know, professional media industry has not been a big supporter of v6, and when people started talking about v6, I was like: yes! So I just wanted to bring that back to the IETF: they're starting to listen to us, and they're getting on board, which is awesome.
G
That's great. So, Aaron Falk, Akamai. This is well-timed, see: there was another presentation, actually in a few different forums this week, on the ITU-T Network 2030 vision of, you know, the IP of the future. First, I want to say it looks like SMPTE has beaten them to it by looking at 2110, so that's great, you know, sort of getting a little further into the future. But on a more serious note, time actually appears on their list, and one of the things that sort of...
G
...left me unsatisfied about the discussion there was that it was more sort of a statement of "we need better precision", and what I'm hearing here is that we're actually running better precision in lots of different ways. And so the question that I have is: what exactly are the problems and challenges that are coming out of this? I think it would be actually really useful information for the IETF to understand, like, what are the limitations, where are things not working, so that we can actually talk about some potential work here.
G
An activity for this working group is to try to bring in some of these challenges from operational areas: like, where are the protocols hitting some walls in what they can do, where are there too many solutions, or, like, the stuff that Roni was just mentioning. And so I think if we could get some drafts on, like, you know, "hey, here's an industry situation where this is painful", that would be really helpful for the IETF, to say: oh, well, here's an area where maybe we should apply some engineering.
F
So it'd be really useful to spin them into this conversation, because if you're going to run in one of those data centers, your problem basically evaporates, because you have nanosecond-scale time synchronization across the entire data center. That's a good point, Dave; I had not considered that.
F
So, all right, one top point is: look at what these guys are doing, because we can put atomic clocks almost anywhere now, at reasonable cost. Well, and, you know, there isn't going to be one single way people do this. It's a very diverse set of environments. It ranges from people doing field production in the jungle, recording a movie, to people connected to live networks, to people in stadiums doing live streaming at low latency. So there are a lot of areas here to get really into different nets and different details.
G
On that side: so maybe a taxonomy document, to help inform the IETF on, like, what do these environments really look like. This is kind of like what TCPSAT did in the early days, which was to try to pull in some information on, like, you know, what's the variation in these environments, to understand what they mean from a networking point of view. Okay.
L
I'm Igor Lubashev of Akamai, and my goal today is basically to bring some information to the people who are not following the QUIC working group every moment, and to share some of the observations about what QUIC will bring to the people who operate media streaming, who are responsible for quality of service: what kinds of changes, some challenges, that you'll encounter.
L
But it's basically... that's what operators like to do on the network: to be able to see delay and loss, and hopefully see it before their customers call them and tell them about the problem that they're facing. When you are looking at TCP streams, people have traditionally been able to see the sequence numbers and, if a path is symmetric, the ACK numbers, and you could put together an estimate of the loss and an estimate of the delay. QUIC, as people are aware, is a protocol that encrypts its headers.
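A minimal sketch of the kind of passive TCP loss inference being described; this is my own simplified illustration (real tools also account for reordering, SACK, and ACK streams), counting repeated sequence numbers on one direction of a flow as likely retransmissions.

```python
# Simplified passive loss inference from cleartext TCP sequence numbers,
# as an on-path observer could do. Illustrative only: real analyzers must
# also distinguish reordering from retransmission.

def passive_loss_estimate(seq_numbers):
    """seq_numbers: starting sequence numbers of observed segments, in
    capture order. A segment at or below the highest sequence number already
    seen is counted as a likely retransmission, i.e. a loss indication."""
    highest = -1
    retransmissions = 0
    for seq in seq_numbers:
        if seq <= highest:
            retransmissions += 1  # seen this range before: likely retransmit
        else:
            highest = seq
    return retransmissions

print(passive_loss_estimate([0, 1000, 2000, 1000, 3000]))  # 1
```

With encrypted QUIC headers there is no equivalent field to watch, which is what motivates the explicit signals discussed next.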
L
Now, one of the alternatives is: well, if you cannot see anything in QUIC, there is no signal in it, let's just observe similar TCP flows. That actually works pretty well today, because QUIC on the internet represents single digits of traffic. I totally expect that to change once the QUIC working group publishes a spec and there are a number of implementations; every single browser has one, and every CDN will have an implementation.
M
Spencer Dawkins. Two things I wanted to mention. Thank you for bringing this work here, Igor. I could be wrong about this, but to the best of my knowledge, this is the first time a chartered ops working group has talked about this issue, and I think it's been really important. Speaking as the former area director of the spin bit...
M
Excellent. On this slide, when you say "just observe similar TCP flows" is not a good answer: how bad is it? And you don't have to answer, but I'm saying, I think that's the engineering question, right, that's really going to matter as this conversation goes. So, I love...
L
TCP and UDP get very different treatment by the network. It's actually not uncommon to have a network under a large UDP flood, and that, I would assume, is typically because it's much easier to compromise a user-space application and send spoofed UDP traffic. And typically, what networks do is police or rate-limit the UDP on their links. So you may immediately get very different performance from a UDP application and from a TCP application; so observing TCP and UDP may actually diverge quite a bit.
K
Bernard Aboba, Microsoft. I just wanted to make the comment that I think this may be assuming that QUIC is used in some of the same manners that TCP is, but what I'm seeing is somewhat different streaming architectures being used with QUIC, such as things like scalable video coding. And one of the problems that introduces is that it's no longer sufficient to just know aggregate loss. You really want to know the loss of each of the layers that are being transmitted, and that's something you can really only get from the end systems.
K
It's not really easy to observe that in transit. Even if you were to do everything in the clear, you'd have to have the right RTP markings, and observing all that is kind of complex. And I would observe that we've created a whole bunch of new metrics on the sender and receiver side that actually get you much more detailed loss data than you could possibly get through network observation. So, just something to think about: this stuff isn't necessarily going to be collected.
L
I mean, definitely, you cannot get everything you possibly want just by observing transport; I mean, that's the same thing in the TCP world and the UDP world. Absolutely true. All right. So, just a little bit about... well, media streaming right now is by far the biggest QUIC use case, so that's why it's important for this group, and streaming is actually very sensitive to changes in round-trip time and packet loss. Now...
L
So, what is available? QUIC basically removes all the implicit signals from the transport layer, but there was work on putting back an explicit signal for delay measurement: it's the spin bit. It is available; it's in QUIC version one. I mean, of course, it's subject to user agents actually supporting it and turning it on, but it is part of the standard, and it works in a pretty simple way. You just have a server that echoes back the bit as it was received in the last packet from the client, and the client spins the bit.
L
Basically, it flips it to the opposite value from the last packet that it received from the server, and an on-path observer can measure the round-trip time in each direction. So you can be on either of the directions, and you can measure the round-trip time of connections.
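The observer side of the spin bit described above can be sketched as follows; this is a toy simulation, not a capture tool, and the packet timing is an assumed example. Since the client flips the bit once per round trip, the time between observed transitions approximates the end-to-end RTT.

```python
# Toy on-path observer for the QUIC spin bit: RTT is estimated from the time
# between spin-bit transitions seen in one direction of the flow.

def rtt_from_spin_observations(observations):
    """observations: list of (timestamp_ms, spin_bit) for packets seen on the
    wire, in order. Returns RTT estimates in ms, one per full spin period."""
    estimates = []
    last_edge_time = None
    last_bit = None
    for t, bit in observations:
        if last_bit is not None and bit != last_bit:  # spin edge detected
            if last_edge_time is not None:
                estimates.append(t - last_edge_time)
            last_edge_time = t
        last_bit = bit
    return estimates

# Packets every 10 ms; the client flips the bit every ~50 ms (one RTT).
obs = [(t, (t // 50) % 2) for t in range(0, 300, 10)]
print(rtt_from_spin_observations(obs))  # [50, 50, 50, 50]
```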
L
It's using one of the bits that were previously reserved; now it's dedicated to the spin bit in the QUIC short header. So it's basically in the first byte, and there is a link to some more about it. Loss bits: so, the loss bits signal is not going to be in QUIC version one, mostly due to timing. But there are proposals for how to do the loss measurement, and there is a QUIC extension draft, or maybe there will be several, that are actively in the works and will be out very soon.
L
One of them, from us, has mechanisms for how to add loss reporting, and the loss reporting's goal is to be able to, first of all, observe that there is loss, to quantify how much, and to be able to determine whether it's upstream loss or downstream loss, which is very important when you try to actually troubleshoot the source of it. Like with the spin bit, it's good...
L
So actually, since we have probably a couple of minutes; I don't have a slide here, you could read the draft or look at the more extensive presentation in Montreal in TSVWG. But one of the operational principles for one of the proposals is essentially that you have a square wave. So you have a marking that alternates every N packets, and by observing that, you can observe the upstream loss. And then you have another bit that's effectively driven by the protocol's loss-detection machinery: if a loss of a packet is detected, it increments...
L
...a counter, and if it sends out a packet, it marks it if the counter is nonzero, and decrements the counter. So, basically, from that information you can determine end-to-end loss, which is what our protocol machinery determines. Now, knowing end-to-end loss and upstream loss, you can figure out everything else. So that's, basically, our proposal's mechanisms; there are others around. And just before we close, kind of for completeness: some people said, well, one way to determine loss in QUIC is to decrypt all those headers.
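The two loss bits just described can be sketched like this; it is my own toy rendering of the description above, not the draft's normative definition, and the half-period N=64 is an assumed value.

```python
# Toy model of the "loss bits" idea: a square bit alternating every N packets
# lets an observer count upstream loss, and a loss-event bit echoes losses
# the sender's own loss-detection machinery found (end-to-end loss).

N = 64  # square-wave half-period in packets (an assumption for illustration)

class LossBitsSender:
    def __init__(self):
        self.sent = 0        # packets sent so far, drives the square wave
        self.unreported = 0  # detected loss events not yet marked

    def on_loss_detected(self):
        self.unreported += 1  # protocol loss-detection machinery fired

    def next_packet_bits(self):
        q = (self.sent // N) % 2           # square ("Q") bit
        l = 1 if self.unreported else 0    # loss-event ("L") bit
        if l:
            self.unreported -= 1
        self.sent += 1
        return q, l

def upstream_loss(q_bits):
    """Observer side: any completed half-period of the square wave shorter
    than N packets means packets were lost upstream of the observer."""
    runs, current, run_len = [], q_bits[0], 0
    for q in q_bits:
        if q == current:
            run_len += 1
        else:
            runs.append(run_len)
            current, run_len = q, 1
    return sum(N - r for r in runs)  # packets missing from completed runs
```

Knowing end-to-end loss (from the L bit) and upstream loss (from the Q bit), downstream loss follows by subtraction, which is the troubleshooting property the talk highlights.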
L
Key distribution under lossy conditions is hard and problematic, and just not a good idea. So, basically, we have to have an explicit signal in the protocol to enable this kind of measurement. And so, what do we want from the operators, from people who actually care about this, from people who think that observing loss is important to do? Number one...
L
One thing is: if you have a discussion happening around you, just speak up and say it's important. Probably one of the main reasons, as Spencer said, that we're still here talking about experiments, as opposed to another proposal already in version one, is that we haven't had a huge operator presence in the QUIC working group from the beginning. And so, we're here now, and, of course, watch for the folks with a QUIC extension draft.
L
Explicit signaling means that the sender of a packet puts in some signal that is explicitly meant for the path, for a particular, specific purpose, as opposed to what we have with many other protocols. TCP is a great example, where there are protocol machineries that observers try to latch on to and infer information from.
M
...being able to ever change anything, because the point was that the very invariants don't vary. So I think that, like I say, you're definitely headed in the right direction, and I only wish we'd been smart enough to do that a couple of years ago. Thank you.
L
I think so. As far as invariants, that is a great question. QUIC is a pretty new thing; people have not had much experience with anything like it, especially at anything like this scale. So the signals we're talking about are also quite new; nobody really has any experience at scale. So all of these things are quite experimental, and so forth.
L
That's why it's experimental. As for their longevity: I can totally see that, if they take off as one of the important means for people to monitor their network, then they're going to be deemed very, very useful, and nobody will think of removing them from future versions of the protocol, if there are no problems with them.
N
Hi, Colin Perkins. So Gorry Fairhurst and I have a draft in the transport working group, which talks about the effects of transport header encryption on transport protocol evolution. Now, one of the things it talks about is the implications for network operations and management. It's in last call in TSVWG. It received a fair amount of feedback, primarily from the security-related people.
O
James Bruising, BBC. I'm not the most knowledgeable in QUIC, so if I get this wrong, somebody stop me. So one crazy idea is, instead of having QUIC do affirmative ACKs on packet ranges, switching it so we're doing negative ACKs, which is what a lot of ARQ-based protocols use, and then use one of those reserved bits for the signaling. This would mean the client opts into telling when there's loss in the network, but the details of how much needs to be retransmitted are still inside the encrypted payload.
L
I'm happy to talk to you offline to figure out the details. The general gist about overall QUIC is that it's trying to make sure that the only signals we put here are truly explicit signals for this particular purpose. So if we try to carry a little bit of protocol machinery in the clear, it may become difficult, but I'm happy to talk to you.
I
Hi, Sanjay Mishra from Verizon. So I think this is really good, and I was at the QUIC meeting where the discussion did happen on this, and I know a lot of operators didn't necessarily stand up at the mic to support it, but I think the support is implicitly there. And you know, I think there's a lot of exhaustion also on the operators' part, because they've stood up significantly (and Mark can vouch for that) just to get the spin bit into QUIC. So I think it's not,
I
it's not lack of interest in this; there's tremendous interest. And certainly, I think the point you made is well taken, and you know, I speak for Verizon, but I think other operators are also equally interested in seeing this work progress. So I think we'll keep an eye on that and provide comments and things as needed. Thank you. Thanks.
P
Go ahead. I'd like to take the comment a bit further. So I have heard, like, Bernie talking about a lot of loss information and other information metrics that will be collected in the endpoints, and also here, like, Spencer saying, yeah, how do you do this without involving endpoints, right? So would it be very unrealistic to think that in the future we have a more collaborative client and network in place, where basically the client is helping the network by providing some information about these kinds of metrics?
P
There's a coming initiative, like MASQUE, where you actually put in some explicit proxy for QUIC traffic and stuff like that, where you can have some kind of explicit channel, which is basically authenticated and agreed on, where you share some information. And I think that is another thing that we might want to look at: a more explicit, more collaborative way of solving the problem, instead of everyone doing their best by inference with nothing explicit. So that would be one way of looking at the problems. Yes, yeah.
L
I can totally see that people will come up with very interesting ways of solving problems; people are pretty creative around here. And if you find clients that are willing to talk to a cooperating middlebox, with some control-plane communication happening that's helping performance or anything else, it will happen, if it's useful and if clients find it useful.
Q
Hello there, I am Emile Stephan from Orange. So we asked for this slot because network operation and monitoring is based on the observation of the delay and the packet loss, and we have been, let's say, fighting in the QUIC working group to get the spin bit, so as to observe the delay. So with the spin bit, an on-path device can observe, let's say, a decomposition of the delay, on the right and on the left.
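The delay observation works because the spin signal flips once per round trip, so a passive on-path device can time the gaps between transitions. A rough sketch, assuming a single connection observed in one direction, with no handling of reordering or of the heuristics a real observer would need (names are illustrative):

```python
import time

class SpinBitObserver:
    """Minimal sketch of a passive on-path RTT observer using a spin bit."""

    def __init__(self):
        self.last_spin = None
        self.last_edge_time = None
        self.rtt_samples = []

    def on_packet(self, spin_bit, now=None):
        now = time.monotonic() if now is None else now
        if self.last_spin is not None and spin_bit != self.last_spin:
            # A spin-bit transition ("edge"): the spacing between consecutive
            # edges seen in one direction approximates the round-trip time.
            if self.last_edge_time is not None:
                self.rtt_samples.append(now - self.last_edge_time)
            self.last_edge_time = now
        self.last_spin = spin_bit
```

Placing such an observer at different points on the path is what allows the delay to be decomposed into an upstream and a downstream component relative to the observation point.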
R
Mauro Cociglio from Telecom Italia. As operators, we are very interested in this technology. We are the developers of the other, alternative methodology, but we support this kind of measurement because, in our opinion, measuring packet loss and delay in a passive way is very important to monitor our network. It's not only important to monitor the application, but it's also important to monitor the network, in particular to segment the network into many segments; each segment is the domain of an operator, so we want to distinguish the measurement per segment of the network, to understand.
M
Spencer Dawkins. I did not get up here to say this, but I can definitely confirm that there was fighting in the QUIC working group about the spin bit. But more relevant, probably, to this discussion is the PLUS BoF that we had in Berlin. That was a tussle of amazing proportions, to the point where, you know, I made the decision not to charter a working group there, which was intended to provide visibility to on-path observers.
M
So I mean, you know, we couldn't even charter the work. So the part where you're talking about involving the endpoints with explicit signaling, instead of distributing keys to theoretically on-path observers (until the first day we do multipath QUIC, and then they won't be on path anyway), but forget that: I'd say you have no idea how important what you're proposing is. Yes. Thank you, yeah.
Q
Yes, really. But on a larger scope, I think, with the spin bit, the QUIC working group is enabling a framework where the endpoints are exposing information whose integrity is protected end to end. This is something that the TSV working group should think about: encryption is not the whole part. You have integrity and encryption, and they should take care of both, not only encryption, right?
L
That's actually a very valid point: one of the things that you get from using QUIC headers and QUIC protocol mechanisms is that you get integrity protection of the signal, as far as the endpoints are concerned. So by having endpoints engage in this, it gives endpoints a choice to provide the signal.
L
We hope that most of them will. All of the drafts, for the spin bit and for the loss bits, include the provision (mostly it's about protocol ossification, but also for privacy) that a small fraction of the traffic should not have the signal exposed. It's not going to compromise the ability to monitor, because it's a very small portion of the traffic, but it will give additional protection against protocol ossification, and cover for some possible privacy implications for a minority of clients who may choose not to expose the signal.
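The "small fraction unexposed" provision amounts to a per-connection random choice. A sketch of the idea, where the fraction and the function names are illustrative assumptions, not values taken from any of the drafts:

```python
import random

GREASE_FRACTION = 0.05  # illustrative; the actual fraction is a policy choice

def connection_exposes_signal(rand=random.random):
    # Decided once per connection: a small random fraction of connections
    # withholds the measurement signal. This resists ossification (observers
    # cannot assume the signal is always present) and gives privacy-sensitive
    # clients cover, while leaving aggregate monitoring largely intact.
    return rand() >= GREASE_FRACTION
```

A client that never wants to expose the signal simply behaves like a connection that landed in the withheld fraction, and is indistinguishable from one.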
N
The second is that, if you have endpoints reporting on the behavior of an encrypted protocol, there are some interesting questions about the integrity of that: whether the network can trust the reports to be accurate. And thinking about whether there are ways of somehow validating that the reports coming from the endpoints match what's actually going on may be useful.
L
I know that the ConEx working group has done some work in that regard previously. I mean, there are not enough bits for that, but that's a very valid point. And, theoretically, you could have a malicious TCP sender who is sending TCP packets that look like gratuitous loss. So that's not an entirely new problem.
N
Fair enough, but if a TCP sender tries to manipulate the information about where it lost packets, in terms of the ACKs, it affects TCP behavior, right? If a QUIC receiver sending reports does the same thing, it doesn't affect QUIC performance. That's the difference. Yeah, okay.
A
So my takeaway is that our awareness has been raised on this topic, and that the work is currently being done in the QUIC working group. So the action item for people here is to keep an eye on the work being done there and contribute voices and opinions as appropriate. And then, I guess, if further, more cohesive input is needed from people interested in video, we may, at a future time, decide to try to articulate the video need with regard to this. Yes, Ronnie.
H
Just to be more precise: there's work now both in IPPM and the QUIC working group, and I assume that today, in the transport area meeting, they will try to discuss where to do future QUIC work. So, for that, if people are interested in this work, they should voice their opinion on where it should be done, mostly because of the overload of all these working groups, right.
A
We're quite sure it shouldn't be done in this working group, so that's one off the list, right? Okay, great! Thank you very much. So now we've done one of these MOPS working group meetings, and hopefully people have a better sense of some of the types of things that we can discuss, and you can be thinking of things that we should be bringing to the table when we next meet.