From YouTube: IETF104-SPEAKER-20190328-1230
Description
SPEAKER meeting session at IETF104
2019/03/28 1230
https://datatracker.ietf.org/meeting/104/proceedings/
A
Here is where we are, so it's a little bit of motherhood and apple pie across all layers, networking, application and business, and what the IETF does and could even do in that whole domain. From the amount of people in the room, I'm kind of staggered; that means that statement might be right or wrong.
A
What we were typically extracting was really incomplete information, and there was sometimes not even a CLI to get at all of that information, and it was specific to a particular box. As a consequence, people weren't really looking at all that data, because they couldn't look at all the data, and it was really hard to operationalize, and we made that overall thing more fuzzy by adding additional stuff on top. So you pile up not only per-device information but also per-flow information, or even per-packet information with IOAM. Then it's getting even harder to operationalize as well.
A
What telemetry is, if you go with the original definition, as you go to, say, Wikipedia, or even what NASA used: telemetry is an automated communications process by which measurements and other data are collected at remote or inaccessible points, like the network, and then transmitted over to some equipment where it's going to be digested. So we're going to use this in a relatively loose way, because telemetry is one thing for one guy and another thing for another guy.
B
Thank you, Frank; so we'll play a ping pong game between the two of us during this session. So, okay, we need telemetry. There are three enablers for telemetry in the network. First of all: don't poll, push, right. If you ever tried to retrieve a BGP table with SNMP, well, you understand why. Now, the second one is analytics, right: it must be analytics ready. I will cover that. And the third one: it must be data model driven, because that's required for automation. So let me show this to you, right: an interface.
B
The name is not even the same, right. Now, if I poll with SNMP, this is the ifIndex, the unique ID for an interface. Now, if I go into YANG, this is the if-index; we were clever enough to have the same semantics between SNMP and YANG, great, but that's a different field. Now let's go to NetFlow and IPFIX. This is an interface, but pay attention, because we've got different semantics.
B
This is the ingress interface, or the egress interface. Even if we map it with the ifIndex, you see it coming, right. Now let's say that I need to analyze the AAA information. Well, you take TACACS+, and that's something different; I see you're smiling already, but I mean, this is reality, right, in automation. Now, okay, I use RADIUS. Guess what? It's NAS-Port. And I'm speaking here about a very simple piece of information, which is an interface, right. We know what the interface is, but if we need to automate this, this is very difficult.
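The mediation problem the speaker describes can be put in a minimal sketch. This is purely illustrative: the field names below stand in for how SNMP, YANG, IPFIX and RADIUS each identify the "same" interface, and the normalizing function is the kind of glue code operators end up writing by hand.

```python
# Hypothetical sketch: the "same" interface seen through four protocols.
# Field names are illustrative stand-ins, not taken from a real implementation.

def normalize_interface(record: dict, source: str) -> dict:
    """Map protocol-specific interface identifiers onto one common key."""
    key_by_source = {
        "snmp":   "ifIndex",            # SNMP IF-MIB unique ID
        "yang":   "if-index",           # same value, different field name in YANG
        "ipfix":  "ingressInterface",   # IPFIX carries ingress/egress separately
        "radius": "NAS-Port",           # RADIUS identifies the port differently again
    }
    key = key_by_source[source]
    return {"interface_id": record[key], "source": source}

records = [
    ({"ifIndex": 7}, "snmp"),
    ({"if-index": 7}, "yang"),
    ({"ingressInterface": 7}, "ipfix"),
    ({"NAS-Port": 7}, "radius"),
]
normalized = [normalize_interface(r, s) for r, s in records]
# All four records now agree on one interface ID.
assert {n["interface_id"] for n in normalized} == {7}
```

With a shared data model the mapping table disappears, which is the point the speakers make next.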
B
So
what
are
the
solutions?
Solutions
are
okay,
I'm,
going
to
develop
an
expensive
mission
function
that
will
match
information
from
the
if'
index,
the
eyes
index,
and
if
it
come
from
a
cystic,
I'll
do
a
grep,
and
hopefully
people
will
have
implemented
the
right
interface
naming
convention,
and
if
this
is
net
floral
say
well:
okay,
there
is
extra
semantics
there
or
I'll
be
using
the
same
data
model
in
order
to
have
the
same
semantics
directly
they
one
and
as
wide
as
data
model.
Your
management
is
required.
What
is
important
is
the
semantics
right.
B
The next thing we want is analytics-ready data. Okay, great, I've got the time series of a counter. Okay, where does it come from? What is the platform information? Which platform is this, which OS version? Is there a deviation in YANG? Is this a specific image from engineering? This information must follow the streaming information, right. Same thing about the data manifest: the how and the when the data were measured.
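The manifest idea above can be sketched in a few lines. This is a hypothetical message layout, not any standard format: every streamed sample carries a small manifest describing where and how it was measured, so the collector can still interpret the time series later.

```python
# Sketch with invented field names: attach a manifest to each streamed sample.
import json
import time

def make_sample(counter_path, value, platform, os_version, yang_deviations):
    return {
        "path": counter_path,
        "value": value,
        "timestamp": time.time(),
        "manifest": {
            "platform": platform,            # which box produced this
            "os_version": os_version,        # which OS version
            "yang_deviations": yang_deviations,  # deviations change semantics
        },
    }

sample = make_sample("interfaces/interface[name=eth0]/in-octets",
                     123456, "router-x", "17.3.1", [])
msg = json.dumps(sample)
assert "manifest" in json.loads(msg)
```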
B
Now
you
could
say
that
data
models
are
defines
what
you
want
to
stream
right,
but
it's
equally
important
how?
How
do
you
want
to
consume
information?
And
this
is
tuning
aspect,
so,
let's
focus
a
tuning
aspect.
There
are
plenty
of
tools
for
data
models.
Right
I
could
have
like
15
slide.
Let
me
show
just
couple
of
them
be
I'm
to
show
us
in
a
model
to
the
three.
B
There
is
this
catalogue
there.
That
gives
you
all
the
models
that
you
could
think
of
right,
including
the
one
from
different
SDOs
and
I,
will
come
back
that
point
right.
There
is
I
Triple,
E
and
math
and
Bob
and
forum,
and
all
this
so
there
are
tours
for
that
for
telemetry.
There
are
some
tools,
we're
not
there.
Yet
this
one
is
the
advanced
that
come
Explorer.
B
You
just
go
on
a
device,
and
then
you
see
okay,
what
can
you
stream
to
me
and
from
there
you
subscribe
and
you
get
all
information
directly
to
you,
the
collector
right
another
one.
This
one
is
like
11
days
old,
Telegraph
right,
if
you,
if
you
hated
pipeline
this
one,
maybe
is
your
friends:
that's
the
way
to
basically
have
model
driven
telemetry
with
what
G,
RPC
dialing
dial
out
with
TCP
with
gin.
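The dial-out idea mentioned here can be shown in miniature. This is a toy sketch, not Telegraf or any real plugin: the router side initiates the TCP connection and pushes a JSON-encoded sample, and the collector just listens; real pipelines add framing, gRPC transports and binary decoding.

```python
# Toy "dial-out" telemetry: the device connects out to the collector and pushes.
import json
import socket
import threading

srv = socket.create_server(("127.0.0.1", 0))   # collector listens on an ephemeral port
port = srv.getsockname()[1]
received = []

def collector():
    conn, _ = srv.accept()
    # one newline-delimited JSON sample per line
    received.append(json.loads(conn.makefile().readline()))
    conn.close()

t = threading.Thread(target=collector)
t.start()

# the "router" side dials out and pushes one counter sample
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(json.dumps({"path": "in-octets", "value": 42}).encode() + b"\n")

t.join()
srv.close()
assert received[0]["value"] == 42
```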
B
This one too; I mean, yes, we all want you to go into this data model driven management and telemetry, right. This one will help the operators to move away from SNMP MIBs and OIDs to YANG telemetry. So it's a mapping, to say: if you were using this ifIndex and these counters, these objects in SNMP, this is the mapping to YANG. So there are some tools, but we are at the beginning.
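The mapping tool described above boils down to a lookup table. A minimal sketch, assuming a table like the following (the MIB objects are from IF-MIB, the YANG paths loosely follow ietf-interfaces; the exact pairs shown are illustrative):

```python
# Hypothetical SNMP-object-to-YANG-path mapping table, the kind of aid
# that helps operators migrate from MIB polling to YANG telemetry.
MIB_TO_YANG = {
    "IF-MIB::ifHCInOctets":  "/interfaces/interface/statistics/in-octets",
    "IF-MIB::ifHCOutOctets": "/interfaces/interface/statistics/out-octets",
    "IF-MIB::ifOperStatus":  "/interfaces/interface/oper-status",
}

def to_yang(mib_object: str) -> str:
    """Return the YANG path to subscribe to instead of polling the MIB object."""
    return MIB_TO_YANG[mib_object]

assert to_yang("IF-MIB::ifOperStatus").endswith("oper-status")
```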
B
Now, if I try to summarize where we are in terms of network telemetry, at least one part of it, because Frank will be covering the other part: on the models, we are there. We have the specifications of YANG, of NETCONF, of RESTCONF, et cetera. The tools: we are there, right; we've got reference code. The industry coordination: you've seen that with the catalog there, where we try to correlate what's happening in open source, in SDOs, even for vendors, etc. So what you see as a theme throughout this session
B
Is
it's
not
good
enough
to
just
do
ITF
specifications?
We
need
specification,
choose
code
and
pollination
industry,
because,
if
I
look
at
the
right-hand
side
for
telemetry,
well
guess
what
we
don't
have
the
specifications
yet
after
four
years
in
the
ITF
right.
However,
we
got
some
tools
there
and
we've
got
some
code.
How
come
because
the
two
that
are
in
there
and
the
coaches
in
there
is
not
based
on
the
ITF
strategic
edge
that
we
don't
have
yet
there
are
based
on
something
else.
Does
it
matter?
Maybe
not.
A
Maybe it's easier now to go and condense this overall thing a little. So: what we did is go and collect information from an individual device and reason about that. One step up is, well, you might want to go and get a little bit of information about the overall network and how data has progressed, and we've been using ping and trace for quite some time, and, well, more recently
A
What
was
recently
a
couple
of
years
ago,
even
we
did
run
way,
active
measurements,
protocol
and
two-way
act
of
measurement,
where
you
basically
have
a
control
channel
between
client
and
server
and
then
instructing
how
the
test
would
go.
And
then
you
get
more
accurate
information
between
these
two
devices.
I
think
what
I
want
to
go
highlight
these
tools
are,
there
were
specified
here
and
the
other
good
thing,
and
that's
what
Benoit
was
referring
to
in
order
to
get
these
things
deployed
and
useful.
A
We
need
something
that
looks
like
a
reference
implementation
so
that
people
can
go
and
consume
it.
Om
was
done
relatively
early
on
as
far
as
Internet
to
there
was
also
t1
code
out
there
you
can
grab
it,
you
can
containerize
it
and
you
can
ship
it
on
well
any
platform
that
runs
containers,
whether
it's
a
router
or
an
end
host.
You
can
use
that
to
do
real,
accurate
measurements.
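The two-way measurement idea is easy to show in a toy form. This is emphatically not the real TWAMP protocol, just its shape: a sender timestamps a probe, a reflector echoes it back, and the sender computes round-trip time from its own clock only.

```python
# Toy two-way delay measurement in the spirit of TWAMP (not the real wire format).
import json
import socket
import threading
import time

refl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
refl.bind(("127.0.0.1", 0))
port = refl.getsockname()[1]

def reflector():
    data, addr = refl.recvfrom(1024)   # echo the probe back unchanged
    refl.sendto(data, addr)

t = threading.Thread(target=reflector)
t.start()

snd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t0 = time.monotonic()
snd.sendto(json.dumps({"seq": 1, "t0": t0}).encode(), ("127.0.0.1", port))
echo, _ = snd.recvfrom(1024)
rtt = time.monotonic() - t0            # round trip measured on one clock
t.join()
refl.close()
snd.close()
assert json.loads(echo)["seq"] == 1 and rtt >= 0
```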
A
These
days,
fine
now
up
leveling
the
conversation
a
little
bit,
so
we
did
observe
and
go
and
do
what
people
typically
refer
to
as
streaming
telemetry
on
a
per
device
basis.
So
getting
all
these
250,000
counters
out
of
the
box
or
a
subset
of
those
we
can
do
active
throwing.
But
there
is
something
missing
right.
So
how
do
you
measure
the
life
user
traffic,
we're
just
kind
of
moving
into
the
domain
that
I
care
about,
and
that
is
in
sitio
a.m.
A
What we decided early on is: well, we don't want to go and get locked into one or the other, well, what I would call carrier protocol, because many people are on different protocols; but we want to go and have one set of data fields that everybody that is subscribing to this particular technology would support. So we started a draft, and that draft is adopted in IPPM, where we're just defining the data fields of what you carry, i.e. timestamps, node ID, proof of transit marks, sequence numbers if you need them, and the likes.
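The per-hop data fields listed here can be sketched as a packed record. The layout below (node ID, timestamp, sequence number in fixed-width fields) is illustrative only, not the actual IOAM wire format from the draft.

```python
# Illustrative packing of per-hop telemetry data fields; not the real IOAM format.
import struct

def pack_hop(node_id: int, timestamp: int, seq: int) -> bytes:
    # node-id (4 bytes) | timestamp (8 bytes) | sequence number (4 bytes),
    # all network byte order
    return struct.pack("!IQI", node_id, timestamp, seq)

def unpack_hop(blob: bytes):
    return struct.unpack("!IQI", blob)

blob = pack_hop(node_id=11, timestamp=1_553_772_600, seq=7)
assert unpack_hop(blob) == (11, 1_553_772_600, 7)
assert len(blob) == 16
```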
A
We can carry those things in various protocols, and I think that's the journey at hand that we're on. And, well, that's a journey, because there are many protocols, and there are many protocols that have the capability to carry metadata, and there are many protocols that we can go and insert that metadata into. And while I'm mentioning a few: there is a working group draft on NSH, and we are, well, debating how to get that into v6, and even more so into v4.
A
Before
is
hard
right,
everybody
runs
it,
but
nobody
well
officially
wants
to
go
and
touch
it
anymore.
Maybe
we
need
to
do
that
so
from
a
standardized
generalization
process
perspective,
we
want
to
go
and
draw
a
picture.
Everything
that
is
common
is
done
in
IBM
data
formats.
How
do
you
operate
the
whole
thing?
We
need
a
yang
model
for
the
whole
thing
to
end
last
point
in
order
to
operationalize,
I
am
and
well.
A
You
need
to
go
and
extract
the
data
from
the
box
again
something
uniform
so
that
other
people
can
go
and
digest
it,
because
that's
just
one
source
of
information
and
we
might
have
multiple
other
sources
of
information
as
we
will
go
and
progress
and
then
well.
You
need
to
go
and
do
the
uncap
game.
An
endcap
isn't
easy.
So
we
have
individual
working
groups
spinning
on
an
end
cap.
Then
you
go
back
and
forth
between
IP
PM.
It
would
be
nice
if
we
had
something
that
would
streamline
that
process.
A
We
don't
have
that
so
far,
but
from
a
from
a
messaging
perspective.
Well,
we
have
this
as
a
standards
effort.
How
do
we
get
there?
We
started
this
journey,
not
with
a
document.
We
started
this
journey
with
running
code,
so
when
we
first
started
to
talk
about,
I
am
at
ITF
in
Berlin
three
years
back,
not
sure
where
we
were
in
Berlin.
A
We
started
off
saying
well,
look
at
this:
we've
done
an
implementation
in
vector
packet,
processing
s
open
source
and
you
can
go
and
look
at
it
and
run
it
since
then
guys
in
Belgium
at
the
University
of
liège.
So
professor
Benoit
donae
and
an
team
they've
done
a
version
for
the
Linux
kernel
and
they're
continuing
to
evolve.
A
That
I
just
heard
that
they're
implementing
you're
full
version
of
the
data
draft
so
we'll
have
something
for
the
Linux
kernel
that
is
pretty
modern
very
soon,
and
people
have
been
embarking
to
go
and
put
their
overall
thing
into
silicon.
Based
on
that,
we've
done
an
open
daylight
implementation
for
kind
of
how
to
go
and
configure
the
whole
thing.
A
That's
based
on
a
really
old
release,
carbon,
so
that
came
out
a
while
back
and
it's
part
of
the
SFC
code
there
but
I
wanted
to
gonna
gain,
highlight
we
did
the
tool
chain
and
it
was
really
running
code
and
then
kind
of
how
they
embark
into
the
standards
journey.
And,
yes,
it
changed,
but
still
I
think
it
is
what
got
people
interested
the
next
steps
are
operationalizing
the
thing
and
I
think
we're
not
really
there.
A
We thought about that question, which is why we have a slide on it. So, yeah, we're doing well on, I think, IETF specs; it could go faster, but yeah. Coordination: there is coordination. Tooling-wise, yeah, not really; I could go and give it maybe even a little bit of a C, because there's something in OpenDaylight, but is it really maintained? Not so much.
A
So,
let's
go
and
uplevel
the
conversation
to
what
happens
at
application
layer
and
how
do
we
rope
application
visibility
into
the
overall
flow
so
that
we
do
something
that
is
not
there
only
for
the
art
network
operator,
but
also
for
the
app
developer
and
the
guy
who
runs
the
app
be
the
CI
CD
person
or
some
other
operation.
Your
team,
because
the
funny
thing
is,
if
you
look
at
the
questions
being
asked
there
are
very
similar
so,
which
is
why
well
look
at
them
right.
My
requests
are
slow.
Why?
Who
do
I
need
a
blame?
A
The database lookup is slow. Why? So, all kinds of these "why" questions which, when it comes to it, I can't really reason about: why is my application doing something that I didn't want to have? And if you look at the landscape today from an application developer's perspective, you're doing stuff in multiple frameworks, and you have multiple messaging systems underneath. Yes, well, we're moving to a world where everything is a little bit easier with containerization, everything runs on Kubernetes eventually, and yada yada yada.
A
We
need
to
go
and
get
there
right,
not
every
single
request
yet
is
based
on
HTTP,
but
what
you
do
as
an
application
developer
is
today.
You've
gotta
go
and
pretty
much
marry
yourself
with
one
of
the
many
frameworks
on
the
right
hand:
side,
because
you're
gonna
go
and
instrument
your
code
with
their
api's.
Then
you
run
their
libraries
to
go
and
get
the
thing
into
their
particular
tracing
environment
and
stats
environment.
A
And
that
means
well,
you
go
and
if
you
create
something
you
go
with
that
marriage
and
you
keep
with
that
marriage
for
quite
some
time
and
breaking
up
is
hard
can
be
done
like
in
the
real
world,
but
it's
hard.
So
if
you
look
at
this
overall
picture
a
little
bit
like
you
have
the
developer
guy
that
needs
to
go
and
instrument
his
code.
Then
you
have
an
instrumented
library
that
typically
comes
with
your
vendor
package.
A
Today,
then
you
have
something
that
exports
this
whole
thing
and
in
many
cases
even
the
tracing
backend
is
linked
to
the
agent
at
the
exporter.
So
all
of
that
is
one
particular
vertical
ecosystem
and
well
then
you
have
the
consumer
on
the
other
side,
so
we
we
don't
only
have
like
in
the
network
operating,
say
a
space,
the
operator
but
world.
So
half
the
developer.
That
has
a
savior
and
the
question
is:
can
there
be
standards
because
yeah
well,
multiple
silos,
everybody
doing
something
in
a
similar
way?
A
What Ben Sigelman started off on initially was OpenTracing, and OpenTracing just tries to standardize the API; that's what they started off with, so that you have a standard API and you can have multiple people implementing the backend of that particular API. So you instrument your code, then you get from somebody else an instrumented set of libraries, so that you can then spit that data out. They're also trying to eventually get to harmonize what's going out from an export format perspective, but the focus is really the tracing piece, the tracing API piece.
A
Multiple
people
have
started
to
implement
that
I
said
well
and
so
I
goemon
started
this
whole
thing,
so
lights.
Tab
is
obviously
one
of
them.
That
supports
the
thing,
but
you
have
also
open
source
frameworks
like
in
C
and
C
F.
We
have
Jager
as
a
tracing
framework.
Jager
supports
open
tracing
and
multiple
others.
Like
pick
your
choice,
there
is
multiple
of
them
I'm
listing
a
few.
There
is
another
group,
also
in
C
and
C
F
same
forum,
different
efforts
that
tries
to
go
one
step
further.
A
Don't
only
do
the
API,
but
also
build
a
tool
chain
along
with
the
API,
so
that
I
not
only
give
you
the
API
and
then
say
well
vendor
a
vendor,
be
vendor
C.
Please
implement
that
in
your
particular
ecosystem,
but
go
and
build
me.
A
tool
chain
have
a
library,
an
instrumental
library
that
goes
hand-in-hand
with
the
API
and
an
export
infrastructure
that
can
link
to
the
various
back-end
systems
so
that
I
can
own
export
to
say
well,
Jaeger,
again
Prometheus
a
couple
of
open-source
environments.
What
also
well
more
like
proprietary
environments?
A
So how do these different guys apply in our framework? I tried to unpick that a little bit. OpenTracing is really trying to mostly focus on the API and how you instrument your app; that comes with libraries, which is why they have a say there, but it's a vendor that implements it. OpenCensus really tries to go and do the overall developer ecosystem tool chain that they're trying to build. And, yes, well
A
All
of
them
want
to
have
a
say
of
what
goes
on
to
the
wire
eventually
from
a
trace.
Theater
standardization
perspective.
Are
we
there
by
no
means
so
there
is
open
tickets
in
in
open
tracing,
for
instance,
on
how
to
represent
the
trace
data,
maybe
with
JSON
on
the
wire
and
then
well
you're
gonna
go
and
spit
that
out
into
your
tracing
back-end
and
that
needs
to
go
and
consume
that
and
expose
that
to
you
and
so
all
of
them,
despite
them,
quote-unquote
not
being
necessarily
completely
on
the
same
page,
have
the
same
tracing
model.
A
So
you
have
a
trace,
and
that
is
kind
of
here.
We're
just
querying
well
we're
trying
to
get
data
into
into
a
particular
cache.
So
authenticating
we're
issuing
a
cache
get.
Then
we
have
a
trace
on
that
trace
reaches
out
to
a
back-end.
My
see
a
query
that
ghost
comes
back,
so
you
see
that
darker
gray,
that
goes
off
to
a
different
box,
and
then
we
finally
can
go
and
update
the
cache.
A
The
individual
steps
are
so
called
spans
and
you
can
have
AI
hierarchy
so
that
one
trace
has
a
child
trace
or
one
trace
as
a
parent
parent
trace,
so
that
I
can
go
and
correlate
these
individual
things
later
on
from
a
tooling
perspective
into
one
big
picture,
that's
awesome
and
you
have
tools
like
Jaeger
gain
the
CNC
F
project,
where
you
can
go
and
look
at
these
traces
and
then
you
can
drill
down.
So
you
can
see
exactly
on
a
per
request
basis.
What
happens
at
what
layer,
when
do
I
submit
a
Redis
request?
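The parent/child span model described here can be shown in a few lines. A minimal sketch, not any real tracing library's API: each unit of work records its own duration and its parent, so a tool can later reassemble one request into a tree.

```python
# Minimal parent/child span model, in the spirit of the tracing tools discussed.
import time

class Span:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)
        self.start = time.monotonic()

    def finish(self):
        self.duration = time.monotonic() - self.start

trace = Span("cache_get")                # root span: the incoming request
db = Span("mysql_query", parent=trace)   # child span: the backend lookup
db.finish()
trace.finish()

assert db.parent is trace and db in trace.children
# the root span covers its child, so its duration is at least as long
assert trace.duration >= db.duration
```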
A
The
HTTP
HTTP
GET
call.
So
I
trace
all
the
way
down,
even
with
kind
of
details
on
kind
of
how
long
the
call
took
from
when
doing
so.
I
get
really
detailed
information
on
how
that
thing
progressed.
So
that's
all
cool
now,
as
I
said
earlier
on,
maybe
the
HTTP
GET
request
took
us
652
milliseconds,
where
did
I
spend
the
time?
Do
I
blame
Network
TCP
stack
if
the
network
which
network
element,
where
did
the
lake
you
up?
A
What
can
be
done
attribute
be
attributed
to
the
network
and
how
so
could
we
marry
that
and
that's
a
discussion
that
the
open
tracing
people
and
the
open
census
people
are
really
interested
in
heaven?
But
the
question
is:
how
do
you
go
and
get
the
trace
ID
that
you
have
at
the
application
layer
down
into
the
network
layer
and
vice
versa,
and
make
the
to
speak
a
similar
language
so
that
you're
able
to
able
to
correlate
what
happens
on
one
layer
to
the
other
layer?
A
So
it
would
be
really
cool
if
you
issue
that
TCP
or
their
their
RPC
request-
and
you
have
another
try
segment,
a
child
trace
that
tells
you
yeah.
We
you
went
into
the
TCP
stack,
then
you
went
across
router
one
two
three!
You
got
that
the
the
query
on
the
other
side
responded
to,
and
then
you
went
wire
route
or
four,
and
then
you
went
back
so
that
would
have
ultimate
visibility
into
how
these
individual
steps
would
work
out.
The
problem
is
well
I.
Have
my
trace
ID
in
the
HTTP
requests?
A
Eventually,
how
do
I
see
that
at
the
network
layer
there
might
be
multiple
HTTP
going
over
the
same
TCP
connection?
So
I
can't
really
do
a
one-to-one
mapping.
So
let's
say
well
I
pass
that
on
as
an
option
if
I
open
the
socket,
so
something
needs
to
happen
there
right.
So
we
have
to
go
and
answer
that
question
eventually
if
we
want
to
go
and
provide
for
that
level
of
visibility,
that
everybody
would
appreciate
the
developer,
but
as
well
as
the
operator.
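The one-to-one mapping problem can be put in miniature. This is a thought-experiment sketch, not a real socket API: several HTTP requests share one TCP connection, so a per-request trace ID cannot be recovered from the 5-tuple alone; the hedged idea from the talk is to hand a correlation ID to the transport when the socket is opened.

```python
# Hypothetical illustration of the trace-ID-vs-connection mismatch.
conn_metadata = {}   # what the network layer could see, keyed per connection

def open_socket(five_tuple, correlation_id):
    # stand-in for passing an ID down at socket-open time (e.g. a socket option)
    conn_metadata[five_tuple] = correlation_id

def send_request(five_tuple, trace_id):
    # many application trace IDs ride the same connection; only the
    # connection-level correlation ID is visible below layer 5
    return conn_metadata[five_tuple], trace_id

tup = ("10.0.0.1", 40000, "10.0.0.2", 80, "tcp")
open_socket(tup, correlation_id="conn-abc")
seen = [send_request(tup, t) for t in ("trace-1", "trace-2", "trace-3")]

assert {c for c, _ in seen} == {"conn-abc"}   # the network sees one ID
assert len({t for _, t in seen}) == 3         # the app had three traces
```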
A
The
other
question
that
we
have
to
answer
is:
can
we
get
to
a
format
where
the
backend
system
can
ingest
and
digest
that
data
in
a
relatively
correlated
and
uniform
way?
So
if
we're
exporting
Trice
information
from
iom
net
flow
information
and
well,
these
tracing
information-
and
these
guys
are
just
about
to
embark
to
standardize
that
whole
thing?
A
Well,
wouldn't
it
be
nice
if
we
would
be
able
to
go
and
correlate
all
that
and
have
a
full
view,
I
believe
so,
so
how
do
we
correlate
these
metrics
to
the
backend
format?
There
is
an
open
discussion.
It's
still
an
open
issue
from
an
open
tracing
perspective.
It's
open
for
two
years
now
and
well.
What
I
missed
out
on
this
discussion
is
what
happens
at
the
transport
layer.
A
So if you enlarge the picture, then you put the network into this whole trace data thing: we have trace data from the application side, from these agents, but we also have network trace data, and that would all be going back into the back-end system, so that we can go and correlate it all. That vision is there; everybody could say, well, we can export into Kafka and correlate the various topics, but, yeah, we need to go and get that done.
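The correlation step sketched above amounts to a join on a shared key. The topic names, record fields and the correlation key below are all hypothetical; the point is only that application trace records and network records arrive on separate streams and meet in the back-end.

```python
# Sketch: join trace records and network records arriving on separate "topics".
from collections import defaultdict

def join_on_key(streams):
    """Group records from all streams by a shared correlation key."""
    joined = defaultdict(dict)
    for topic, records in streams.items():
        for rec in records:
            joined[rec["corr_id"]][topic] = rec
    return dict(joined)

streams = {
    "app-traces": [{"corr_id": "abc", "span": "cache_get", "ms": 652}],
    "net-ioam":   [{"corr_id": "abc", "path": ["r1", "r2", "r4"]}],
}
view = join_on_key(streams)
# one correlated view: the slow span and the network path it took
assert view["abc"]["net-ioam"]["path"][-1] == "r4"
assert view["abc"]["app-traces"]["ms"] == 652
```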
A
If
you
look
at
that
scorecard
that
we
had
earlier
on
and
then
why
I
don't
know
whether
you
want
to
go
to
ask
me
again,
but
I
have
one
yeah
who
I
have
one.
So
the
observation
is
and
I
think
that's
very
much
driven
from
an
app
development
perspective
as
a
developer.
Today,
I
have
to
choose
my
framework,
and
that
means
everything
is
driven
by
the
tool
chain:
open
census,
open
tracing
w3c.
All
these
guys
just
came
across
because
we
have
this
landscape
of
of
stovepipes
that
we
want
to
go
on.
Well,
basically,
teardown.
B
I've been there with NetFlow, right, years ago, where we've got so many flows; it used to be difficult, and we're going to face the same thing. Take the 250,000 counters you can sense in a router, in a time series, and make sense out of it. So what we have to do is look at the service assurance part, right.
B
We
need
to
tag
in
telemetry
the
context
information
directly
so
that
whenever
we
export
it,
we
know
what
to
look
at
for
specific
service
and
we've
got
the
three
information
of
all
services
and
we've
got
the
link
with
your
pic
information
right
and
we've
got
the
full
degree
position
on
where
the
problem
is.
This
is
tying
telemetry
to
our
intent
right,
starting
with
service
assurance
right.
Yes,
the
dream
and
the
vision
is
to
do
the
closed
loop
automation
we
want
together.
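The tagging idea above can be sketched quickly. The service names, counters and subservice labels below are invented for illustration: each exported sample carries its service context, so a collector can decompose a service-level symptom down to the device counters involved.

```python
# Sketch of tagging exported telemetry with (hypothetical) service context.
def tag(sample, service, subservice):
    return {**sample, "service": service, "subservice": subservice}

samples = [
    tag({"device": "pe1", "counter": "in-errors", "value": 9},
        service="l3vpn-blue", subservice="pe1:ge-0/0/1"),
    tag({"device": "pe2", "counter": "in-errors", "value": 0},
        service="l3vpn-blue", subservice="pe2:ge-0/0/3"),
]

# decompose: which subservice of l3vpn-blue is showing errors?
suspect = [s["subservice"] for s in samples
           if s["service"] == "l3vpn-blue" and s["value"] > 0]
assert suspect == ["pe1:ge-0/0/1"]
```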
B
Maybe you want to run, but you want to walk before you run, and get all the building blocks there. Now, something we're not going to spend time on is the telemetry definition, because I like the simple one we had at the beginning: a collection process of useful operational data. Now, when I speak to some executives, some VPs, they don't care about my little time series of a counter, right. They want to ask: what is the business impact; sorry, which business impact can you have by sending telemetry, right, which useful information will you give me for my business?
B
Will
you
gave
me
for
my
business?
That's
why
we
made
it.
We
make
we
made
up
that
new
keyword,
which
is
not
their
business
telemetry,
just
to
say
if
you
speak,
Tunis
VP!
This
is
what
he
wants
to
hear
and
things
such
as
okay
in
networks.
What
type
of
devices
have
operators
right
is
there
dependencies
can
I
do
combined
cells?
What
about
licensing?
If
the
licensing
fine
can
I
sell,
some
more
licensing
can
I
enforce
it.
What
about
the
feature?
Usage
can
I.
Finally,
remove
the
6:25
feature
right.
What
are
the
customers
using?
B
This
is
the
type
of
things
that
they
mean
by
telemetry
they're
right.
This
is
reported
value
and
that's
something
that
we
keep
forgetting
right.
Writing
a
small
spike
in
the
ITF
is
great,
but
if
you
don't
see
the
big
value
it
won't
work,
the
good
news
is
that
whenever
they
speak
to
me
about
their
business
telemetry
and
they
call
dr.
Winfrey
by
the
way
right
while
we
call
its
operational
telemetry,
whatever
the
good
news
is
that
we
speak
about
the
same
information.
A
That one piece I'll cover next, yeah. So I think we need to figure out the what, and I think we spend a fair amount of time on the what; and the what is described by a model, our favorite, whatever, YANG, whatever; if you want to go and express it somewhere else, it wouldn't matter that much. I think we need to really focus on the how, because the how is, from a developer perspective, how I consume things. If I want to go and consume the what, the how needs to be figured out, and those two things go hand in hand. If you lose on the how, not on the what: if I can't consume it, no matter how well you describe it, I'm not going to go and consume it. And with the what and the how, we need to figure out the "who for", right. I think those are the three questions: who do we need to go and talk to, who do we need to convince, who do we need to liaise with, as an IETF, in order to go and get these things done.
B
Now, we need to go to all the different SDOs and to the consortiums, etc., and unless we've got those four different conditions, I'm doubting that we're going to be relevant; or actually, it depends whether your glass is half full or half empty, right. But I believe this is what we have to do, as an IETF organization, to get our solutions adopted.
A
Bit
into
time
where
we
had
more
running
code
and
going
that
hand
and
hand
with
the
documentation
so
that
you
have
a
bar
where
you
say
well,
if
you
have
running
code
and
the
spec
it
mace
might
get
more
air
time,
it's
more
relevant
to
to
maybe
discussions
that
we're
having
here
and
that
needs
to
be
balanced
right.
So,
in
certain
place
cases
open
source
got
it
wrong
and
we're
stuck
with
the
wrong
framework,
because
well
it's
the
only
guy
in
town.
A
So
there
is
not
that
that
is
not
an
either
or
that
it's
not
a
a
discussion
that
well
do
this
or
do
that,
but
I
think
we
have
to
go
more
carefully.
Consider
how
we
bring
these
three
things
together,
how
we
more
frequently
and
better
Li
ace,
also
at
other
quote-unquote
standard
bodies
that
might
not
be
called
an
sto
CN
CF.
Does
these
things
they're,
not
an
S
do
but
people
care
about
it.
D
But what was missing in the bigger picture is: if you plug all these tools into a bigger architecture, we know it as a service mesh, like Istio as an example. So for application layer networking that framework is already established, and CNCF is rallying behind it, increasingly, to have their tools graduate from that machinery, like Kubernetes and Istio and more. So, in the bigger vertical stack, we need to correlate, or create a link, between layer seven application infrastructure, or networking, and layers
D
Two
and
three
that
Frank
and
I
know
you
mentioned.
There
is
a
disconnect,
absolutely
that's
a
great
point
and
then
how
to
linkage
these
two.
Of
course,
your
colleague
ed
is
trying
to
create
something
called
network
service
mesh
and
then,
where
that
ITF
can
come
in
and
perhaps
take
some
set
of
requirements
from
the
SDU
and
network
service
mesh
and
all
other
components
that
you
mentioned,
create
a
list
of
those
core
requirements
and
identify
the
gap
and
based
on
the
set
of
requirements
develop.
Probably
some
new
API
protocols
or
protocol
extensions,
Thank.
A
Maurice, it's there; the problem is there, and it surfaces very much, as you say, in network service mesh, where we are bringing together, well, the service mesh aspect, which is an application layer concept, basically from Kubernetes, and the network world. And the guys in network service mesh would love to have something like OpenCensus or OpenTracing integrated, so that you have visibility across the board, all layers. The problem that we have there is in the first bullet: how do you correlate?
A
What
do
you
use
as
a
correlation
ID,
because
your
try
segment
ID
is
something
that
only
lives
up
to
layer
5.
We
have
something
else
at
the
network
layer,
but
we
have
no
way
to
DES
to
unlink
that
and
that's
an
ongoing
discussion.
So
thanks
for
the
question,
it's
there
people
care
about
it.
We
don't
have
a
solution.
E
Thank you, that was a very good presentation, I really liked it. Our members, you know, operate large networks, and something like this, I think, is interesting. I don't have visibility on how they manage their own networks internally, so, you know, I'm spitballing here, but there could be interest in the industry; you might have open ears if they listen to what you're proposing, and the advantage there is that we have our networks, and they tend to be a little more controlled.
C
So say I want to measure the network. And these days, you saw that there are many different ways of measuring the network, and you can try them, but there is really not a unified way. The OAM people are trying to do that and provide some of those tools. But, as you say, there is the what, but there is not the how. I know what the what is: I want to measure the network.
C
What exactly I want to measure will depend on a case-by-case basis; give me a common interface for the network measurement. And then, not the vendors; although some of the vendors can also measure the network for their own performance, you know, for their own performance reasons, to be able to improve the operation of the equipment; but also the operator will be able to say: I'm measuring this, and how I'm using those measurements for my own business application is something that I will decide.
C
We have many YANG device models in the IETF, but we have very few service models, because we don't have the right participation. So when we are thinking about the idea of specification, we have to be careful which part we are working on, and not try to go into an area where we don't have the expertise.
B
I would agree with that. Now, the point I was making about the feedback loop, the closed loop: this is binding the service KPI to your configuration and to your telemetry; this is the way to do it. What you're also asking about is the measurement, let's say at the IP layer, like the active probing that Frank mentioned: yes, we should have the same interfaces there, like, you know, what we saw yesterday, or in-situ OAM.
C
Routing is a service, and it's being consumed by the higher level services. So when you talk about a service: a service is anything that goes across multiple devices. There are some services that can be specific to a single device, but for anything that you're running between multiple devices, you want to measure how that service is running across them, be it routing or something else, like an L2 or L3 VPN, which is using the routing in order to deliver that service. So there are different qualifications of services, and it's not overloaded.