From YouTube: SIG Observability 2020-10-13
D
Okay, looks like people are still joining. I know Richie tries as well, but Zoom doesn't always work best with Linux.
D
Oh, which is here. Okay, so I was wrong anyway. So welcome, everyone. Please make sure you open the doc; there's an attendees list, so please add yourself. Let me actually send you the doc as well, in case you don't have it.
D
So we have a couple of agenda items today. If you have anything else, just drop it there. The first one is from Steve, I suppose.
F
So, based on the conversation last time, there were requests around semantic conventions and OpenTelemetry. I figured I would do a quick introduction of what the project is, in case people are not aware, and then talk about semantic conventions at a very high level. The goal is only to take about 10 minutes here, so I'm sure there'll be action items. I'm going to link a presentation I did earlier this year.
F
For people curious, OpenTelemetry is the joining of two other projects in CNCF. You may be aware of OpenTracing, which was an incubating CNCF project. There was also another one called OpenCensus, which came out of Google, Microsoft, and Omnition. These two projects combined to form OpenTelemetry.
F
There is backwards compatibility with OpenTracing and OpenCensus through the use of shims, but going forward all development will be on OpenTelemetry. Now, what is the OpenTelemetry project trying to solve? I like this table representation: if you've heard of observability, you've heard of the three pillars of observability (traces, metrics, and logs). These are just different data sources.
F
OpenTelemetry is looking to solve all of this. The best way to think about it is: anything you do to instrument, generate, and emit telemetry data in your environment, OpenTelemetry is trying to provide a solution for. Where it draws the line is that it does not provide a backend. It supports a variety of different open source and even commercial backends, but it does not actually provide a backend itself; that is not part of its scope.
F
The initial focus for OpenTelemetry was very much on traces and metrics, so log support is still very early days, but it is part of the charter and is starting to be incorporated into the project. Now, taking that table and representing it as what you'll actually find in the repositories of the OpenTelemetry project org:
F
You have the specification, which is the foundation; that's actually where the semantic conventions live and where we'll be spending time today. There are three basic components here: the API, the SDK, and then all the data stuff, which is the semantic conventions and the protocol. On top of the specification, there are two other main components. There's the collector, which is a way of receiving, processing, and exporting generated telemetry data.
F
It's actually a single binary that can be deployed either as an agent or as a standalone service, and it is the default destination for OpenTelemetry client libraries. The client libraries are a single way to instrument your app and to emit traces and metrics, and eventually logs. They support both manual instrumentation, which means you go in and make code modifications, and automatic instrumentation, which means you change runtime parameters or add dependencies.
F
So there's a lot of active work going on right now, as GA is pretty imminent at this point; we're probably about, I don't know, four to five weeks away. An interesting fact for everyone: OpenTelemetry is actually the second most active project in CNCF today, behind only Kubernetes. This is according to CNCF DevStats; you can actually go look this up. It's basically a Grafana front end to some Prometheus metric data that CNCF collects.
F
So the project is very active. And then the final slide, basically, is that there's a lot of contribution and adoption going on here. We're seeing cloud providers, we're seeing vendors, and we're seeing end users all get together. I think this speaks to the problem that OpenTelemetry is trying to solve, and we're seeing other industry products get behind it as well.
F
For example, Jaeger already announced that they are moving from the Jaeger collector to the OpenTelemetry collector. Fluent Bit has added log support to the collector. There are roadmap items to add OpenTelemetry client library instrumentation into Envoy and Spring. So there's a lot of cool work going on here, and there are links to some additional reading as well. Question in the Slack: what is the OpenMetrics roadmap, and how do we better incorporate OpenMetrics into this model?
F
So far the focus has been on supporting the Prometheus endpoint, but nothing specific to OpenMetrics. Question: which vendors do you have on board?
F
You name the vendor, I'm sure they're on board: New Relic, AppDynamics, Dynatrace, Honeycomb, Splunk, Lightstep, Elastic, Sentry... I don't know, there's a lot, but pretty much any major vendor in the space. All the open source ones are supported too, so Jaeger, Zipkin, Prometheus. Pretty much any vendor in the tracing or metrics space has a way of consuming this data today.
F
Let's do it. As I mentioned, there's a specification in the specification repository. There's a huge table of contents, but one section is all around data specifications and semantic conventions. When you click that link, you will see that there are three different types of conventions defined today: spans for spans, metrics for metrics, and then this thing called resource.
F
I should explain resource. Basically, OpenTelemetry's approach is that anything infrastructure-like is considered a resource. So if I have an application that emits a metric or a span, well, that application is running on something. Maybe it is a Docker container on a Kubernetes cluster that's in an AWS region as part of an EC2 instance. That chaining is considered resource information, and OpenTelemetry has conventions around tagging what a resource is, so you can identify objects regardless of their data source. Spans and metrics are kind of self-explanatory.
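The resource chain described here can be pictured as a flat set of key-value attributes. The sketch below is illustrative only, not the OpenTelemetry SDK API; the attribute keys follow the published resource semantic conventions, but the values and the helper function are invented.

```python
# Illustrative only: resource attributes describing *where* telemetry comes
# from, using OpenTelemetry resource semantic convention keys.
# The values are invented placeholders.
resource_attributes = {
    "service.name": "checkout",             # the application itself
    "container.name": "checkout-7f9d",      # ...runs in a Docker container
    "k8s.pod.name": "checkout-7f9d-abcde",  # ...inside a Kubernetes pod
    "k8s.cluster.name": "prod-cluster",     # ...on a cluster
    "cloud.provider": "aws",                # ...hosted on a cloud provider
    "cloud.region": "us-east-1",            # ...in a specific region
    "host.id": "i-0123456789abcdef0",       # ...on an EC2 instance
}

def resource_chain(attrs: dict) -> list:
    """Order the infrastructure layers from the app outwards, keeping
    only the convention keys that are actually present."""
    layers = ["service.name", "container.name", "k8s.pod.name",
              "k8s.cluster.name", "cloud.region", "host.id"]
    return [(k, attrs[k]) for k in layers if k in attrs]
```

Because the keys are standardized, any consumer can walk the same chain without knowing which SDK or backend produced the data.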
F
Eventually, logs will be added; as I mentioned, it's still early days for that in the collector. From a resource perspective, there are a bunch of unique names here. I won't have time to cover all of it, but as you can see, for each subcategory (like, what is a service) there are what are called attributes. Attributes are like labels, key-value pairs that you add onto things. For each of these different conventions you'll see a description,
F
what type it is, and whether it's required, which means it needs to be included as part of the telemetry data that's emitted, or whether it's optional, in which case you can't rely on it being there. The primary reason these semantic conventions exist is that normalizing how you emit telemetry data gives you the ability to have a vendor-agnostic solution.
F
So if everyone knows that, in this case, the service name and instance ID are required and the other fields are not, then you can take a dependency on those required fields and make them meaningful regardless of the backend you send the data to. As for the non-required fields, well, some vendors actually do care about them; for them they're effectively required. Other non-required fields are just additional metadata
F
that can be beneficial, but could also increase cost or cardinality, depending on your use case. If you drill into these, you can get down to very specific things. Like I mentioned AWS, or different cloud providers: there are conventions for those. Or something like Kubernetes, very common in cloud native and CNCF: you'll see that there are naming standards around things like namespaces or pods or controllers.
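The required-versus-optional distinction can be made concrete with a small check. This is a hypothetical validator, not part of any OpenTelemetry library; it assumes, as described above, that `service.name` and `service.instance.id` are the required service attributes.

```python
# Hypothetical validator: a backend can depend on required attributes
# being present, while optional ones are extra metadata that may add
# cost or cardinality.
REQUIRED = {"service.name", "service.instance.id"}
OPTIONAL = {"service.namespace", "service.version"}

def missing_required(attrs: dict) -> set:
    """Return the set of required attribute keys that are absent."""
    return REQUIRED - attrs.keys()

# Well-formed: both required keys present.
ok = {"service.name": "checkout", "service.instance.id": "pod-1"}
# Incomplete: has an optional attribute but misses a required one.
partial = {"service.name": "checkout", "service.version": "1.2.0"}
```

A check like this is what lets tooling stay vendor-agnostic: it depends only on the convention, not on any particular backend.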
F
There are tracing semantic conventions. These are things like: how do I know that thing over there is a database? How do I know I'm making a RESTful call? How do I know I'm talking to a serverless function? What is that message queue? These are normalized ways to tag on that information. One of the reasons this is powerful for distributed tracing is that I can actually infer services that are not instrumented. If my application calls a third-party database that I do not control, and I use these conventions,
F
I can actually infer that that is a database, that it had an error, that it has this latency, and I can show that in a backend of my choice, which is kind of cool. The metrics one, if you go to it, you'll see it has a very big TODO on it, so that one has not been merged yet. What I would encourage folks to do is actually look at what are called OTEPs.
F
OTEPs are OpenTelemetry Enhancement Proposals; basically, these are design docs, if you will. There's a variety of different proposals against all the different categories, traces, metrics, and logs being the primary ones. There actually is an OTEP for standard naming of runtime metrics, which includes the names as well as the labels.
F
The labels are the dimensions, the metadata you enrich with. This proposal outlines how to do this in different aspects, whether it be the collector or the client libraries, and there are a bunch of links as to how this is currently implemented in OpenTelemetry. But there is no finalized version of the metric semantic conventions, which I'm assuming this audience primarily cares about from a Prometheus perspective.
D
No, it's an amazing overview. So maybe a first question: can you link the PR into our documents, so we can take our time and review? I think this is pretty nice. I love the idea of proposals; it's super powerful on GitHub. Yep.
F
Yeah, so the way the project operates today is that every repository has its own set of approvers and maintainers, and you should think of each one as an independent project in many ways, shapes, or forms. Every team can decide what they want their cadence to be. In general, you'll find that the client libraries release anywhere from once a week to once a month; it generally falls somewhere in that realm. The collector is once every two weeks today, and the specifications are updated as needed.
F
All the SIGs meet at least once every two weeks; some meet every single week. The more interesting question is actually the one around vendor support. Let me just show that real quick. I'll use the collector as an example, because I'm most familiar with this aspect. There is this repository called opentelemetry-collector; it is the core collector repository.
F
Everything in here is open source, so you will find that it has support for Zipkin, Prometheus, Jaeger, Kafka, but there is no vendor stuff in this repository. Instead, what OpenTelemetry does (oops, I guess I didn't go to it) is have this notion of a contrib repository: for every core repository there is a contrib. The contrib repository is where vendor or commercial third-party stuff lives, or where non-common open source things live.
F
So if we go in here and look at the exporters (I mentioned a whole bunch of companies that are supported), you can see all the vendor stuff listed here. Contrib is a superset of core, so you get everything in core within contrib, plus all the additional stuff on top of that. Most of the client libraries do the same thing: you'll see a java and a java-contrib, a python and a python-contrib.
D
Nice, no, it makes sense. I mean, it's always kind of tricky, especially with, you know, Golang and stuff, how you add those plugins, and you know, there's like...
H
I have a question that I've gotten from a number of others, and I know we're going really fast here and this isn't a full OTel overview. But suppose folks have an existing Prometheus or Thanos or Cortex, or some sort of OpenMetrics remote-read/remote-write-based durable metrics backend. I understand that there are things going the other way, but what about things that are instrumented in, or surfacing, OpenTelemetry metrics via the OpenTelemetry APIs and using collectors, and things like that?
F
Yeah, so some of the remote write capabilities, not all of them, are there yet. Our answer today would be: you'll see official Prometheus support. Oh, I guess I'm in the wrong repository; let's go to the exporters. Exporters are the way you get data out, so you'll see the remote write exporter was just merged. Look at that, hot off the press. So these are ways you could configure it to send to a remote write destination.
F
So you basically read either the receiver documentation, on how you push or pull to get data in, or the exporter documentation, on how you read or write to push data out. It doesn't matter where OpenTelemetry sits in between: say you have an application that sends to a collector and the collector sends to Prometheus, or you have an application that sends to Prometheus and the collector scrapes Prometheus and gets the data back out.
F
And again, I'm happy to spend more time drilling into this. I know we have other topics for today; this is just meant to be more of an introduction. Feel free to put questions in the Zoom chat, in the Slack channel, or in the Google Doc. I'll take a look, and we can definitely do follow-ups on this.
D
And those semantic conventions, are they just suggestions, or will they actually be checked and validated at runtime? Yeah.
F
So maybe a good example would be looking at the tracing ones, since those are already merged. Let's use RESTful calls as an example.
F
There are trade-offs between these; you'll actually see a lot of discussion in the proposals that go up as OTEPs. Say: why is URL not required? Well, high cardinality would probably be a compelling reason. All the different unique values here: do they actually provide that much meaning, versus a normalized endpoint, versus the full URL or, more specifically, the URI?
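The cardinality trade-off with full URLs can be illustrated with a toy route normalizer: collapsing unique path segments so spans group by endpoint rather than by every distinct URL. The regex patterns and function name below are invented for illustration, not taken from any tracing library.

```python
import re

def normalize_route(path: str) -> str:
    """Collapse high-cardinality path segments (numeric IDs, long
    hex-like tokens) into placeholders, so spans group by endpoint
    rather than by every unique URL."""
    path = re.sub(r"/\d+(?=/|$)", "/{id}", path)
    path = re.sub(r"/[0-9a-f]{8,}(?=/|$)", "/{hash}", path)
    return path
```

With this, `/users/123` and `/users/456` both become `/users/{id}`: one low-cardinality attribute value instead of one per user.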
F
A lot of end users actually have their own client libraries and their own telemetry data today, and they're looking for paths forward as they move to a more open source and open standards approach. In the case of metrics, the OTEP is not actually a PR; this has been merged, right? Yeah, I'm not in pull requests, so this is actually merged as a standard for metrics, but not something that's been brought over to the metrics SIG, I'm guessing, and merged into the conventions.
F
That's why the metrics page is still a TODO here. This might also still be a TODO because the metrics specification was not yet in the stable state that the tracing one is, so this might be dependent on metrics becoming stable. I believe there's a PR open for that right now.
F
I don't know if there's something specific you want to look at here, but you can see that OTEPs will typically link to prior implementations. You'll see that there was a previous OTEP that tried to define some of these conventions.
F
So this one is both: it's supposed to talk about names, labels, and conventions for common instrumentation, but I believe if you actually look at it, most of this is metric names.
D
Yeah, makes sense. Okay, so, you know, my first question: I came from the Prometheus world, I'm a maintainer there, and to me, we are always fighting over the naming, you know; it kind of matters if it's a counter and then maybe a total suffix and stuff around that. So those names are totally new to me, but then, you know, I was not a heavy user of other monitoring systems.
D
So I guess it was kind of the combined experience of many, many vendors. And to me, there was a project which was trying to establish at least some of those naming rules, let's say, which is OpenMetrics. So how is this relevant to that? Did we manage to collaborate on this, or probably not? What's your take here? Not sure. Yep.
F
Community members, okay, cool. So the OpenTelemetry project has a governance structure, like most do. It also has a technical steering committee. Where is that... maintainers, trace approvers... oh, here we go: technical steering committee. So this is the current technical steering committee. This group, I know, has met with the OpenMetrics team multiple times. I do not know the outcome of those discussions. I am most familiar with Bogdan, so I would recommend reaching out to him; he would probably have the latest on what is going on with that.
D
Yeah, no worries, thank you. Yeah, to be honest, you know, SIG Observability is, I don't know, a very good place to actually have those discussions around the somewhat taboo topics of, you know, overlaps between projects and why we should collaborate. But yeah.
D
I think we should definitely talk about this a little bit more, but those semantics are, you know, one level of overlap: something that OpenTelemetry and OpenMetrics both already kind of specify in some way, but not everything, not the actual naming of, you know, CPU utilization. So I would be curious whether they conflict, because that would be kind of annoying, really. So yeah, that would be my kind of concern and something to discuss further. But overall, yeah, even in Prometheus
D
there are many semantics, let's say recommendations, that are just spoken and never actually written down, so it would be amazing; I would be super curious to get that and to review it. Yeah, thank you. And Richie is writing on the channel with some more information; I guess he cannot talk: "I joined the metrics call several times. My last status was that the plan for OpenTelemetry metrics was to support OpenMetrics as a first-class wire format, and naming was a huge part of the discussion."
F
Yeah, and there is a dedicated metrics specification SIG for people that are interested; there are links here.
F
Go join it to talk about that particular aspect, or there's the general specification meeting as well. But yeah, I can follow up with Bogdan on this and try to figure out what the status is, and who on the OpenMetrics side would be a good point of contact. I think it...
D
You know, I was not involved in the discussions, but I'm super, you know, into community and just collaborating. What we did, I mean, you did that as well with OpenTracing and OpenCensus: you were able to actually merge and not fight each other. We are doing the same with Thanos and Cortex; we are literally, slowly, you know, reusing the stuff. So it's hard, but the discussion is worth having. So yeah, I'm happy to help, and curious how we can move that forward. Yeah.
H
Yeah, having that remote write exporter, that's a really good sign, right? I mean, that, by definition, is that wire protocol, so I'd love to see...
D
What is even more concerning for backend systems like Thanos, Cortex, and M3DB, I don't know, CloudWatch, which probably someday will support, you know, or even already supports remote write, or things like that: because I assume this is like a sidecar near your application. There will be a sidecar with OpenTelemetry that just pushes metrics directly to those backends. As far as I know, this is not very scalable, but I guess for some use cases it makes sense. But anyway...
H
Maybe it would be, yeah, maybe it would make sense for Richie or someone to do an overview of OpenMetrics and what specifically it is. I was just meaning more in terms of interoperability: we use remote write as our standard internally, in our own infrastructure.
H
You know, and granted, that's not specifically OpenMetrics, to your point, but it's the closest thing we could find quickly to be able to interoperate across systems for metrics. In any event, I know Richie can't respond with a mic, but that would...
H
Yeah, I know, it's been... yeah, well, I'll let Richie cover it, maybe next time around. But yeah, there are some very long-running, long-pole things that could be coming to a head in the near term, as I understand it, around OpenMetrics: there's an actual standard with the IETF and stuff. So anyway...
D
But I want to just summarize that having nice semantics, like a metric name defined for overall CPU usage for, let's say, both Golang and Python and different kinds of apps, is golden, because then all those dashboards and others would be able to work across different, you know, backends and different applications, different languages, maybe different orchestration systems. So that's really amazing, and we have to have those. So that's a really good step. Okay, so it looks...
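That point can be sketched: if services written in different languages all emit CPU usage under one agreed name, a single dashboard query serves them all. The metric name `process.cpu.usage` and the sample data below are hypothetical, not a finalized convention.

```python
# Hypothetical samples: two services in different languages emitting the
# same agreed-upon metric name, plus one legacy service that does not.
samples = [
    {"name": "process.cpu.usage", "labels": {"service": "api-go"},    "value": 0.42},
    {"name": "process.cpu.usage", "labels": {"service": "worker-py"}, "value": 0.17},
    {"name": "custom_legacy_cpu", "labels": {"service": "old-thing"}, "value": 0.88},
]

def query(samples, name):
    """One dashboard query covers every service that follows the naming
    convention, regardless of implementation language."""
    return {s["labels"]["service"]: s["value"]
            for s in samples if s["name"] == name}
```

The legacy service falls out of the result, which is exactly the problem shared semantic conventions are meant to avoid.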
D
About the OpenMetrics specification and the metric semantics: definitely OpenMetrics doesn't, you know, define everything. But, for example, having a dot in the name of a metric...
D
...is, you know, I don't know if that's even allowed by OpenMetrics, so how can people leverage the semantics here? Those small details might be... I might be curious about that. I added an action item for maybe Richie to show us and talk about OpenMetrics more at some point as well.
F
Cool, yeah. One thing worth noting, I mean, we didn't cover the specifics of it, but one of OpenTelemetry's goals is to be format agnostic. It has something called OTLP, which is the OpenTelemetry Protocol. It is meant to be a superset of capabilities, so the collector can actually receive in Prometheus and export in whatever other format. Maybe Zipkin and Jaeger is a better example: Zipkin in, Jaeger out, and it does that by converting Zipkin to OTLP and then OTLP out to Jaeger.
F
So if OpenMetrics doesn't allow periods, it doesn't matter: as you convert out of OTLP, you would convert into the OpenMetrics convention. So there actually is a path forward. That's part of the goals of OpenTelemetry: there will be a new standard tomorrow; we need to be ready for it. How are we going to be ready for it?
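That conversion step can be sketched as a simple name sanitizer: dotted OTLP-style names are mapped onto the character set that Prometheus/OpenMetrics metric names allow. This mirrors the common underscore-replacement practice; the function below is an illustration under that assumption, not the actual exporter code.

```python
import re

def to_openmetrics_name(otlp_name: str) -> str:
    """Map a dotted OTLP-style metric name onto the character set
    allowed by Prometheus/OpenMetrics metric names
    ([a-zA-Z_:][a-zA-Z0-9_:]*), replacing everything else with '_'."""
    name = re.sub(r"[^a-zA-Z0-9_:]", "_", otlp_name)
    if re.match(r"[0-9]", name):  # names may not start with a digit
        name = "_" + name
    return name
```

So a name like `system.cpu.utilization` can live unchanged in OTLP and be rewritten only at the exporter boundary.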
D
So, okay, we have a topic from Jonah. Do you want to cover that? And maybe before this, I just want to mention, about the whole meeting we have right now: it would be amazing to, well, I think we are doing it right now, but the whole goal is to just maybe, you know, talk about status, synchronize, and advertise our working groups. But actually, it looks like the direction is to do the actual work offline.
D
So we can just summarize, talk about the details, and synchronize, yeah, because otherwise we don't scale. That's why I think the most important part is that we are not walking through the doc as we used to. So yeah, what's the plan here with the working group?
G
Very cool. So I've been chatting with Matt about the user survey that we did, trying to make some improvements on, let's say, the methodology or process, and really understand what users are doing and where they're going with their strategies and their use of open source tooling, I would say. And obviously there was the release of the landscape document, I guess a couple of months ago. So I started putting together a document with Matt.
G
I outlined some of the changes, and Matt put together sort of a template for us to start with. The idea here, I think, would be to put this into some type of document, then track it in an ongoing way through GitHub issues eventually, and try to come up with a survey that we can execute, just to get a better handle on what people are doing and where they're going.
G
The discussion we just had about overlapping standards and what people are doing is a perfect example. What are users thinking? Where do users want to go? What do they want from the community? I think these are some of the questions that could help us make better decisions as a community, and maybe force us to collaborate a little bit, versus building multiple standards all the time.
G
So that's the thinking. Please leave comments on the doc; feel free to go into suggestion mode or add anything in there. Then I'd like to sort of collaborate on the doc, as Bartek was saying, and hopefully come up with something more meaningful that we can use. I don't know if you had anything else to add on what you wanted to do in terms of driving it forward.
H
No, I think this is great. I like the idea that at the end of the doc there's a rough timeline around, you know, publishing an initial draft, as has been done now in the Google Doc, and keeping it open for four or five weeks. My only input would be about before we actually go launch the survey and start executing it.
H
I
don't
think
that's
a
big
hurdle
or
anything
like
that,
but
just
you
know
we
we
can
iterate
on
this
within
the
sig,
for
as
long
as
we
think
we
need
five
weeks
to
me
seems
reasonable.
I'm
curious
if
others
like
that
timeline,
but
at
that
point
we
can
actually
have
a
again
a
formal
proposal
that
the
toc
would
vote
up
or
down.
I
would
expect
up
and
then
you
know
we're
off
to
the
races
yeah.
H
Thank you for putting this together. I did very little; I copy-pasted some stuff at the top. But I would encourage everyone on the call, or watching the recording later: you know, this is what we can do as a SIG. So please do engage, and let's work in this doc and hammer out something that we come to by consensus.
G
And although we're all technologists, I think it's important to also highlight people and process and other kinds of things that we don't tend to think about so much, but that are really important to end users, to understand where they want to go and where they need to go. Because the vendor side is all technology, and I think it's important to understand how it fits into, you know, the culture and the organization and the processes.
D
Yeah, that makes sense. So, to motivate you all: in this document there is a huge, really nice, potential numbered list of questions that we want to ask, and that's definitely something you can contribute to: what would you like to know, and also which questions would be fair and, yeah, not exhausting.
D
I'd be curious. Just one more thing to add: how do we motivate people to actually answer, right? If you have any ideas, let us know.
H
I would also suggest that we find some time; you know, Jonah, if you want to organize it, or we can try to organize it on the CNCF calendar.
H
But, you know, again, the working group isn't ratified yet. We could do all of this async using Slack and the SIG Observability channel, but it might make sense to have some working sessions, like we did for the charter working sessions, maybe weekly for the next five weeks or something like that, where, you know, we can discuss these comments.
H
For questions, we could also put something out to the mailing lists as well: the CNCF SIG Observability mailing list, or just the end user community mailing list, which is a broader one, to say: hey, even if you haven't been engaging with SIG Observability, here's a place where you could not only help us in terms of feedback, but also go to your own connections in your own personal network and help drive this forward.
H
I
think
that
the
big
takeaway
I
had
from
cheryl
a
couple
weeks
ago
when,
when
we
were
talking
about
this
or
four
weeks
ago
now
time
flies
is
that
you
know
it's
probably
going
to
be
personal
networks
and
personal
connections
that
really
allow
us
to
horizontally
scale.
The
effort
versus
just
you
know
blank
having
emails
and
things
like
that.
So.
D
Yeah, totally. But I think what you're talking about is just, you know, one piece: one part of the work to be done is to actually create a survey and design it in a nice way, but another is execution, and I think those could be coupled as well.
B
In my case, when I started with the document, I was looking more into use cases, so that we have at least the things that I work with. We are also reshaping and doing a lot of things internally at Ericsson on how to operate and, like, expose data from the network to operators, or for Ericsson, or for other users. So based on these users, we also have different use cases: which sort of data, which sort of observability, and what comes with that.
B
So
my
idea
with
the
document
was
to
have
something
like
that,
as
I'm
not
sure
if
this
exists
in
in
the
in
cncf
or
in
those
under
the
observability
umbrella,
that
defines
a
little
bit.
What
is
well,
we
talk
about
distributors
from
observability
like
more
generic.
B
We
have
other,
like
other
sources
as
well.
For
example,
google
had
this
this
software,
not
software
the
site,
reliability,
engineering
book
where
they
talked
about
these
four
golden
signals
for
mainly
metrics.
I
think
it
was
like
more
for
the
infrastructure
for
the
the
research
the
reliability
engineer,
guys
in
the
in
google.
B
What counts as one and what doesn't. And a lot of the talks that I have been following in this space are about companies that are walking this route; they are working on their observability internally.
B
Ingestion, consolidation of tools, for example: which tools worked better together for the use cases they had, migration challenges they had moving from one to another. I guess a lot of that is maybe covered in the OpenTelemetry group already, I'm not sure, but I thought more about having something that sets the, what I would call, what we would call in the IETF... usually you don't start a working group without a use cases document.
B
In our case here, we don't have a document, but we are trying to get all this observability information that we have in other projects in CNCF, and try to consolidate it and have a starting point here for users and people working in this area. So I thought this could be a structure, somehow, for us to start working on writing something that could be a public document, a public white paper or something.
D
You know, a spectrum around observability; it could be something like this, I assume. And something in this area: there is also an issue, number 19, where at some point we also talked about an entry point, you know, a page or documentation for someone totally new to the CNCF world.
D
Let's
say
they:
they
grabbed
communities
and
now
they
need
to
migrate
or
that
they
want
some
observability
right
and
they
don't
know
where
to
start
and
what
is
that
kind
of
offering
open
source
offering
here?
So
we
were
talking
about
some
some
kind
of
index
page
like
this,
which
will
yeah
route
to
the
proper
documentations
for
different
projects
and
maybe
have
some
global
information.
D
So
we
are
definitely
yeah
looking
forward
for
some
good
ideas
how
to
solve
those,
and
also,
I
think
there
was
some
idea
to
to
to
create
some
working
group
around
yeah
writing
this
kind
of
paper.
But
we
need
to
definitely
frame
it
better
and
also.
B
I come a little bit from a different corner. We had, like, legacy things that are being moved to more cloud native, containerized environments, and we have a lot of, let's say, regulations, for example 3GPP and other stuff, that decide which sort of data has to be available to which entities. We have, for example, lawful interception; we have operators that need to have their own sets of data.
B
We have us as troubleshooters, who need different data. So I was wondering a bit: how do we put this together as a common work base, a common base for us to do something here?
E
Oh, sorry. Hey everyone, this is Karthik. I had a chat with Matt last week, a few days ago, about SIG Observability and was really interested to join. I just wanted to build upon some of the points that Simone made. I'm coming from a project called LitmusChaos, which is a chaos engineering project for Kubernetes, and the notion of observability within chaos is something that we are trying to build upon.
E
So we can get a good idea of how to go about things. I was just listening in on the earlier topics about standards and conventions around naming metrics and things like that, so just getting a sense of what all observability covers, for example, and then finding out how to go about doing it.
E
I think I'm probably a good example of the beginner end-user case in that respect, so yeah, I'm really looking forward to that kind of material.
B
If I talk about observability with my Ericsson hat on, a lot of people go back to the 3GPP legacy way of doing observability, which defines data sets that are telco-specific, and those data sets are only about system health. It's only about saying, okay, the radio is running, the core of the network is running, but there is no connection to applications, no connection to performance, no connection to any more refined things that you would like to do in the system.
B
It's just a very high-level overview, and this has to do with things like the performance cost of collecting and exposing all this data. But things are changing now, so we are going in a different direction; we still have to educate a lot of people, though.
H
I mean, maybe, you know, the best way to eat an elephant is one piece at a time. A terrible analogy, I suppose, but I heard a couple of things, all of which I think would be good to start work on. One of the things you mentioned is defining the different operators or the different roles: what types of job functions and/or people need access to observability data, enumerating those and calling them out almost in a requirements-based way.
H
As a small, concrete example: at my day-job company we work in, you know, insurance and financial tech, so we have a lot of auditing and regulatory compliance concerns and requirements around how we store data that might have that kind of information. So I think there could easily be a white paper, which we don't have today, that we could write collaboratively in the context of the SIG, that calls out all of those stakeholders or roles and what their concerns are, because that provides a starting point, a shared set of nouns or personas that we can use.
H
Then there's the second piece: what projects exist in the CNCF, and where are there gaps? There are obviously a lot of vendors and a lot of overlap; we just talked about it earlier this hour. But it's the same analysis, enumerating things not so much on the operator side but on the technology or project side: the gaps, identifying where the CNCF does not have a comprehensive end-to-end solution.
H
That's one of the primary motivations for making special interest groups in the first place, from the TOC's perspective. So I don't think one blocks the other, but I see both as really…
H
But to Bartek's point, I think if we can either firm up the scope of it or split it into maybe those two things as an idea, or maybe three, I don't know, then yeah, let's work on it.
B
Yeah, I think we should start with a skeleton at least, or have an idea of which sections, which things we want to write about. As you said, retention regulation is one aspect that might affect one industry but not another. When I talked about use cases, I was really thinking of the end user at the end. I mean, if a system that I deliver crashes, we have the operator side, and they have their own OSS on site.
B
So I certainly need some help writing that, but my idea was to get more of this skeleton going, if the group is interested in that, of course.
D
Definitely, yeah. To me it feels like the white paper idea we had, so we can just start it off and, like we did with the working group for the survey and the state of observability, just gather people who would be passionate to help and fill that in, or just create the structure for the questions that have to be asked and answered, some placeholders for such a document, some kind of overview of what we are looking for.
D
So I would say let's take, yeah, issue number 16, the white paper, and just get this started: start a document and start some working group, or rather some focus group, on this one. That will be, I guess, the next step.
H
We should have that in the next couple of weeks, though, but for now I can just make one, or you can make a Google Doc. I noticed we have a Word doc there, but I think a lot of people, unless we've checked, find it easier to collaborate on a Google Doc. But rock on. I think we're almost out of time; it's 1:01.
H
Spearheading it, that's awesome. I gave it a read earlier today.