From YouTube: 2022-08-11 meeting
Description
OpenTelemetry Prometheus WG
B
We created... it's not in a markdown document yet, but we can pretty much just copy and paste it over. I think he was saying more that the version we ultimately share would be that version, but we basically went with the Google Doc version first, so that people can comment and edit more easily than in the markdown version.
B
I guess we can go ahead and get started. Hopefully after today we can go through this doc, maybe add a little bit more to it, and make sure we're generally on the same page. If so, then we can make an actual PR with the markdown version of this. Thanks, I just realized.
B
Okay, yeah. It should be easy enough to just copy and paste over, and it's not crazy long. So everybody knows what doc we're talking about: there's a link in the meeting notes, and I'll also post a link in the chat.
B
Yeah, there's an attendees list. Feel free to add yourself to the attendees list on the meeting agenda. I guess to back up real quick: welcome back, everybody. This is a meeting to talk about adding profiling as a supported event type. Last meeting we talked about the importance of having an OTEP that we can present to the greater community, to the specification SIG.
B
You know, just to summarize for everybody in general the vision of this group, now that we've met a decent amount and have started to move towards an idea of what we ultimately hope to accomplish together.
B
So there is a doc that I just posted a link to in the chat, where there is a draft of the OTEP that we would potentially send. If anybody else has other stuff for the agenda, feel free to add it, but at least starting off I'm thinking we can just walk through the general sections of this doc and see if anybody has anything they want to add or remove.
B
You know, update or change. I think that should probably be the main goal of this meeting. Let me pause there and see if there's anything else that I'm missing that people would also want to talk about today.
B
All right, cool. So let me kill some tabs and share the doc.
D
So Ryan, just a quick question: is there some sort of timeline, I guess, for when we would aim to have the OTEP finished? I guess really what I'm asking is: say after this meeting, do we have a couple of days to scan over it, that sort of thing?
B
Yeah, absolutely. There's no externally imposed timeline; it's kind of up to us what timeline we want to operate under. The specification SIG meets every Tuesday, I think, and so ideally, if we could do it by Tuesday, that would be great, but if not we could always do the next Tuesday.
E
We can always target the next one, yeah. We can really submit it whenever; it can be out there for review. I don't think there's a formal timeline for how soon it has to be merged or approved once it's posted. That being said, we probably want to submit one that's in a shape we're happy with, so people can actually immediately start reviewing it instead of having it just sit there for a while.
A
I made a bunch of edits this morning, by the way. I suggested things that were just language based, you know, framing-up language. But I did have some comments as well, where I had some thoughts on the substance of what was being said, and there I didn't go ahead and just try to replace anything. So I don't know if I've done it the right way or not versus, like, a PR workflow, but that was a distinction that I had made.
B
Yeah, I think that's fine. We can talk about those today if you want to as we go through it; just feel free to chime in where it makes sense. So again, I kind of seeded this with some data from the OpenTelemetry logs vision, which Tigran gave to us, and which is honestly a little bit shorter than I expected.
B
Yeah, so that's nice; we don't have to go too crazy. But I also added in...
D
So, Ryan, or maybe Morgan could answer this as well: would it be possible to outline the process, say after this doc is written? Where does it go, what is it used for, and what would we say the success criteria for this document are? What would define a successful outcome, in terms of, say, detail, content, audience, and so forth?
E
I mean, the successful outcome would be the technical committee slash spec reviewers, the people who review these things, looking at it and saying: I understand what this is saying, I agree that this is useful, and I believe that this actually gives us the rapid path to the next steps towards defining the actual specification components and then implementation.
E
Yeah, right, those are the three things they're looking for. I think, given all the discussions we've had, we can easily pull that off. We know there's a lot of appetite for this in the community, and we have some smart, experienced people here. I don't think there's any doubt that we can provide a vision for OpenTelemetry to introduce profiling that is tangible and achievable, so we just need to write it.
E
Details come in the subsequent ones, right. This is basically everyone just being like: yeah, let's do profiling. And again, I don't think anyone who's coming in is going to be blindsided by this; I suppose it's possible, but yeah. This is really just the directional agreement saying "let's do profiling," and then there will be subsequent specification changes or additions that we make that describe the details.
B
Yeah, it sounded like that too when Tigran was describing it last week. He was just saying that, you know, we kind of...
B
I guess he was trying to prevent us from talking in circles without making little checkpoints of progress along the way. So while this is something that we've definitely talked about together, we haven't really looped in the technical committee, the specifications group, the people that Morgan just mentioned. And so I think this also makes marked progress for us, exactly.
A
You know, it's just an agreeable starting point. For example, even when you said "what language library should we instrument," it's also possible that profiling could come, you know, from the side.
B
Cool, yeah, any other questions about that? It's definitely good to establish what the goal of this doc is. Feel free to jump in if you have more. So what I was about to say was: I sort of seeded this with some of the... basically, this document, or at least the headings, are sort of a combination of the OpenTelemetry mission...
B
I felt like that made sense, to use this as somewhat of a guide to make sure that what we're doing aligns with these goals, and so I kind of just copied some of those headings and pasted them in there. And let me see if I can do a little split-screen action. So yeah, starting with how it aligns with the OpenTelemetry vision, which says, you know, the goal is to have effective observability.
B
You know: high-quality telemetry, performance, consistent instrumentation, again all stuff that we've talked about collectively here. I won't necessarily go through the meat of each section, but, you know, this telemetry should be easy.
B
Profiling should be easy, universal, vendor-neutral. So it's sort of matching up some of these general concepts, and then also cross-referencing that with the logs vision, which does something similar but slightly less explicitly. So I guess that's the...
B
I think that's the biggest difference between what Tigran recommended and what we ultimately have in the doc. But yeah, starting off with just the goals stuff: the current state of profilers, basically explaining what we came up with in that goals doc that we created a while back, which I just moved over here. You know, some of the issues are that languages can be different, language runtimes and profiler types can be different, data types can be different. All of that can affect the actual...
B
All that can basically differ between different profilers, and as a result each vendor has a different way of doing things, as we've found from the various custom formats that we've talked about. From the stability standpoint, things can definitely change without any sort of unified reasoning or direction: Java could decide to do something, or JFR could decide to do something, pprof could change.
B
So there's a lot less stability in that sense, and less consistency. And then on the performance side, it can also vary greatly depending on what profiler you're using and what format it's outputting. So yeah, I don't know if there's anything anybody would want to add there on the issues, or I guess not necessarily issues but the misalignment between the current state of profilers and what OpenTelemetry ultimately aims to provide, that is not listed here.
D
Ryan, can I just jump in with a quick question on the performance one there? I get the feeling that the divergence in performance between different profilers is really more to do with how they operate internally, like how they actually do the profiling, rather than their output format. And I'm wondering, by stating this here as a problem... so it is a problem with profilers, the divergence.
D
Consider what we're trying to address: is this performance variance part of what we're trying to do? Because if so, the problem is going to be a lot bigger than the output format.
B
That's a good question; I'm curious what others think. From my perspective, I think it's something that could be a goal, but it does make things much more complicated, because obviously it is different for different languages, that kind of thing.
B
I think, and we kind of talked about this last week as well, that the format itself is a little bit easier to figure out compared to how the data is collected. It might be more in the weeds than what this doc gets into, but I would say, from an implementation perspective, the format is easier to start with, and then we figure out...
B
...you know, how hard it is to get various agents to produce, in a performant way, whatever format we come up with. But that's kind of just my opinion; I'm not sure. I'm curious. Matt, you have your hand up.
A
Yeah, to the question of performance and whatnot: I personally tend to think of it as different profiling stacks or tools taking different approaches at times, with different trade-offs that have been made. And in many cases the performance hit, the overhead of the collection itself and/or of whatever artifacts are emitted...
A
A lot of that can be driven by the storage subsystem and/or the availability of surplus CPU cycles, right? So it's not always fixed, is what I mean. The same profiler in a slightly different context, if it just gets below a threshold or whatever... Maybe an implementation uses floating point where it shouldn't, but maybe it did, right, and so maybe you're nailing the FPU, which then has impacts broadly elsewhere.
A
So I think different implementations should be able to make different choices for different scenarios, everything from Raspberry Pis to hyper-converged, crazy multi-core NUMA stuff. A format that we all come together around might, for example, include an optional "here's what the overhead of the collected data that this report summarizes was." But I don't think at this point in the process we should really say it's our goal to fix the performance problem, because what that means is so multivariate...
A
...you know, across different products or solutions. So, okay, what do others think? Is that fair?
C
I think Matt touched on a lot of what I wanted to say while I had my hand up, but my two cents is: I don't think it makes sense to try and normalize or standardize performance across different language agents or across different profilers. And as Matt suggested, even within a given profiler, in different configurations or targeting different applications, I think it's going to be very challenging to try and normalize that. So I'm in favor of maybe getting rid of that section.
B
Fine by me. Let me go to... where is it? Well, any objections to removing this section? Speak now or forever hold your peace. All right, cool, perfect. Anything else to add there? I think all that is...
A
Just seeing the last point where it says "they lack stability": I think maybe it's "they lack consistency." If it's stability across the ecosystem, because there's variation, that I get; but to say that current profilers lack stability kind of says, like, they crash, they don't work. It could be read that way.
B
Yeah, that's fair. That's probably a better way to put that. Cool.
B
Oh yeah, there we go, all right. So, moving on to the next section: profiling being compatible with other signals. This is definitely from both the issue that someone created around adding profiling as a supported event type, which I should probably add back up to the top, and also from the conversation at KubeCon, plus some of what we've discussed here, and just generally one of the goals of OpenTelemetry.
B
All of those collectively have pushed forward this idea that all observability data, profiles included, is stronger when it's able to link to other signals in some way, if possible. And so yeah, here's also something about making profiling compatible with other signals. Sean, do you want to add anything?
D
Just a quick question on the second point, the part that I've highlighted in green there. We say profiling data should be transferred as efficiently as possible without losing data. I don't suppose whoever added that could elaborate on the "without losing data" part? Because I think in our case at least, and I get the feeling this is probably the case for any kind of large-scale profiler that relies on, shall we say, statistical correctness over time...
D
...it is actually acceptable to drop data now and then, for example if a connection goes down, or things like that. So I'm just wondering: in this specific instance where we say "without losing data," are we saying that in the context of, like, lossy compression, or in the context of a profiler that is going to give strong guarantees that all profiles are always delivered?
B
Yeah, I think that's probably a better way to put it. I think that was me. I was thinking about it more in the sense of, like, aggregating data, but I'm not tied to it.
D
I guess Matt has a suggestion there which I think is maybe clarifying as well: the data model should be lossless, with an intentional bias for, say, marshalling, et cetera. At least with that it's clear that we're referring to the encoding mechanism and not to the properties of the system, shall we say.
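The distinction being drawn here, a lossless data model and encoding even when collection itself is sampled and may drop data, can be sketched in a few lines. Everything below (the `StackSample` type, its fields, JSON as the wire encoding) is hypothetical and not from any actual OTel draft; it only illustrates that the encode/decode step itself round-trips exactly:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class StackSample:
    # Hypothetical fields, for illustration only.
    stack: tuple          # function names, leaf-first
    value: int            # e.g. sample count or nanoseconds
    timestamp_ns: int

def encode(samples):
    # The encoding is lossless: every field survives verbatim.
    return json.dumps([asdict(s) for s in samples])

def decode(payload):
    return [StackSample(tuple(d["stack"]), d["value"], d["timestamp_ns"])
            for d in json.loads(payload)]

samples = [
    StackSample(("parse", "handle", "main"), 3, 1_000),
    StackSample(("compress", "handle", "main"), 1, 2_000),
]
assert decode(encode(samples)) == samples  # model round-trips exactly
```

The profiler is still free to downsample or drop batches before this step; the claim is only about the marshalling, not about delivery guarantees.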
B
We will... I'll leave this here. I don't know, how do you strike through? Command-something?
B
There we go. We can, I guess, maybe offline, clarify this based off of your comments here, Matt, and feel free to just update it.
A
Yeah, that one was one of the things where I actually would love to hear what others think. As I've been thinking about this more over the last week, I think our goal at this initial point is really twofold: all the stuff we said, like "we should do profiling," and maybe taking a stab at an ontology, a description of what it is that we're calling profiling. Is it this?
A
Is it that? What are the columns, you know, and what do they mean, without going into crazy formalization. But in the current language there's sort of a blending of what it is we're capturing and an inference of a particular scenario, right, which isn't wrong, because it's the starting point.
B
Maybe Morgan can chime in there. I do remember from last week, I believe, that the next, more specific version of this doc is the actual... I think logs has one as well, one that has the actual fields that we would have. So we can kind of tease it in here, but I do believe that's the next document we come up with after this one, the more specific...
B
...what to include. Does that sound close, Morgan?
B
Yeah, and so I believe that's sort of the... let me see if I can find your...
A
...between a data model and then various implementations, let's say. So, for example, that third bullet says "within acceptable bounds of overhead for the conversion." I think we should just remove that, because it implies a judgment about what's acceptable: for you, versus on my SoC running at, you know, 100 megahertz. So just make it shorter, plus it's more...
B
Cool, makes sense.
B
This one's pretty straightforward: mapping recommendations for commonly used formats, which was kind of copied from the logs one. Whatever format we ultimately come up with, we should obviously try and make it work with the popular existing formats that we've already discussed here, particularly the ones listed. Anything else?
C
I think it would be cool to add something to that data model section that says we will also include the ability to incorporate other signal types. There's nothing there to suggest that we're going to be linking call stacks to traces, for example. I don't know how you'd phrase it, but I think it'd be cool if there was something in there saying that the model also includes relationships to other OpenTelemetry signal types.
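The kind of cross-signal relationship being suggested, call stacks linked to traces, could look something like the following minimal sketch. The shape and field names (`trace_id`, `span_id` attached to each sample) are invented for illustration and are not from any OTel draft:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProfileSample:
    # Hypothetical shape: a stack plus an optional link to the span
    # that was active when the sample was taken.
    stack: list
    value: int
    trace_id: Optional[str] = None  # hex trace id, absent if no span was active
    span_id: Optional[str] = None

def samples_for_trace(samples, trace_id):
    """All stacks captured while the given trace was active."""
    return [s for s in samples if s.trace_id == trace_id]

profile = [
    ProfileSample(["db_query", "handler"], 5, trace_id="abc123", span_id="01"),
    ProfileSample(["gc"], 2),  # no span was active: the link is simply absent
]
assert len(samples_for_trace(profile, "abc123")) == 1
```

The point is only that the link is optional per sample, so profiles remain useful on their own while still enabling trace-scoped flame graphs when the context is present.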
B
Yeah, no, I agree. I mean, we can reorder this, but like...
B
A good point, yeah, that definitely should be in there. Supporting legacy formats, yeah. Okay, so this is the part that does not have much... it might not be worth going through. This is the part where it starts to get a little into what we were saying earlier.
B
We can honestly just remove this section, possibly. I'm curious what other people think. Given the conversation we've had so far, it seemed like it might be worth mentioning either the agent or eBPF, or I guess the things that we've talked about that were not included in here, and that again might be out of scope for this document, like something with symbols or symbolization.
B
The agent and eBPF have been a big topic as well, and so I didn't know if there was anything high-level enough and worth mentioning here, since it does seem to be a common theme in this group. Sean?
D
I guess if we were to start discussing that, the actual details of building an agent, we're kind of into the "how" rather than the "what," and that's going to be quite diverse across different companies and implementations. I'm unsure; it seems like a huge Pandora's box of things to get into. It's the same point as before, basically: do we just want to enable interoperability, or do we want to start prescribing?
D
You know, "it's not compliant unless it uses, say, one percent of CPU over a certain amount of time," or something like that. I've never been involved in any of the, shall we say, OpenTelemetry things before, so I'm not sure if that's normal or acceptable.
B
Yeah, I guess another way to think about that is: say we at least come up with the format, which again seems to be much easier to come to a reasonable agreement on.
B
It does seem like maybe the issue is not necessarily the agent, but it sounded like last week some people's hesitation was just that the agent that needs to produce the data in a certain format needs to be able to get to that format in a reasonable way. I don't know if that makes sense. Like, if we come up with this new profiling format, then agents will need to do some sort of work to produce that format, I guess.
B
Yeah, I guess that's fair. And maybe that does get too far into the implementation details; then this could get...
A
...into... I mean, I could see a reference implementation of something that replays or simulates the discovery of things, like a reference implementation of a receiver, kind of, but for a different profiler. I don't think we want to have to say we're going to, at this point, make that intermediate format. Instead, I think it's a more scalable model to say: if you have a profiler and you want to jack it in, to wire it into an OTel solution...
A
...then it's an exercise for those integrating their own profiling stacks, and maybe we can provide reference implementations that get you there. But when I think about an agent, I kind of agree with what others said: they can vary so greatly. I could see a reference implementation being useful for, like, CI, or for testing the format itself.
B
Fair enough. It sounds like we are generally on the same page there. Not saying that we won't talk about that kind of stuff in the future, but for this document it seems as though that is out of scope. Again, if anybody disagrees, speak now or forever hold your peace.
B
Well, not forever hold your peace, but for now hold your peace. Cool, all right. And then the last thing we have on here: the logs one also made a brief mention of cloud-native stuff.
B
So I'm basically adding that in here too, for the people who will be reviewing this, so that it's not only in line with OTel's mission but also with the CNCF best practices and what they're trying to do as well. Partly because that will fit into the way...
B
...a lot of companies and end users of OTel are moving, or already have, and partly because it also helps with the potential for us to get more buy-in from not just the OTel community but also the CNCF, which are, you know, very overlapping, but I just felt it was worth mentioning as well.
C
Cool. Hey Ryan, something I noticed in logs is that down there at the bottom they have some logs use cases. Does it make sense for us to maybe try and summarize, like, three or four common use cases for profiling?
B
Yeah, I like it.
B
Yeah, I think that's great, and I think people in this group will have good stuff to add there. I don't know if we necessarily... maybe we can do that offline, yeah.
B
Take a stab at that, yeah, that would be great. Everybody here presumably has some ideas of use cases for profiling, so feel free to add those, plus anything else, any other holes or things that we're missing from this. I tacked...
A
...on the last one, something that we might want to address. I've never done an OTEP before, really, but the CNCF has some core values and principles for all communities and such, and we might want to tack in a little statement that's not necessarily technical, but, you know, something like that.
D
Ryan, just before we move on, one question. It's on the performance side again, and I'm thinking about down the line. I guess what we're hoping for is a world where all back-ends consume OpenTelemetry input and all agents produce it.
D
You can hook them up whatever way you want. When this was done for APM and traces and things like that, was there anything in the document where they spoke about, shall we say, the computational overhead of converting from whatever internal representation those agents produced to the standardized representation? Was that taken into account at all? Because when we're talking about performance, that does seem to be one of the big ones, and then obviously network bandwidth, things like that.
B
Yeah, I do not know the answer to that. I can definitely ask Tigran, or someone who's more involved in those processes, and have them message in the Slack.
C
Yeah, I think Tigran or Morgan would be better; Tigran will probably have an answer for you immediately. I'm not aware of any concrete numbers or concrete sections that speak exactly to performance qualifications, because we all know it's super duper hard to measure, and it varies so wildly with application environments. So I think if there's any mention of it in the spec, it's probably kind of vague or high level, like "yeah, we're gonna try and be efficient."
D
If a discussion comes up between, say, two points of view, and you have at least stated a preference for one approach previously, like one high-level principle, you can perhaps refer back to that over time, and it kind of guides decisions over time, rather than stating, you know, say, two percent CPU or whatever.
A
Like, if I'm going to make a back end, or something that plugs into the collector to do some sort of inline processing, you know, it shouldn't be a poke in the eye, like: okay, to do that, now you have to do all this crazy crap because we made this data model. An example might be: our data model is, you know, an OLAP-...
A
...style relational database, totally optimized to just capture all data, or a columnar store or something like that, where it's very opinionated, right, and now because of that I have to do all this pain and suffering to do anything. You know, a guiding principle would be a positive version of that.
C
Yeah, and I think it also speaks to the always-on nature of profilers. The way the industry's gone: historically, you turn profilers on for a while and then turn them back off, and they make your app run so slowly that you just don't do that very often. So the industry is switching to this always-on model, or at least people hope they can switch to an always-on model, and that's often governed by this overhead, right.
A
Nice. I think what's available to us is about to... it already is, but it's about to have a heyday, right. We're going to start seeing SoCs with really high-speed I/O channels just plumbed in as local storage. It looks local, but maybe it goes off to a distributed array, or, you know. Already many of the data structures, like LSM trees, can't keep up with NVMe drives; those pervasively used data structures exist so that you can reduce the amount of writes, right.
A
Oh, I'm sorry, yeah. Well, I spent the first dozen or so years of my career with Microsoft, about the first two-thirds of that working on Windows CE. I worked on compilers and debuggers, and I wrote some profilers for Arm and other things, but that was like 20 years ago. In more recent years I've worked at VMware, Red Hat, a little startup, and Dell doing storage stuff, everything across infrastructure as a service and platform as a service, with a background in storage and distributed systems.
B
Yeah, you're definitely in the weeds there now.
B
Nice. Well, honestly, the main goal of this meeting was to make sure we were somewhat on the same page here, and we can start from there. We don't necessarily have to finish this by Tuesday, but I think I will try and go to the specification SIG, I think it's called; it meets, I think, at this time on Tuesdays. And then I was just gonna...
B
...you know, let them know that we have a draft that we're working on, and keep them posted on the state of it as we finish it. And then maybe by the following Tuesday we can actually create the PR and start making it more formalized in that sense. I'll add that to the notes.
B
Formalized version the following week.
B
Yeah, I think that should be good. We had also started talking about, again, the easier part: benchmarking the format, or coming up with some sort of method by which we could benchmark a format, an arbitrary format.
B
Whatever format we ultimately come up with. I think the thing that Morgan and Tigran also mentioned last week was that we should still try; that's something we'll have to do eventually anyway, and so even though it's not necessarily the most important or the most difficult part, we should work on it in parallel.
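One minimal way to start on that benchmarking, measuring encoded size and encode time of a candidate encoding over synthetic stacks, is sketched below. The synthetic data, the JSON baseline, and gzip as the comparison are all placeholder choices, not anything the group has agreed on:

```python
import gzip
import json
import time

def synth_samples(n):
    # Synthetic, repetitive stacks, roughly mimicking real profile data.
    return [{"stack": ["main", "handle", f"fn_{i % 20}"], "value": i % 7}
            for i in range(n)]

def benchmark(encode, samples):
    """Return encoded size in bytes and wall-clock encode time."""
    start = time.perf_counter()
    payload = encode(samples)
    return {"bytes": len(payload),
            "encode_s": time.perf_counter() - start}

samples = synth_samples(10_000)
raw = benchmark(lambda s: json.dumps(s).encode(), samples)
packed = benchmark(lambda s: gzip.compress(json.dumps(s).encode()), samples)
assert packed["bytes"] < raw["bytes"]  # repetitive stacks compress well
```

Swapping in real candidate encoders (pprof, JFR export, a new format) for the lambdas would give the "actual numbers" mentioned below, on the same corpus.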
B
So again, if you get the chance, check out that doc, and then maybe next time we meet we can ideally have made progress there and start coming up with some actual numbers. I think that also adds to the momentum; it's easy progress to make, and it'll help get the ball rolling there. And then we can...
B
...talk about the more detailed stuff that we mentioned today. Actually, I'll remove that for now. I added a question on here to talk to Tigran about, Sean's question, which we'll ask in the channel after this. Besides that, honestly, we got through everything pretty quickly here. If there's anything else anybody wants to talk about before we get off... otherwise we can call it a little bit early.
B
All right, nice. I think we made some good progress here, and I'm excited to get the ball even more officially rolling in the coming weeks. So thanks, everybody. Please add to the OTEP, especially those use cases, and anything else if you haven't had a chance to actually read through the whole thing in detail yet. Other than that, see you all next time. Bye everyone, peace.